
ZAMCOL Secondary Diploma: COMPUTER STUDIES Module 1

TABLE OF CONTENTS
MODULE 1: CONCEPTS OF INFORMATION AND COMMUNICATIONS TECHNOLOGY

UNIT 1. THE COMPUTER AND HOW IT WORKS

Lesson 1. Definition of the Computer, Computer History and Computer Systems
 Different Definitions of the Computer
 Computer History
 Computer Systems
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 2. Computer Hardware (Components that make up the Computer)
 Definition of Computer Hardware
 The Central Processing Unit
 Input Devices
 Output Devices
 Other Peripheral Devices
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 3. Computer Software (Computer Programs)
 Definition of Computer Software
 Operating Systems Software
 Application Software
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

UNIT 2. THE INFORMATION PROCESSING CYCLE

Lesson 1. The Input and Output Stages
 Definition of the Information Processing Cycle
 The Input Stage (Definition and Components Used)
 The Output Stage (Definition and Components Used)
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 2. The Processing Stage
 Definition
 Components Used in this Stage
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 3. The Storage Stage
 Definition
 Components Used in this Stage
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

UNIT 3. COMPUTER NETWORKS AND THE INTERNET

Lesson 1. Computer Networks
 Definition of Computer Networks
 Different Types of Computer Networks
 Use of Computer Networks
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 2. The World Wide Web and the Internet
 Definition of the World Wide Web and the Internet
 History of the World Wide Web and the Internet
 Use of the World Wide Web and the Internet
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 3. Computer Networks and Data Security and Privacy
 Business Issues
 Software Used in Securing Networks
 Securing Data
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

UNIT 4. VARIOUS USES OF THE COMPUTER

Lesson 1. Use of Computers in Education
 Use of Computers in Primary Schools
 Use of Computers in Secondary Schools
 Use of Computers in Colleges and Universities
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 2. Use of Computers in Business
 Use of Computers in Banking
 Use of Computers in Trade and Retailing
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 3. Electronic Commerce
 Definition of Electronic Commerce
 Benefits of E-Commerce
 Security
 Social Implications
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

UNIT 5. COMPUTER SAFETY AND COMFORT

Lesson 1. Setting up the Computer Lab and Rules
 Considerations for the Computer Laboratory
 Computer Furniture
 Lighting
 Security
 Computer Laboratory Rules
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 2. Computer Viruses
 Definition of Computer Viruses
 Types of Computer Viruses
 Signs and Damages Caused by Computer Viruses
 Protecting your Computers against Computer Viruses
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

Lesson 3. Health Concerns
 Considerations by Computer Users
 Correct Use of Tools
 Electrostatic Discharge (ESD) and Electrical Safety
 Working Practices
 Activity
 Summary
 Self Check Exercise
 Suggested Answers to the Activity and Self Check Exercise

UNIT 1. THE COMPUTER AND HOW IT WORKS

Lesson 1: Different Definitions of the Computer, Computer History and Computer Systems
INTRODUCTION
The computer has become a household name because we meet it in many areas of life. Children meet
it in their game consoles, it is built into kitchen appliances such as the microwave, the medical doctor
uses it in the scanner, and in the workplace we meet it as the personal computer.
In this lesson we will concern ourselves with the personal computer: what it is and what makes up the
computer system.
LEARNING OUTCOMES.
At the end of this lesson, you should be able to:
 state the definition of the computer
 outline how computer technology unfolded, with particular emphasis on the "generations"
 state how people and events affected the development of computers
 narrate the story of personal computer development
 outline the various classifications of computers
 distinguish data from information.
Different Definitions of the Computer

A computer is a device or machine for making calculations or controlling operations that are expressible
in numerical or logical terms. Computers are constructed from components that perform simple, well-
defined functions. The complex interactions of these components endow computers with the ability to
process information. If correctly configured (usually by programming), a computer can be made to
represent some aspect of a problem or part of a system. If a computer configured in this way is given
appropriate input data, then it can automatically solve the problem or predict the behaviour of the
system.
The discipline which studies the theory, design, and application of computers is called computer
science.
General Principles
Computers can work through the movement of mechanical parts, electrons, photons, quantum
particles, or any other reasonably well understood physical phenomenon. Although computers have
been built out of many different technologies, nearly all computers today are electronic.


Computers may directly model the problem being solved, in the sense that the problem is mapped as
closely as possible onto the physical phenomena being exploited. For example, electron flows might be
used to model the flow of water in a dam. Such analog computers were common in the 1960s but are
now rare. In most computers today, the problem is translated into mathematical terms and then
reduced to simple Boolean algebra. Electronic circuits are then used to represent Boolean operations.
Since almost all of mathematics can be reduced to Boolean operations, a sufficiently fast electronic
computer is capable of attacking the majority of mathematical problems, and much, much more. This
basic idea, which made modern digital computers possible, was formally identified and explored by
Claude E. Shannon.
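
To make Shannon's insight concrete, here is a minimal illustrative sketch in Python (ours, not part of the original module) of how arithmetic reduces to Boolean operations: a "half adder" that adds two binary digits using only XOR and AND.

# Illustrative sketch: binary addition built from Boolean operations only.
# A half adder produces a sum bit (XOR) and a carry bit (AND).
def half_adder(a, b):
    sum_bit = a ^ b  # XOR: 1 when exactly one input is 1
    carry = a & b    # AND: 1 only when both inputs are 1
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, "+", b, "-> carry", c, "sum", s)

Chaining such adders bit by bit (a "full adder" also accepts a carry in) is enough to add numbers of any size, which is why Boolean circuits suffice for arithmetic.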
Computers cannot solve all mathematical problems. Alan Turing identified which problems could and
could not be solved by computers, and in doing so founded theoretical computer science.
Etymology

The meaning of the word computer has evolved along with the devices that bear the name, but has
always lagged behind the capabilities of contemporary machines. The word was originally used to
describe a person who performed arithmetic calculations, and this usage is still valid (although it is
becoming quite rare in the United States). The OED2 lists 1897 as the first year the word was used to
refer to a mechanical calculating device. By 1946 several qualifiers had been introduced by the OED2
to differentiate between the different types of machine. These qualifiers included analogue, digital and
electronic. However, from the context of the citation, it is obvious these terms were in use prior to 1946.
This complex etymology is indicative of the rapid development of computing technology.

Figure 1. Computer System and Peripherals


Computer History

The history of computers starts about 2,000 years ago with the birth of the abacus, a wooden rack
holding two horizontal wires with beads strung on them. When these beads are moved around,
according to programming rules memorized by the user, all regular arithmetic problems can be done.
Another important invention around the same time was the astrolabe, used for navigation. Blaise
Pascal is usually credited with building the first digital computer in 1642. It added numbers entered with
dials and was made to help his father, a tax collector. In 1671, Gottfried Wilhelm von Leibniz designed a
computer that was built in 1694. It could add and, after changing some things around, multiply. Leibniz
invented a special stepped gear mechanism for introducing the addend digits, and this is still being
used. The prototypes made by Pascal and Leibniz were not widely used and were considered oddities
until a little more than a century later, when Thomas of Colmar (a.k.a. Charles Xavier Thomas) created
the first successful mechanical calculator that could add, subtract, multiply, and divide. Many inventors
then produced improved desktop calculators, so that by about 1890 the range of improvements
included:

 Accumulation of partial results
 Storage and automatic re-entry of past results (a memory function)
 Printing of the results

Each of these required manual installation. These improvements were mainly made for commercial
users, and not for the needs of science.
Babbage

Figure 2. Charles Babbage


While Thomas of Colmar was developing the desktop calculator, a series of very interesting
developments in computers was started in Cambridge, England, by Charles Babbage (after whom the
computer store "Babbage's", now GameStop, was named), a mathematics professor. In 1812, Babbage
realized that many long calculations, especially those needed to make mathematical tables, were really
a series of predictable actions that were constantly repeated. From this he suspected that it should be
possible to do these automatically. He began to design an automatic mechanical calculating machine,
which he called a difference engine. By 1822, he had a working model to demonstrate. With
financial help from the British government, Babbage started fabrication of a difference engine in 1823. It
was intended to be steam powered and fully automatic, including the printing of the resulting tables,
and commanded by a fixed instruction program. The difference engine, although having limited
adaptability and applicability, was really a great advance. Babbage continued to work on it for the next
10 years, but in 1833 he lost interest because he thought he had a better idea: the construction of
what would now be called a general-purpose, fully program-controlled, automatic mechanical digital
computer. Babbage called this idea an Analytical Engine. The ideas of this design showed a lot of
foresight, although this couldn't be appreciated until a full century later. The plans for this engine
called for a decimal computer operating on numbers of 50 decimal digits (or words) and
having a storage capacity (memory) of 1,000 such numbers. The built-in operations were supposed to
include everything that a modern general-purpose computer would need, even the all-important
conditional control transfer capability that would allow commands to be executed in any order, not just
the order in which they were programmed. The Analytical Engine was to use punched cards
(similar to those used in a Jacquard loom), which would be read into the machine from several different
reading stations. The machine was supposed to operate automatically, by steam power, and require
only one attendant. Babbage's computers were never finished. Various reasons are given for his
failure; the most commonly cited is the lack of precision machining techniques at the time. Another
speculation is that Babbage was working on the solution of a problem that few people in 1840 really
needed to solve. After Babbage, there was a temporary loss of interest in automatic digital computers.
Between 1850 and 1900 great advances were made in mathematical physics, and it came to be known
that most observable dynamic phenomena can be described by differential equations (which meant
that most events occurring in nature can be measured or described in one equation or another), so that
easy means for their calculation would be helpful. Moreover, from a practical view, the availability of
steam power caused manufacturing (boilers), transportation (steam engines and boats), and commerce
to prosper and led to a period of many engineering achievements. The designing of railroads and the
making of steamships, textile mills, and bridges required differential calculus to determine such things
as:
 centre of gravity
 centre of buoyancy
 moment of inertia
 stress distributions

Even the assessment of the power output of a steam engine needed mathematical integration. A strong
need thus developed for a machine that could rapidly perform many repetitive calculations.


Use of Punched Cards by Hollerith

Figure 3. Herman Hollerith

A step towards automated computing was the development of punched cards, which were first
successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked
for the U.S. Census Bureau. They developed devices that could read the information that had been
punched into the cards automatically, without human help. Because of this, reading errors were
reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be
used as easily accessible memory of almost unlimited size. Furthermore, different problems could be
stored on different stacks of cards and accessed when needed. These advantages were seen by
commercial companies and soon led to the development of improved punch-card machines
created by International Business Machines (IBM), Remington (yes, the same people that make
shavers), Burroughs, and other corporations. These computers used electromechanical devices in
which electrical power provided mechanical motion, like turning the wheels of an adding machine.
Such systems included features to:

 feed in a specified number of cards automatically
 add, multiply, and sort
 feed out cards with punched results

As compared to today's machines, these computers were slow, usually processing 50 to 220 cards per
minute, each card holding about 80 decimal numbers (characters). At the time, however, punched cards
were a huge step forward. They provided a means of I/O, and memory storage on a huge scale. For
more than 50 years after their first use, punched-card machines did most of the world's business
computing and a considerable amount of the computing work in science.

Electronic Digital Computers


Figure 4. John W. Mauchly

The start of World War II produced a large need for computer capacity, especially for the military. New
weapons were made for which trajectory tables and other essential data were needed. In 1942, John P.
Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering at the
University of Pennsylvania decided to build a high-speed electronic computer to do the job. This
machine became known as ENIAC (Electronic Numerical Integrator And Computer). The size of ENIAC's
numerical "word" was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per
second, by finding the value of each product from a multiplication table stored in its memory. ENIAC
was therefore about 1,000 times faster than the previous generation of relay computers. ENIAC used
18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000
watts of electrical power. It had punched card I/O, 1 multiplier, 1 divider/square rooter, and 20 adders
using decimal ring counters, which served as adders and also as quick-access (0.0002 second)
read-write register storage. The executable instructions making up a program were embodied in the
separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information.


Figure 5. The ENIAC Computer

These connections had to be redone after each computation, together with presetting function tables
and switches. This "wire your own" technique was inconvenient (for obvious reasons), and only with
some latitude could ENIAC be considered programmable. It was, however, efficient in handling the
particular programs for which it had been designed. ENIAC is commonly accepted as the first
successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A
controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the
claim being made that another physicist, John V. Atanasoff, had already used basically the same
ideas in a simpler vacuum-tube device he had built in the 1930s while at Iowa State College. In 1973
the courts found in favor of the company using the Atanasoff claim.

The Modern Stored Program EDC

Figure 6. John Von Neumann

Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an
abstract study of computation that showed that a computer should have a very simple, fixed physical
structure, and yet be able to execute any kind of computation by means of a properly programmed
control without the need for any change in the unit itself. Von Neumann contributed a new awareness of
how practical, yet fast, computers should be organized and built. These ideas, usually referred to as the
stored-program technique, became essential for future generations of high-speed digital computers
and were universally adopted.

The stored-program technique involves many features of computer design and function besides the
one that it is named after. In combination, these features make very-high-speed operation attainable. A
glimpse may be provided by considering what 1,000 operations per second means. If each instruction
in a job program were used once in consecutive order, no human programmer could generate enough
instructions to keep the computer busy. Arrangements must be made, therefore, for
parts of the job program (called subroutines) to be used repeatedly in a manner that depends on the
way the computation goes. Also, it would clearly be helpful if instructions could be changed if needed
during a computation to make them behave differently.

Von Neumann met these two needs by making a special type of machine instruction, called a
conditional control transfer, which allowed the program sequence to be stopped and started again at
any point, and by storing all instruction programs together with data in the same memory unit, so that,
when needed, instructions could be arithmetically changed in the same way as data. As a result of
these techniques, computing and programming became much faster, more flexible, and more efficient.
Regularly used subroutines did not have to be reprogrammed for each new program, but
could be kept in "libraries" and read into memory only when needed. Thus, much of a given program
could be assembled from the subroutine library.
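
As a rough illustration of the stored-program idea, the following Python sketch (ours, not from the module; the instruction names are invented) keeps instructions and data in the same memory and uses a conditional control transfer to loop:

# Illustrative sketch of a stored-program machine (all names invented).
# Instructions and data live in the same memory; JUMP_IF_POSITIVE is the
# conditional control transfer that lets the program loop.
def run(memory):
    pc = 0    # program counter
    acc = 0   # accumulator
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc = acc + memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JUMP_IF_POSITIVE":
            if acc > 0:
                pc = arg
        elif op == "HALT":
            return memory

program = [
    ("LOAD", 8), ("ADD", 9), ("STORE", 8),    # total = total + counter
    ("LOAD", 9), ("ADD", 10), ("STORE", 9),   # counter = counter - 1
    ("JUMP_IF_POSITIVE", 0),                  # repeat while counter > 0
    ("HALT", None),
    0, 3, -1,                                 # data: total, counter, constant -1
]
print(run(program)[8])  # 3 + 2 + 1 = 6

Because the program and its data share one memory, an instruction could itself be modified like any other stored number, exactly the flexibility described above.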

The all-purpose computer memory became the assembly place in which all parts of a long
computation were kept, worked on piece by piece, and put together to form the final results. The
computer control survived only as an "errand runner" for the overall process. As soon as the advantage
of these techniques became clear, they became standard practice.

Figure 7. The EDVAC Computer
Figure 8. The UNIVAC Computer

The first generation of modern programmed electronic computers to take advantage of these
improvements was built in 1947. This group included computers using random-access memory (RAM),
which is a memory designed to give almost constant access to any particular piece of information.
These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity and
access times of 0.5 microseconds (0.5 × 10⁻⁶ seconds). Some of them could perform multiplications in
2 to 4 microseconds.

Physically, they were much smaller than ENIAC. Some were about the size of a grand piano and used
only 2,500 electron tubes, a lot fewer than required by the earlier ENIAC. The first-generation
stored-program computers needed a lot of maintenance, reached probably about 70 to 80% reliability
of operation (ROO) and were used for 8 to 12 years. They were usually programmed in machine
language (ML), although by the mid-1950s progress had been made in several aspects of advanced
programming. This group of computers included EDVAC and UNIVAC, the first commercially available
computers (Figures 7 and 8).
Advances in the 1950s

Early in the 1950s two important engineering discoveries changed the image of the electronic computer
field from one of fast but unreliable hardware to an image of relatively high reliability and even more
capability. These discoveries were the magnetic core memory and the transistor circuit element.
These technical discoveries quickly found their way into new models of digital computers. RAM
capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s,
with access times of 2 to 3 milliseconds. These machines were very expensive to purchase or
even to rent and were particularly expensive to operate because of the cost of expanding programming.
Such computers were mostly found in large computer centers operated by industry, government, and
private laboratories, staffed with many programmers and support personnel.

This situation led to modes of operation enabling the sharing of the high potential available. One such
mode is batch processing, in which problems are prepared and then held ready for computation on a
relatively cheap storage medium. Magnetic drums, magnetic-disk packs, or magnetic tapes were
usually used. When the computer finishes with a problem, it "dumps" the whole problem (program and
results) on one of these peripheral storage units and starts on a new problem. Another mode for fast,
powerful machines is called time-sharing. In time-sharing, the computer processes many jobs in such
rapid succession that each job runs as if the other jobs did not exist, thus keeping each "customer"
satisfied. Such operating modes need elaborate executive programs to attend to the administration of
the various tasks.
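
A minimal sketch of the time-sharing idea in Python (ours, for illustration; the job names and lengths are invented): the machine gives each job a short time slice in rotation, so every user's job appears to run continuously.

# Illustrative sketch of time-sharing with a round-robin scheduler.
from collections import deque

def time_share(jobs, quantum=2):
    """jobs: mapping of job name -> units of work remaining."""
    queue = deque(jobs.items())
    while queue:
        name, remaining = queue.popleft()
        slice_done = min(quantum, remaining)
        print("running", name, "for", slice_done, "unit(s)")
        if remaining > slice_done:
            queue.append((name, remaining - slice_done))  # back of the line

time_share({"payroll": 5, "census": 3, "billing": 4})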

Advances in the 1960s

In the 1960s, efforts to design and develop the fastest possible computer with the greatest capacity
reached a turning point with the LARC machine, built for the Livermore Radiation Laboratories of the
University of California by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC
had a base memory of 98,000 words and multiplied in 10 microseconds. Stretch was made with
several degrees of memory having slower access for the ranks of greater capacity, the fastest access
time being less than 1 microsecond and the total capacity in the vicinity of 100,000,000 words.
During this period, the major computer manufacturers began to offer a range of capabilities and prices,
as well as accessories such as:

 Consoles
 Card Feeders
 Page Printers
 Cathode-ray-tube displays
 Graphing devices

These were widely used in businesses for such things as:

 Accounting
 Payroll
 Inventory control
 Ordering Supplies
 Billing

CPU’s for these uses did not have to be very fast arithmetically and were usually used to access large
amounts of records on file, keeping these up to date. By far, the most number of computer systems
were sold for the more simple uses, such as hospitals (keeping track of patient records, medications,
and treatments given). They were also used in libraries, such as the National Medical Library retrieval
system, and in the Chemical Abstracts System, where computer records on file now cover nearly all
known chemical compounds.

More Recent Advances

The trend during the 1970s was, to some extent, a move away from very powerful, single-purpose
computers and toward a larger range of applications for cheaper computer systems. Most continuous-
process manufacturing, such as petroleum refining and electrical-power distribution systems, now used
computers of smaller capability for controlling and regulating their jobs.

In the 1960s, the problems in programming applications were an obstacle to the independence of
medium-sized on-site computers, but gains in applications programming language technologies
removed these obstacles. Applications languages became available for controlling a great range of
manufacturing processes, for using machine tools with computers, and for many other things.
Moreover, a new revolution in computer hardware was under way, involving shrinking of computer-logic
circuitry and of components by what are called large-scale integration (LSI) techniques.

In the 1950s it was realized that "scaling down" the size of electronic digital computer circuits and parts
would increase speed and efficiency and thereby improve performance, if only a way could be found to
do this. About 1960, photo printing of conductive circuit boards to eliminate wiring became more
developed. Then it became possible to build resistors and capacitors into the circuitry by the same
process. In the 1970s, vacuum deposition of transistors became the norm, and entire assemblies, with
adders, shifting registers, and counters, became available on tiny "chips."

In the 1980s, very large scale integration (VLSI), in which hundreds of thousands of transistors were
placed on a single chip, became more and more common. Many companies, some new to the
computer field, introduced programmable minicomputers supplied with software packages in the 1970s.
The "shrinking" trend continued with the introduction of personal computers (PCs), which are
programmable machines small enough and inexpensive enough to be purchased and used by
individuals. Many companies, such as Apple Computer and Radio Shack, introduced very successful
PCs in the 1970s, encouraged in part by a fad in computer (video) games. In the 1980s competition
intensified in the crowded PC field, with Apple and IBM remaining strong. In the manufacturing of
semiconductor chips, the Intel and Motorola Corporations were very competitive into the 1980s,
although Japanese firms were making strong economic advances, especially in the area of memory
chips.

By the late 1980s, some personal computers were run by microprocessors that, handling 32 bits of data
at a time, could process about 4,000,000 instructions per second. Microprocessors equipped with read-
only memory (ROM), which stores constantly used, unchanging programs, now performed an increased
number of process-control, testing, monitoring, and diagnosing functions, such as automobile ignition
control, automobile-engine diagnosis, and production-line inspection duties. Cray Research and
Control Data Inc. dominated the field of supercomputers, the most powerful computer systems,
through the 1970s and 1980s.

In the early 1980s, however, the Japanese government announced a gigantic plan to design and build a
new generation of supercomputers. This new generation, the so-called "fifth" generation, uses new
technologies in very large scale integration, along with new programming languages, and is intended to
be capable of amazing feats in the area of artificial intelligence, such as voice recognition.
Progress in the area of software has not matched the great advances in hardware. Software has
become the major cost of many systems because programming productivity has not increased very
quickly. New programming techniques, such as object-oriented programming, have been developed to
help relieve this problem. Despite difficulties with software, however, the cost per calculation of
computers is rapidly falling, and their convenience and efficiency are expected to increase in the
near future. The computer field continues to experience huge growth. Computer networking, computer
mail, and electronic publishing are just a few of the applications that have grown in recent years.
Advances in technologies continue to produce cheaper and more powerful computers, offering the
promise that in the near future computers or terminals will reside in most, if not all, homes, offices, and
schools.


Computer Generations

A generation refers to a state of improvement in the development of a product. The term is also used
for the different advances in computer technology. With each new generation, the circuitry has
become smaller and more advanced than in the generation before it. As a result of this
miniaturization, the speed, power, and memory of computers have increased proportionally. New
discoveries are constantly being made that affect the way we live, work and play.

The First Generation: 1946-1958 (The Vacuum Tube Years)

The first generation computers were huge, slow, expensive, and often undependable. In 1946, two
Americans, Presper Eckert and John Mauchly, built the ENIAC electronic computer, which used vacuum
tubes instead of the mechanical switches of the Mark I. The ENIAC used thousands of vacuum tubes,
which took up a lot of space and gave off a great deal of heat, just as light bulbs do. The ENIAC led to
other vacuum tube computers like the EDVAC (Electronic Discrete Variable Automatic Computer)
and the UNIVAC I (UNIVersal Automatic Computer).

The vacuum tube was an extremely important step in the advancement of computers. Vacuum tubes
were invented around the same time that Thomas Edison invented the light bulb, and they worked in a
very similar way. Their purpose was to act as an amplifier and a switch. Without any moving parts,
vacuum tubes could take very weak signals and make them stronger (amplify them). Vacuum tubes
could also stop and start the flow of electricity instantly (switch). These two properties made the ENIAC
computer possible.

The ENIAC gave off so much heat that it had to be cooled by gigantic air conditioners. However,
even with these huge coolers, vacuum tubes still overheated regularly. It was time for something new.

Figure 9. The Vacuum Tube

The Second Generation: 1959-1964 (The Era of the Transistor)

The transistor computer did not last as long as the vacuum tube computer, but it was no less
important in the advancement of computer technology. In 1947, three scientists, John Bardeen, William
Shockley, and Walter Brattain, working at AT&T's Bell Labs, invented what would replace the vacuum
tube forever. This invention was the transistor, which functions like a vacuum tube in that it can be used
to relay and switch electronic signals.

Figure 10. The Transistor

There were obvious differences between the transistor and the vacuum tube. The transistor was
faster, more reliable, smaller, and much cheaper to build than a vacuum tube. One transistor replaced
the equivalent of 40 vacuum tubes. These transistors were made of solid material, some of which is
silicon, an abundant element (second only to oxygen) found in beach sand and glass. They were
therefore very cheap to produce. Transistors were found to conduct electricity faster and better than
vacuum tubes. They were also much smaller and gave off virtually no heat compared with vacuum
tubes. Their use marked a new beginning for the computer. Without this invention, space travel in the
1960s would not have been possible. However, a new invention would advance our ability to use
computers even further.

The Third Generation: 1965-1970 (Integrated Circuits - Miniaturizing the Computer)

Transistors were a tremendous breakthrough in advancing the computer. However, no one could
predict that thousands, even millions, of transistors (circuits) could be compacted into such a small
space. The integrated circuit, sometimes referred to as a semiconductor chip, packs a huge
number of transistors onto a single wafer of silicon. Robert Noyce of Fairchild Semiconductor and Jack
Kilby of Texas Instruments independently discovered the amazing attributes of integrated circuits.
Placing such large numbers of transistors on a single chip vastly increased the power of a single
computer and lowered its cost considerably.

Figure 11. The Integrated Circuit (IC)

Since the invention of integrated circuits, the number of transistors that can be placed on a single chip
has doubled every two years, shrinking both the size and cost of computers even further and further
enhancing their power. Most electronic devices today use some form of integrated circuit placed on a
printed circuit board, a thin piece of bakelite or fiberglass that has electrical connections etched onto
it, sometimes called a motherboard.
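
The doubling claim is easy to check with a little arithmetic; the short Python sketch below (ours, for illustration) shows that doubling every two years compounds to roughly a thousandfold increase over twenty years.

# Illustrative arithmetic: transistor counts doubling every two years.
def growth_factor(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

print(round(growth_factor(20)))  # 2**10 = 1024, about a thousandfold in 20 years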

These third generation computers could carry out instructions in billionths of a second. The size of
these machines dropped to that of small file cabinets. Yet the single biggest advancement in the
computer era was yet to come.


The Fourth Generation: 1971-Today (The Microprocessor)

Figure 12. The Microprocessor

This generation can be characterized by both the jump to monolithic integrated circuits (millions of
transistors put onto one integrated circuit chip) and the invention of the microprocessor (a single chip
that could do all the processing of a full-scale computer). By putting millions of transistors onto one
single chip, more calculations could be performed at faster speeds. Because electricity travels about a
foot in a billionth of a second, the smaller the distances inside the chip, the greater the speed of the
computer.

However, what really triggered the tremendous growth of computers and their significant impact on our
lives was the invention of the microprocessor. Ted Hoff, employed by Intel (Robert Noyce's new
company), invented a chip the size of a pencil eraser that could do all the computing and logic work of a
computer. The microprocessor was originally made to be used in calculators, not computers. It led,
however, to the invention of personal computers, or microcomputers.

It wasn't until the 1970s that people began buying computers for personal use. One of the earliest
personal computers was the Altair 8800 computer kit. In 1975 you could purchase this kit and put it
together to make your own personal computer. In 1977 the Apple II was sold to the public, and in 1981
IBM entered the PC (personal computer) market.

Today we have all heard of Intel and its Pentium® processors, and now we know how it all got started.
The computers of the next generation will have millions upon millions of transistors on one chip and will
perform over a billion calculations in a single second. There is no end in sight for the computer
movement.


Types of Computer Systems

Supercomputer

A supercomputer is a computer that leads the world in terms of processing capacity, particularly speed of
calculation, at the time of its introduction. The term "super computing" was first used by the New York
World newspaper in 1920 to refer to the large custom-built tabulators IBM had made for Columbia University.
Supercomputers introduced in the 1960s, designed primarily by Seymour Cray at Control Data
Corporation (CDC), led the market into the 1970s until Cray left to form his own company, Cray Research.
He then took over the supercomputer market with his new designs, all in all holding the top spot in
supercomputing for 25 years (1965–1990). In the 1980s a large number of smaller competitors entered
the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these
disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-off
custom designs produced by "traditional" companies such as IBM and HP, who purchased many of
the 1980s companies to gain their experience, although Cray Inc. still specializes in building
supercomputers.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's
also-ran, as can be seen from the world's first (non-solid-state) digital programmable electronic computer,
Colossus, used to break some German ciphers in World War II. CDC's early machines were simply very
fast single processors, some ten times the speed of the fastest machines offered by other companies. In
the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer
players developed their own such processors at lower price points to enter the market. In the later 1980s
and 1990s, attention turned from vector processors to massively parallel processing systems with
thousands of simple CPUs, some being off-the-shelf units and others being custom designs. Today,
parallel designs are based on "off the shelf" RISC microprocessors, such as the PowerPC or PA-RISC.
Software Tools
Software tools for distributed processing include standard APIs such as MPI and PVM, and open-source
software solutions such as Beowulf and openMosix, which facilitate the creation of a sort of "virtual
supercomputer" from a collection of ordinary workstations or servers. Technologies like Rendezvous pave
the way for the creation of ad hoc computer clusters. An example of this is the distributed rendering
function in Apple's Shake compositing application: computers running the Shake software merely need
to be in proximity to each other, in networking terms, to automatically discover and use each other's
resources. While no one has yet built an ad hoc computer cluster that rivals even yesteryear's
supercomputers, the line between desktop, or even laptop, and supercomputer is beginning to blur, and
is likely to continue to blur as built-in support for parallelism and distributed processing increases in
mainstream desktop operating systems. An easy programming language for supercomputers remains an
open research topic in computer science.
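
As a taste of the MPI style mentioned above, here is a minimal sketch using the mpi4py Python bindings (a real library, but the file name and the numbers are our illustrative assumptions; it also assumes an MPI runtime is installed). Each process sums its own slice of data, and the partial results are combined on process 0:

# Illustrative sketch of message-passing parallelism with mpi4py.
# Run with, e.g.: mpirun -n 4 python parallel_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's identity (0 .. size-1)
size = comm.Get_size()   # number of cooperating processes

# Each process sums its own slice of the range 0 .. size*100 - 1.
local = sum(range(rank * 100, (rank + 1) * 100))

# Combine the partial sums on process 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of 0 ..", size * 100 - 1, "=", total)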


Uses
Supercomputers are used for highly calculation-intensive tasks such as weather forecasting, climate
research (including research into global warming), molecular modeling (computing the structures and
properties of chemical compounds, biological macromolecules, polymers, and crystals), physical
simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear
weapons, and research into nuclear fusion), cryptanalysis, and the like. Military and scientific agencies
are heavy users.
Design
Supercomputers traditionally gained their speed over conventional computers through the use of
innovative designs that allow them to perform many tasks in parallel, as well as complex detail
engineering. They tend to be specialised for certain types of computation, usually numerical calculations,
and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed
to ensure the processor is kept fed with data and instructions at all times—in fact, much of the
performance difference between slower computers and supercomputers is due to the memory hierarchy
design and componentry. Their I/O systems tend to be designed to support high bandwidth, with latency
less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort
to eliminating software serialization, and using hardware to accelerate the remaining bottlenecks.
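
Amdahl's law can be stated in one line: if a fraction p of a program can be parallelized, the best possible speedup on n processors is 1 / ((1 - p) + p/n). A quick illustrative calculation in Python (ours, not from the source) shows why designers work so hard to eliminate the serial fraction:

# Illustrative sketch of Amdahl's law.
def amdahl_speedup(p, n):
    """Best speedup when fraction p is parallelizable across n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 1,000 processors
# yield only about a 20x speedup; the 5% serial part dominates.
print(round(amdahl_speedup(0.95, 1000), 1))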

Supercomputer challenges and technologies

A supercomputer generates heat and must be cooled. Cooling most supercomputers is a major HVAC
problem.

Information cannot move faster than the speed of light between two parts of a supercomputer. For this
reason, a supercomputer that is many meters across must have latencies between its components
measured at least in the tens of nanoseconds. Seymour Cray's Cray supercomputer designs attempted to
keep cable runs as short as possible for this reason.

Supercomputers consume and produce massive amounts of data in a very short period of time. Much
work is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:
 Vector processing
 Liquid cooling
 Non-Uniform Memory Access (NUMA)
 Striped disks (the first instance of what was later called RAID)
 Parallel file systems


Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in
specialist high-performance applications. Vector processing techniques have trickled down to the mass
market in DSP architectures and SIMD processing instructions for general-purpose computers.
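
A minimal illustration of the vector idea, using NumPy in Python (ours; assumes NumPy is installed, and the array sizes are arbitrary): one operation is applied to whole arrays of data at once, in the spirit of SIMD instructions, instead of looping element by element.

# Illustrative sketch of vector (SIMD-style) processing with NumPy.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# One vectorized expression replaces a million-iteration Python loop.
c = a * b + 2.0
print(c[:3])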

Operating systems
Supercomputer operating systems, today most often variants of UNIX, are every bit as complex as
those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as
the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts
not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that
because these computers, often priced at millions of dollars, are sold to a very small market, their R&D
budgets are often limited. Interestingly, this has been a continuing trend throughout the supercomputer
industry, with former technology leaders such as Silicon Graphics taking a backseat to such companies as
Nvidia, who have been able to produce cheap, feature-rich, performant, and innovative products due to
the vast number of consumers driving their R&D.
Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility
and code portability for performance (processing and memory access speed). For the most part,
supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The
Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing
community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This
trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between
the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's
UniCOS).
For this reason, in the future, the highest performance systems are likely to have a UNIX flavor but with
incompatible system-unique features (especially for the highest end systems at secure facilities).

Programming
The parallel architectures of supercomputers often dictate the use of special programming techniques to
exploit their speed. Special-purpose Fortran compilers can often generate faster code than the C or C++
compilers, so Fortran remains the language of choice for scientific programming, and hence for most
programs run on supercomputers. To exploit the parallelism of supercomputers, programming
environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated
shared memory machines are being used.
Types of General Purpose Supercomputers
There are three main classes of general-purpose supercomputers:


Vector processing machines allow the same (arithmetical) operation to be carried out on a large amount
of data simultaneously.

Tightly connected cluster computers use specially developed interconnects to have many processors and
their memory communicate with each other, typically in a NUMA architecture. Processors and networking
components are engineered from the ground up for the supercomputer. The fastest general-purpose
supercomputers in the world today use this technology.

Commodity clusters use a large number of commodity PCs, interconnected by high-bandwidth low-latency
local area networks.
As of 2002, Moore's Law and economies of scale are the dominant factors in supercomputer design: a
single modern desktop PC is now more powerful than a 15-year-old supercomputer, and at least some of
the design tricks that allowed past supercomputers to out-perform contemporary desktop machines have
now been incorporated into commodity PCs. Furthermore, the costs of chip development and production
make it uneconomical to design custom chips for a small run and favor mass-produced chips that have
enough demand to recoup the cost of production.
Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in
essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly
coarse-grained parallelization that limits the amount of information that needs to be transferred between
independent processing units. For this reason, traditional supercomputers can be replaced, for many
applications, by "clusters" of computers of standard design which can be programmed to act as one large
computer.
Special-purpose Supercomputers
Special-purpose supercomputers are high-performance computing devices with a hardware architecture
dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom
VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for
applications such as astrophysics computation and brute-force code-breaking.

Mainframe Computers

Mainframes (often colloquially referred to as "big iron") are large and "expensive" computers used mainly by government institutions and
large companies for legacy applications, typically bulk data processing (such as censuses, industry/consumer statistics, ERP, and bank
transaction processing).

The term arose during the early 1970s with the introduction of smaller, less complex computers such as the DEC PDP
series, which became known as mini(computer)s. The industry/users then coined the term "mainframe" to describe
larger, earlier types (previously known simply as "computers").
Definition
Modern mainframe computers' abilities are defined not so much by raw speed as by their high-quality internal engineering and resulting
proven reliability, expensive but high-quality technical support, top-notch security, and strict backward compatibility for
older software. These machines can and do run successfully for years without interruption, with repairs taking place
whilst they continue to run. Mainframe vendors offer such services as off-site redundancy: if a machine does break
down, the vendor offers the option to run customers' applications on their own machines (often without users even
noticing the change) whilst repairs go on.
Often, mainframes support thousands of simultaneous users who gain access through "dumb" terminals. Early
mainframes either supported this timesharing mode or operated in batch mode, where users had no direct access to the
computing service, it solely providing back-office functions. At that time mainframes were so called because of their
very substantial size and their requirements for specialised HVAC and electrical power. Nowadays mainframes support
access via any user interface, including the Web.
Some mainframes have the ability to run (or "host") multiple operating systems and thereby operate not as a single
computer but as a number of "virtual machines". In this role, a single mainframe can replace dozens of smaller servers,
reducing management and administrative costs while providing greatly improved scalability and reliability. The reliability
is improved because of the hardware redundancy noted above, and the scalability is achieved because hardware
resources can be reallocated among the "virtual machines" as needed. These features are already available on most mid-
to high-end servers, however, and the cost advantage of doing this is usually negated by the high price of the mainframe.
As of late 2004, IBM mainframes dominate the market with over 90% market share. Unisys still manufactures ClearPath
mainframes, based on earlier Sperry and Burroughs product lines. Fujitsu is nominally still in the market, producing
machines based on the former Siemens and Amdahl lines, while Hitachi has left the mainframe business (except for
the zSeries 800 jointly designed with IBM). Acquisition costs vary, but new IBM mainframes start "under $200,000"
(zSeries 890 Model 110, U.S. 2004 list price, excluding disk storage).

History
Several manufacturers have produced mainframe computers since the 1960s and 1970s; in the "glory days" it was "IBM and
the Seven Dwarfs": Burroughs, Control Data Corporation, General Electric, Honeywell, NCR, RCA, and Univac. The
larger of the latter companies were also often referred to as "The BUNCH", from their initials (Burroughs, Univac, NCR,
CDC, Honeywell).
Shrinking demand and tough competition caused a huge shakeout in the market in the early 1980s: RCA sold out to
Univac and GE also left; Honeywell was bought out by Bull; Univac merged with Sperry to form Sperry/Univac, which
was later merged with Burroughs to form Unisys Corporation in 1986 (dubbed "dinosaurs mating"). In 1991, AT&T
briefly owned NCR.


For a period of time companies found that servers based on microcomputer designs could be deployed at a fraction of
the acquisition cost and offer local users much greater control over their own systems. "Dumb terminals" used for
interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand
plummeted and new mainframe installations were restricted mainly to financial services and government. For a while,
there was a consensus among industry analysts that the mainframe was a dying market as mainframe platforms were
increasingly replaced by personal computer networks.
That trend started to turn around in the late 1990s as corporations found new uses for their mainframes, since they can
offer web server performance similar to that of hundreds of smaller machines, but with much lower power and
administration costs. The growth of e-business has also dramatically increased the number of backend transactions
processed by tried-and-true mainframe software. As of late 2004, IBM's mainframe revenues are increasing even with
price reductions, thanks to an attractive total cost of ownership (TCO).
Another factor currently increasing mainframe use is the development of the Linux operating system, which is capable
of running on many mainframe systems, either directly or, more commonly, in virtual machines. Linux allows companies
and governments to take advantage of the software and development expertise from the open source community while
enjoying the low per-user costs and high reliability (and security) of mainframes.
Mainframes vs. Supercomputers
The distinction between supercomputers and mainframes is not a hard and fast one, but generally one can say that
supercomputers focus on problems which are limited by calculation speed while mainframes focus on problems which
are limited by Input/output and reliability. As a consequence:

Because of the parallelism visible to the programmer, supercomputers are often quite complicated to program and
require specialized, task-specific software. In contrast, mainframes hide parallelism from the programmer. (One side
effect is that even older software can benefit from adding mainframe CPs, i.e. central processors.)

Supercomputers are optimized for complicated computations that take place largely in memory, while mainframes are
optimized for simple computations involving huge amounts of external data accessed from databases ("mixed
workload").

Supercomputers tend to cater to science and the military, while mainframes tend to target business and civilian
government applications. Weather modelling, protein folding analysis, and digital film rendering are all tasks well suited
to supercomputers. Credit card processing, bank account management, market trading, and social insurance
processing are tasks well suited to mainframes. (Exception: Certain military applications require high security, a
mainframe strength.)

Supercomputers often run tasks that can tolerate interruption (e.g. global warming forecasts). Mainframes tend to run
those functions that must run reliably, even for years of continuous service (e.g. airline bookings).

Supercomputers are often purpose-built for one or a very few specific institutional tasks. Mainframes typically handle a
wide variety of important, everyday tasks.

Mainframes assiduously and thoroughly support older software (dating back to applications written in the mid-1960s, in
IBM's case) alongside new software. Supercomputers tend not to have backward compatibility as a central design
feature.

Mainframes tend to have numerous ancillary service processors assisting their main central processors (for
cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much
higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since
they don't appreciably add to raw number crunching power.
There's also some blurring of the term "mainframe" with high-end PC and UNIX servers. (Some PC and UNIX server
vendors occasionally refer to their systems as "mainframes" or "mainframe-like.") That blurring of the term is not widely
accepted, with the market in general agreement that true mainframes (particularly IBM Series) are genuinely and
demonstrably different.
Statistics
It has been reported that:
 85% of all mainframe programs are written in the COBOL programming language
 7% are written in Assembler, C or C++
 5% are written in PL/I
 3% are written in Java and other languages
Java use is increasing rapidly as of late 2004, and these figures are likely out-of-date. (See also zAAP, WebSphere,
and Linux.) Also, mainframe COBOL has recently acquired numerous Web-oriented features, such as XML parsing,
with PL/I following close behind.

Most mainframes (rumoured to be 90%) have IBM's CICS software installed.

In the early 1990s the media and many business and computing analysts predicted the death of the mainframe. The
predictions were disproved as many companies embraced the mainframe as offering an affordable means to handle
their Internet business models.

The quality of service offered by mainframes means they are the preferred technology for many business-critical
applications.

As of late 2004, IBM claimed over 200 new (21st century) mainframe customers—that is, customers that had never
previously owned a mainframe.


Minicomputer

Figure 13. HP2114 minicomputer


Minicomputer is a largely obsolete term for a class of multi-user computers which make up the middle range of the
computing spectrum, in between the largest multi-user systems (mainframe computers) and the smallest single-user
systems (microcomputers or personal computers). More modern terms for such machines include midrange systems
(common in IBM parlance), workstations (common in Sun Microsystems and general UNIX/Linux parlance), and servers.

History

The term evolved in the 1960s to describe the "small" third generation computers that became possible with the use of
transistor and core memory technologies. They usually took up one or a few cabinets, compared with mainframes that
would usually fill a room. The first successful minicomputer was Digital Equipment Corporation's 12-bit PDP-8, which cost
from US$16,000 upwards when launched in 1964.

As microcomputers developed in the 1970s and 80s, minicomputers filled the mid-range area between low powered
microcomputers and high capacity mainframes. At the time microcomputers were single-user, relatively simple machines
running simple program-launcher operating systems like CP/M or DOS, while minis were much more powerful systems that
ran full multi-user, multitasking operating systems like VMS and Unix. The classical mini was a 16-bit computer, while the
emerging higher performance 32-bit minis were often referred to as superminis.

The 7400 series of TTL integrated circuits started appearing in minicomputers in the late 1960s. The 74181 Arithmetic and
Logic Unit was commonly used in the CPU data paths. Each 74181 had a bus width of four bits, hence the popularity of
bit-slice architecture. The 7400 series offered data-selectors, multiplexers, three-state buffers, memories, etc. in dual inline
packages with one-tenth inch spacing making major system components and architecture evident to the naked eye.
Starting in the 1980s, many minicomputers used LSI and VLSI (Large- and Very-Large-Scale Integration) circuits, often
making the hardware organization much less apparent.

Today, at the turn of the millennium, few minicomputers remain in use, having been overtaken by Fourth Generation
computers built using a more robust version of the microprocessor technology used in personal computers. These
are referred to as "servers", taking the name from the server software that they run (typically file server and back-end
database software, including email and web server software).

The decline of the minis happened due to the lower cost of microprocessor based hardware, the emergence of inexpensive
and easily deployable local area network systems, and the desire of end-users to be less reliant on inflexible minicomputer
manufacturers and IT departments / "data centres" – with the result that minicomputers and dumb terminals were replaced
by networked workstations and PCs in the latter half of the 1980s.

During the 1990s the change from minicomputers to inexpensive PC networks was cemented by the development of
several versions of Unix to run on the Intel x86 microprocessor architecture, including Solaris, Linux and
FreeBSD/NetBSD. Also, the Microsoft Windows series of operating systems now includes server versions that support
preemptive multitasking and other features required for servers, beginning with Windows NT. Significantly, Windows NT
was written largely by designers from DEC who were responsible for the DEC VMS operating system, originally developed
for the VAX minicomputer range in the 1970s. Also, as microprocessors have become more powerful, CPUs built up from
multiple components, once the distinguishing feature differentiating mainframes and midrange systems from
microcomputers, have become increasingly obsolete, even in the largest mainframe computers.

Impact

Several pioneering computer companies first built minicomputers, such as DEC, Data General, and Hewlett-Packard (HP)
(which now refers to its HP3000 minicomputers as "servers" rather than "minicomputers"). And although today's PCs and
servers are clearly microcomputers physically, architecturally their CPUs and operating systems have evolved largely by
integrating features from minicomputers.

Workstation
A computer workstation, often colloquially referred to as workstation, is a high-end general-purpose microcomputer designed to be used
by one person at a time and which offers higher performance than normally found in a personal computer, especially with respect to
graphics, processing power and the ability to carry out several tasks at the same time. The 3Station by 3Com was a typical early example.
Compared with older definitions of computing power, a workstation may be considered the equivalent of a one-person
minicomputer.
In the early 1980s, pioneers in this field were Apollo Computer and Sun Microsystems who created
UNIX-based workstations based on the Motorola 68000 processor.
Workstations tend to be very expensive, typically several times the cost of a standard PC and
sometimes costing as much as a new car. The high expense usually comes from using costlier
components that (one hopes) run faster than those found at the local computer store. Manufacturers try
to take a "balanced" approach to system design, making certain that data can flow unimpeded between
the many different subsystems within a computer. Additionally, workstation makers tend to push to sell
systems at higher prices in order to maintain somewhat larger profit margins than the commodity-driven
PC manufacturers.
The systems that come out of workstation companies often feature SCSI or Fibre Channel disk storage
systems, high-end 3D accelerators, single or multiple 64-bit processors, large amounts of RAM, and
well-designed cooling. Additionally, the companies that make the products tend to have very good
repair/replacement plans. However, the line between workstation and PC is increasingly becoming
blurred as trends toward consolidation and cost-cutting have caused workstation manufacturers to use
"off the shelf" PC components and graphics solutions as opposed to proprietary in-house developed
technology. Some attempts have been made to produce low-cost workstations (which are still expensive
by PC standards), but they have often had lackluster performance.
The fact that consumer PCs and game consoles are now themselves at the cutting edge of
technology makes deciding whether or not to purchase a workstation difficult for many
organizations. Sometimes such systems are still required, but many places opt for the less
expensive, if more fault-prone, PC-level hardware.

The Personal Computer

Personal computer and peripherals. From left to right: ink jet printer, TV (not part of the system), CRT monitor, broadband
cable modem for the Internet, flat bed scanner. The tower (CPU, hard drive, etc.) can just be glimpsed at bottom right. The
keyboard and mouse are wireless.

The term personal computer or PC has three meanings:

 IBM's range of PCs that led to the use of the term (see IBM PC).
 Any computer based on IBM's original specifications, also known as IBM PC compatible.
 Any microcomputer (the subject of this section).
The first generation of microcomputers were called just that, and sold only in small numbers to those able to
build them from kits or operate them: engineers and accomplished hobbyists. The second generation of micros were
known as home computers, and are discussed in that section.

History

A personal computer is an inexpensive microcomputer, originally designed to be used by only one person at a time,
and which is IBM PC compatible (though in common usage the term may sometimes refer to non-compatible machines).
The earliest known use of the term was in New Scientist magazine in 1964, in a series of articles
called "The World in 1984". In "The Banishment of Paper Work," Arthur L. Samuel of IBM's Watson
Research Center writes, "While it will be entirely feasible to obtain an education at home, via one's
own personal computer, human nature will not have changed."
The first generation of microcomputers that started to appear in the 1970s (see home
computers) were less powerful and in some ways less versatile than business computers of the
day (but in other ways more versatile, in terms of built-in sound and graphics capabilities), and
were generally used by computer enthusiasts for learning to program, for running simple
office/productivity applications, for electronics interfacing, and/or games, as well as for
accessing BBSs, general online services such as CompuServe, The Source, or GEnie, or
platform-specific services such as Quantum Link (US) or Compunet (UK).
It was the launch of the VisiCalc spreadsheet, initially for the Apple II and later for the Atari 8-bit
family, Commodore PET, and IBM PC, that provided the "killer app" which turned the
microcomputer into a business tool. Later, Lotus 1-2-3, a combined spreadsheet (partly based
on VisiCalc), presentation graphics, and simple database application, became the PC's own
killer app. Good word processor programs also appeared for many home computers. The low
cost of personal computers led to great popularity in the home and business markets during the
1980s. In 1982, Time magazine named the personal computer its "Machine of the Year".
During the 1990s, the power of personal computers increased radically, blurring the formerly
sharp distinction between personal computers and multi-user computers such as mainframes.
Today higher-end computers often distinguish themselves from personal computers by greater
reliability or greater ability to multitask, rather than by straight CPU power.
Architecture
Personal computers can be categorized by size and portability:
 the desktop computer
 the portable computer
 the notebook or laptop
 the PDA
 the wearable computer
Most classes of PCs still follow the same basic IBM PC compatible design. This design makes them easily
upgradable and standardized.

Motherboard
The motherboard is the primary circuit board of a computer. Most other computer components
plug directly into the motherboard to allow them to exchange information. Motherboards usually
hold a chipset, BIOS, CMOS, parallel port, PS/2 keyboard and mouse ports and expansion
slots. Sometimes a secondary daughter board is plugged into the motherboard to provide more
expansion slots and to cut down on size.

Central processing unit


The central processing unit or CPU is the part of the computer that performs most of the
calculations that make programs or operating systems run. The CPU plugs directly into the
motherboard by one of many different types of sockets. Most IBM PC compatible computers
use an x86-compatible processor made by Intel, AMD, VIA Technologies or Transmeta.
The hardware capabilities of personal computers can usually be extended by the addition of
expansion cards. The standard expansion slots for personal computers are ISA, PCI and AGP.
A PC may also be upgraded by the addition of extra drives (CD-ROM, hard drive, etc.).
Standard storage device interfaces are ATA, Serial ATA or SCSI.
Non Compatible Personal Computers
Despite the overwhelming popularity of the personal computer, a number of non-IBM PC
compatible microcomputers (sometimes also generically called Personal Computers) are still
popular in niche uses. The leading alternative is Apple Computer's proprietary Power
Macintosh platform, based on the PowerPC computer architecture.

SELF ASSESSMENT QUESTIONS


1. What is a computer?
2. Write a short, precise history of computers
3. Briefly explain computer generations
4. Explain the following:
a) Super Computer
b) Mainframe computer
c) Personal computer


Lesson 2: Computer Hardware (the components that make up a computer)

Introduction

Computer Hardware refers to objects that you can actually touch, like disks, disk drives, display
screens, keyboards, printers, boards, and chips. In contrast, software is untouchable. Software exists
as ideas, concepts, and symbols, but it has no substance.

Books provide a useful analogy. The pages and the ink are the hardware, while the words, sentences,
paragraphs, and the overall meaning are the software. A computer without software is like a book full of
blank pages -- you need software to make the computer useful just as you need words to make a book
meaningful.

The main groups of hardware covered in this lesson are:

 The Motherboard and things directly attached to it.


 Computer chassis and screen, with standard sizes (e.g. ISO A4 for notebook chassis).

 Storage media

 Other peripherals

Lesson Objectives

At the end of this lesson you will be able to:

 Identify the basic components of the computer system


 Know the use of each component
 Know the classes of computer hardware

Computer Hardware can be put into classes as follows:

Class 1 Components

Class 1 components are integral to the function of the computer.

CPU
The CPU (Central Processing Unit) is the 'brain' of the computer. It's typically a square ceramic
package plugged into the motherboard, with a large heat sink on top (and often a fan on top of
that heat sink). All instructions the computer will process are processed by the CPU. There are
many "CPU architectures", each of which has its own characteristics and trade-offs. The
dominant CPU architectures used in personal computing are x86 and PowerPC. x86 is easily the
most popular processor for this class of machine (the dominant manufacturers of x86 CPUs are
Intel and AMD). Other architectures are used, for instance, in workstations, servers or
embedded systems. CPUs contain a small amount of static RAM (SRAM) called a cache. Some
processors have two or three levels of cache, containing as much as several megabytes of
memory.

Dual Core

Some of the newer processors made by Intel and AMD are dual core. Intel's dual-core designations
are "Pentium D", "Core Duo" and "Core 2 Duo", while AMD has its "X2" series and "FX-6x".

The core is where the data is processed and turned into commands directed at the rest of the
computer. Having two cores increases the data flow into the processor and the command flow
out of the processor potentially doubling the processing power, but the increased performance is
only visible with multithreaded applications and heavy multitasking.

Hyper Threading
Hyper Threading is a technology that uses one core but presents a second, virtual processor so that
the core can work on an additional thread at the same time. Normally the processor carries out one
task and then proceeds to the next. With Hyper Threading, the processor continually switches
between tasks, as if doing them at the same time.
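
The switching idea can be illustrated in miniature. The following Python sketch is purely conceptual (a round-robin scheduler in software, not actual processor hardware): two tasks are advanced alternately, so both make progress even though only one runs at any instant.

# Conceptual sketch: one "core" alternating between two tasks,
# the way Hyper Threading interleaves two hardware threads.
def task(name, steps):
    for i in range(steps):
        yield f"{name}: step {i}"

def run_interleaved(tasks):
    while tasks:
        current = tasks.pop(0)      # take the next ready task
        try:
            print(next(current))    # let it run for one step
            tasks.append(current)   # then put it back in the queue
        except StopIteration:
            pass                    # task finished; drop it

run_interleaved([task("A", 3), task("B", 3)])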

Case

Most modern computers have an "ATX form factor" case in which ATX-compatible power supplies,
Mainboards and Drives can be mounted.

The Mini-ITX differs in important ways from its relatives, the Micro-ATX and the Flex-ATX.
The mainboard can be up to 170 mm x 170 mm, smaller than Flex-ATX or Micro-ATX
boards. Usually rated at less than 100 watts, the Mini-ITX PSU is energy efficient. The Mini-ITX is
also backward-compatible with the Flex/Micro-ATX models.

During the 1980s and 1990s almost all cases were beige, even Apple's Macintosh line. A few rare
exceptions to this were black. Only recently have computer case designers realized that there
was a worthwhile market for other colors and designs. This has led to all sorts of modifications to
the basic design of a computer case. Now it is easy to find cases with transparent windows and
glowing lights illuminating their insides.

Power Supply

All computers have some sort of power supply. This converts the supply voltage (AC 110 or 220V) to
different voltages such as DC 5V, 12V and 3.3V. These are needed inside the computer system
by nearly every component inside the computer. There will be a bunch of connectors coming off
of the supply, called Molex connectors. They come in varying sizes, meant for different
applications: the motherboard (usually the largest of the connectors), the hard and
optical drives (several medium-sized connectors), and the floppy drive (a relatively
small connector, which also saw use with video cards around 2004). As newer standards have come out, the
types of connectors have changed. Many power supplies now come with power connectors for
Serial ATA hard drives. These are smaller and are "hot-swappable", meaning they can be
removed and plugged in again without fear of data loss or electrical problems.

The power supply also has an exhaust fan that is responsible for cooling the power supply, as well as
providing a hot air exhaust for the entire case. Some power supplies have two fans to promote
this effect.

It is important to buy a power supply that can accommodate all of the components involved. Some may
argue that it is the most important part of a computer, and therefore it is worth spending the
money to get a decent one.

Motherboard

The Motherboard (also called Mainboard) is a large, thin, flat, rectangular fiberglass board (typically
green) attached to the case. The Motherboard carries the CPU, the RAM, the chipset and the
expansion slots (PCI, AGP for graphics, ISA, etc.). The Motherboard also holds things like the
BIOS (Basic Input Output System) and the CMOS battery (a coin cell that keeps an embedded
RAM in the motherboard, often NVRAM, powered to preserve various settings). Most
modern motherboards have onboard sound and LAN controllers, and some even have on-
board graphics. These are adequate for standard office work and system sounds, but dedicated
sound and graphics cards plugged into the expansion slots offer much better quality and
performance. The expansion slots (PCI, PCI-e, PCI-X, AGP, ISA, etc.) allow additional functions.

RAM

Random Access Memory (RAM) is memory that the microprocessor uses to store data during
processing. This memory is volatile (it loses its contents at power-down). When a software
application is launched, the executable program is loaded from the hard drive into RAM. The
microprocessor supplies addresses to the RAM to read instructions and data from it. RAM is
needed because hard drives are too slow to keep up with the speed of a microprocessor. RAM
technologies include SDRAM, DDR RAM and Rambus RAM, supplied on modules such as SIMMs and DIMMs.

PCI Express Cards/Slots

The PCI Express standard was created to replace both AGP and PCI slots, with PCI Express x16
replacing AGP and PCI Express x1 replacing PCI in most implementations. The current implementation
of PCI Express allows up to PCI Express x32. The reason for the change is that the older PCI cards
don't transfer data quickly enough to keep up with modern gaming, AutoCAD and video editing software.
Think of it this way: there is a tap that is two inches in diameter, but a drain that is only one inch in
diameter. The water doesn't drain quickly enough and eventually the sink overflows. The same thing
happens to the data flowing through a PCI video card.

AGP Cards

Most graphic cards produced from about 1998-2004 were AGP (Accelerated Graphics Port) cards.
They are placed in a certain slot on the mainboard with an extra high data transfer rate. The
interface was invented to keep the graphics card away from the PCI bus, which was starting to
become too constrained for modern graphics cards. Every graphics card carries a graphics chip
(GPU) and very fast DDR RAM for textures and 3D data. Their data buses have 1X, 2X, 4X, and
8X speeds. The bus is 32-bit, much like PCI. AGP slots are slightly shorter than PCI slots and
often brown in color. A similar type of slot called AGP Pro is longer and has extra power leads to
accommodate modern video cards. It didn't really catch on in the mainstream market, and
graphics card makers preferred to add an extra power connector to supply the power they
needed.

PCI Cards

The PCI (Peripheral Component Interconnect) bus is the most popular internal interconnect for
personal computers. They are usually white in color.

The specification features:

 Plug and play configuration (through standardised means for interacting with configuration software)
 Standardised electrical connections

Common PCI implementations in desktop PCs feature:

 32-bit addressing
 33-MHz bus clock

High-end implementations may also feature:

 64-bit addressing
 "Hot plugging" (the ability to add / remove PCI devices from a running machine)
 66-MHz bus clock

(All of these are characteristic of PCI-X.)

There have been many revisions and evolutions of the PCI specification over the years. Recently, PCI-
X has sought to extend the aging architecture for the needs of modern server-class machines,
avoiding some of the performance bottlenecks of previous revisions. The new PCI Express
specification seems likely to succeed PCI in all classes of personal computer within the next few
years.

ISA Cards

Industry Standard Architecture (ISA) cards were the original PC extension cards. Originally running on
an 8-bit bus, they ran on a 16-bit bus as of 1984. Like PCI slots, they supported Plug-and-Play
as of 1993 (prior to this, one had to set jumpers for IRQ interrupts and such). In comparison to
PCI slots, they are rather long, and often black in color. They are not found on most computers
built after 1999.

Class 2 Components: Storage


Class 2 components are storage media for non-volatile data.

Do not put magnetic media (including floppy disks, hard drives, video cassette tape) through airport X-
ray machines. The X-rays themselves are not the problem -- it's the magnetic fields from the
conveyor-belt motors that all too often erase magnetic media.

Optical media -- Compact Disks (CDs) and the similar-looking DVDs -- are completely immune to
magnetic fields. They can be run through airport X-ray machines without any problems.

Flash memory is also immune to magnetic fields.

Sometimes one can distinguish between "fixed media" (the hard drive) that is more or less permanently
mounted inside the computer case, and "removable media" (just about every other kind of media)
that is easy to pull from one computer and put into another computer.

Floppy Disk drives

8" Floppy Disk: In the late 1960s IBM invented the 8-inch floppy disk. This was the first floppy disk
design. Used in the 1970s and as a read-only disk it had storage-write restrictions to the people it
was distributed to. However, later on a read-write format came about. In today's modern society
it is rare to find a computer that uses the 8-inch floppy disk.

5.25" Floppy Disk: This disk was introduced some time later, and was used extensively in the 1980s.

3.5" Floppy Disk: This is the one the oldest and more commonly used storage media listed here. Floppy
disks hold from 400KB up to 1.44 MB. 720K(low-density) and 1.44 MB(high-density) with a 3.5"
disc are usually the average type found. Floppy disks have largely been superseded by flash
drives as a transfer medium, but are still widely used as backup storage.

Hard drive

A hard drive consists of one or more magnetic platters or disks and a read arm with two
electromagnetic coils for each disk. Each hard disk is divided into many sectors, each containing
a certain amount of data. As of now, it is the cheapest and most common way to store a lot of
data in a small space.
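
Because each sector holds a fixed number of bytes, drive capacity is a simple product. The Python sketch below uses illustrative figures (the classic head/cylinder/sector addressing limit), not the specification of any particular drive.

# Illustrative capacity arithmetic (all figures are assumptions):
# capacity = heads x cylinders x sectors per track x bytes per sector.
heads = 16                 # read/write heads (one per platter surface)
cylinders = 16383          # concentric track positions
sectors_per_track = 63     # sectors on each track
bytes_per_sector = 512     # classic sector size

capacity = heads * cylinders * sectors_per_track * bytes_per_sector
print(f"{capacity / 10**9:.1f} GB")   # about 8.5 GB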

CD-ROM drive

Compact Disc Read Only Memory (CD-ROM) is a standard format for storing a variety of data. A CD-
ROM holds about 700 MB of data. The media resembles a small, somewhat flexible plastic disc.
Any scratch or abrasion on the data side of the disc can lead to it being unreadable.

Cleaning CDs: Dust can be removed from a CD's surface using compressed air or by very lightly
wiping the information side with a very soft cloth (such as an eyeglass cleaning cloth) from the
center of the disc in an outward direction. Wiping the information surface of any type of CD in a
circular motion around the center, however, has been known to create scratches in the same
direction as the information and potentially cause data loss. Fingerprints or stubborn dust can be
removed from the information surface by wiping it with a cloth dampened with diluted dish
detergent (then rinsing) or alcohol (methylated spirits or isopropyl alcohol) and again wiping from
the center outwards, with a very soft cloth (non-linting : polyester, nylon, etc.). It is harmful,
however, to use acetone, nail polish remover, kerosene, petrol/gasoline, or any other type of
petroleum-based solvent to clean a CD-R; the use of petroleum based solvents will damage the
polycarbonate surface and the CD-R will become unreadable.

CD-RW drive

Compact disc Read/Write drives support the creation of CD-R and CD-RW discs, and also function as
CD-ROM drives. These drives use low-powered lasers to 'burn' data into the active layer of the
disc. CD-R (Compact disc recordable) discs are 'write once' - once they have been written to, the
data cannot be erased or changed. However, multiple sessions can be created so that more data
can be added.

CD-RW (Compact disc rewritable) discs can be rewritten or erased multiple times. This is a two-pass
process so they typically take twice as long as CD-R discs to produce. CD-RW drives will
typically have three speed ratings - one for reading discs, one for writing CD-R discs and another
for writing CD-RW discs. Speed ratings vary from 1x to 52x, where 1x means that a CD is
written/read in 'real time' - a 52 minute audio CD would take about 52 minutes to create at 1x
speed, and about 1 minute at 52x speed.
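
The arithmetic behind those ratings is plain division, as this small Python sketch shows (the disc lengths are example values only):

# Burn time at a given speed rating: 1x means real time,
# so time taken = disc duration / speed rating.
def burn_minutes(disc_minutes, speed):
    return disc_minutes / speed

for speed in (1, 4, 52):
    print(f"52-minute audio CD at {speed}x: {burn_minutes(52, speed):.0f} minute(s)")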

The data can be written to the disc in a variety of formats to create an audio CD, a data CD, a video CD
or a photo CD. The audio CDs should play on most standard audio CD equipment and the video
and photo CDs will play on many consumer DVD players. Many CD writers (also known as
'burners') are now combination drives which also function as DVD-ROM drives. Most DVD-RW
drives also have CD-RW capabilities.

DVD-ROM drive

Digital Video/Versatile Disk Read Only Memory (DVD-ROM). This optical drive works on a similar
principle to the CD-ROM, with a laser being used to read data stored in pits on the surface of a
reflective disk. DVDs are read using a shorter wavelength of light (a red laser, rather than an
infra-red one). In addition to having a greater data-density, DVDs may be double sided and may
be "dual layer".

DVD-RW drive

DVDs hold about 4.7 gigabytes, and dual-layer discs hold about 8.5 gigabytes (dual-layer equipment
and discs are now becoming more affordable).

Other removable media

Flash memory


Some common types of Flash memory cards are CompactFlash, Secure Digital (SD), and xD. There
are other formats which have fallen out of use, such as SmartMedia and MultiMediaCard
(MMC).

Flash memory is faster than magnetic media and much more rugged. The only reason Flash hasn't
replaced hard drives is that Flash memory is much more expensive per gigabyte than hard
drives.

USB Flash drive

Memory sticks or Flash drives are solid-state NAND flash chips packaged to provide additional memory
storage. These drives are quickly replacing floppy disks as a means of transferring data from one
PC to another in the absence of a network.

Class 3 Components: Peripherals

Class 3 components are components which allow humans to interface with computers.

Display device

Includes computer monitors and other display devices. CRTs and LCDs are common. LCDs are a more
recent development, and are gradually replacing CRTs as they become more affordable. LCDs,
in addition to being lighter, also use less energy and generate less heat.

Sound Output

Includes internal or external speakers and headphones.

Mouse

A user interface device that can enable different kinds of control than a keyboard, particularly in GUIs. It
was developed further at the Xerox PARC (Palo Alto Research Center) and adopted and made popular
by the Apple Mac. Today, nearly all modern operating systems can use a mouse. Most mice
(sometimes pluralised 'mouses' to prevent confusion with the rodent) are made from plastic,
and may use a ball, an LED light, or a laser to track movement. Today you can get a wireless
mouse that lets you give a presentation without being tied to a desk; these mice usually
use LED- or laser-based tracking.

History

In 1964, the first prototype computer mouse was made for use with a graphical user interface (GUI).
Douglas Engelbart received a patent for the wooden shell with two metal wheels
(computer mouse U.S. Patent #3,541,541) in 1970, describing it in the patent application as an
"X-Y position indicator for a display system." It was nicknamed the mouse because the tail came
out the end, as Engelbart later explained. His version of windows was not considered
patentable (no software patents were issued at that time), but Douglas Engelbart has over 45
other patents to his name. Older serial mice used a DB-9 connector.

Keyboard

A keyboard is an input device which is connected to a computer and used to type instructions or
information into the computer. Typically, a keyboard has about 100 or so keys.

Keyboards differ between languages. Most English-speaking people use what is called a QWERTY
layout, referring to the order of the top row of keys. Some other languages (e.g. German) use
QWERTZ, where the Z and Y are switched. Many laptop computers do not include a number
pad (there is sometimes a function on the keyboard to enable a numpad-like mode).

Modern keyboards sometimes have extra controls such as volume, and keys that can be programmed
to bring up programs of the user's choice.

Printer

A printer makes marks on paper. It can print images and text.

The most common types of printers today are:

Laser printer: Prints very crisp text, but cheaper models can only print in black and white. Good for
places like offices where high printing speed is needed.
Color inkjet printer: Prints photos and other images in color (using 4 colors of ink -- cyan, magenta,
yellow, and black), but the text they print is often not as crisp as a laser printer.

The average printer of the early 1990s would connect to a computer through its parallel port; the
connector was screwed into the port. Today many printers are connected through USB, which is
easier to connect and remove thanks to simple plug and play, and also allows faster transfer
speeds than parallel.

Scanner

A scanner is a device for digitizing paper documents into images that may be manipulated by a
computer. The two main classes of scanner are:
Hand-held scanners (in which the user manually drags a small scanning head over the document), and
flat-bed scanners (which are designed to accommodate a whole sheet of paper, which is then
examined by a motorised scanning head).

If the original document contained text, Optical Character Recognition (OCR) software may be used to
reconstruct the text of the document from the scanned images.
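
As a concrete illustration, the open-source Tesseract OCR engine can be driven from Python through the pytesseract package. This is a minimal sketch, assuming both are installed; "scan.png" is a placeholder file name, not a file supplied with this module.

# Minimal OCR sketch: recover text from a scanned page image.
from PIL import Image      # Pillow, for loading the image
import pytesseract         # wrapper around the Tesseract OCR engine

page = Image.open("scan.png")             # placeholder scanned image
text = pytesseract.image_to_string(page)  # run character recognition
print(text)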

Modem

A contraction of "modulator-demodulator", a modem allows a computer to communicate over an
analogue medium (most commonly a telephone line). The modem encodes digital signals from
the computer as analogue signals suitable for transmission (modulation) and decodes digital
data from a modulated analogue signal (demodulation). Using modems two computers may
communicate over a telephone line, with the data passed between them being represented as
sound. Modems are usually involved with dial-up internet services. As broadband catches on,
they are falling into disuse. However, the devices used to connect to broadband connections are
also called modems, specifically DSL Modems or Cable Modems.
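
The modulation idea can be sketched in a few lines. The Python below is a conceptual illustration of frequency-shift keying, the scheme behind classic low-speed modems, not a working modem driver; the two tone frequencies are the Bell 202 values, used here as an assumption.

# Conceptual sketch of modulation: each bit becomes a burst of one
# of two audio tones (frequency-shift keying).
import math

MARK, SPACE = 1200, 2200   # tone frequencies in Hz for 1 and 0 (Bell 202)

def modulate(bits, sample_rate=8000, baud=300):
    samples = []
    for bit in bits:
        freq = MARK if bit else SPACE
        for n in range(sample_rate // baud):   # samples per bit period
            samples.append(math.sin(2 * math.pi * freq * n / sample_rate))
    return samples

tone = modulate([1, 0, 1, 1])
print(len(tone), "audio samples encode 4 bits")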

Connectors and Cables

There are many different types of connectors and cables in personal computers; this section
covers the most relevant ones.

Internal Connectors

Several types of cables are used to connect components together inside the case, providing power and
a path for data. These include:

Motherboard Power Connector: This connector is designed especially to move electricity from the
power supply to the motherboard. Older computers use the AT power connection, with two six-pin
connectors lined up side by side. ATX motherboards use a single connector with 20 pins arranged
in two rows of 10.

Many motherboards now also use supplementary power connectors, such as a 4 pin plug specifically
for the CPU supply.

Some others have more than 20 pins for the main connector. The extra pins are in the form of a
'separate' connector that fits onto the end of the standard 20 pin connector. This may be used or
not as required by the particular motherboard.

The PCI-e interface may also require the use of further power cables from the power supply.

Power Connectors for Drives: Hard drives, optical drives, and, increasingly, high-end video cards use a
4-wire power connection, of which several are available from a power supply.

Floppy drives use a smaller connector.

With the introduction of the SATA interface for data, another type of power connector for drives was also
introduced. This is thinner than the previous power connector.

40- and 80-wire IDE Cables: These flat ribbon cables are used by hard drives and optical drives to
transfer data to and from the motherboard. They are now sometimes called PATA (Parallel ATA)
cables to differentiate them from the more recent SATA.

SATA (Serial ATA) Cables: These cables are now used by most hard drives and even optical drives
to carry data to and from the motherboard. They are much thinner than PATA ribbon cables, and
the connectors are much smaller. They are generally red in colour.

SATA drives generally also require a new type of power connector, though some can also use the older
white 'Molex' plug. Adaptors are available if the power supply doesn't have the correct connector.


34-pin Floppy Cables: These are used to connect floppy drives to floppy disk connectors on the
mainboard/motherboard.

External Connectors

Without connections to the rest of the world, a computer would just be a fancy paperweight. Numerous
connectors are used to make a computer useful.

AT Keyboard Connector: Found on older computers, this connector is large and round with five pins.

PS/2 Connector: This connector is currently the most popular for connecting both the keyboard and
mouse. Note that older mice once used serial ports (defined below), and newer mice frequently
use the Universal Serial Bus (USB).

VGA Connector: This connector has 3 rows of 5 pins each, and is used to connect the computer to the
display screen.

Parallel Port (DB-25): This connector is commonly used to interface with printers, and can also
transfer data between computers. It has been mostly replaced by USB.

Serial Port (DB-9): This 9-pin connector is used to connect all sorts of devices, but is being replaced
by USB. It has been used in the past to connect mice and transfer data between computers.

Universal Serial Bus (USB): This relatively recent connector can connect the computer to almost
anything. It has been used for storage devices, printers, sound, mice, keyboards, cameras, and
even networking. USB 2.0 allows transfer speeds of up to 480 Mbps.

FireWire (IEEE 1394) port: This high-speed connection runs at 400Mbps (1394a) or 800Mbps (1394b),
and can connect up to 63 external devices to a single port. Most digital camcorders have a
firewire port to connect to a computer.

RJ-11 (phone) Connectors: This is the type of connector you will see on phones and modems. It is not
used for much else.

RJ-45 Connectors: These are used to connect computers to an Ethernet network. The maximum speed
of such a connection is now 1000 Mbps (megabits per second), or 1 Gbps (one gigabit per
second).
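
Because these figures are megabits, not megabytes, working out a transfer time needs a factor of 8. A quick Python sketch, using the 700 MB CD-ROM capacity mentioned earlier as the example file size (theoretical best-case times, ignoring protocol overhead):

# Transfer time = size in bits / link speed in bits per second.
def transfer_seconds(size_megabytes, speed_mbps):
    return size_megabytes * 8 / speed_mbps   # 8 bits per byte

for name, mbps in [("USB 2.0", 480), ("Gigabit Ethernet", 1000)]:
    print(f"700 MB over {name}: {transfer_seconds(700, mbps):.1f} seconds")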

Audio Connectors: Three of these connectors can be found on an average sound card, and are used to
connect to microphones (usually pink), speakers (usually green), and other audio devices
(usually blue). The external device connector is usually a silver or gold-plated plug that fits into a
round hole.

LESSON 3: Computer Software (Programs that make the computer run)

Introduction

In this lesson we will look at computer software. After looking at the previous lesson you have an idea
as to what computer software is.


Computer software, or just software, is the collection of computer programs and related data that
provide the instructions telling a computer what to do. We can also say software refers to one or
more computer programs and data held in the storage of the computer for some purpose.
Program software performs the function of the program it implements, either by directly providing
instructions to the computer hardware or by serving as input to another piece of software. The
term was coined to contrast with the older term hardware (meaning physical devices). In contrast to
hardware, software is intangible, meaning it "cannot be touched". Software is also sometimes
used in a more narrow sense, meaning application software only. Sometimes the term includes
data that has not traditionally been associated with computers, such as film, tapes, and records.

Examples of computer software include:

 Application software includes end-user applications of computers such as word processors or video
games, and ERP software for groups of users.

 Middleware controls and co-ordinates distributed systems.

 Programming languages define the syntax and semantics of computer programs. For example, many
mature banking applications were written in the COBOL language, originally invented in 1959.
Newer applications are often written in more modern programming languages.

 System software includes operating systems, which govern computing resources. Today, large
applications running on remote machines such as websites are considered to be system
software, because the end-user interface is generally through a graphical user interface, such as
a web browser.

 Testware is software for testing hardware or a software package.


 Firmware is low-level software often stored on electrically programmable memory devices.
Firmware is given its name because it is treated like hardware and run ("executed") by other
software programs.

 Shrinkware is the older name given to consumer-bought software, because it was often sold in
retail stores in a shrink-wrapped box.

 Device drivers control parts of computers such as disk drives, printers, CD drives, or computer
monitors.

 Programming tools help conduct computing tasks in any category listed above. For
programmers, these could be tools for debugging or reverse engineering older legacy systems
in order to check source code compatibility.

History

The first theory about software was proposed by Alan Turing in his 1936 essay "On Computable
Numbers, with an Application to the Entscheidungsproblem" (decision problem). The term "software" was
first used in print by John W. Tukey in 1958. Colloquially, the term is often used to mean application
software. In computer science and software engineering, software is all information processed by
computer systems: programs and data. The academic fields studying software are computer science
and software engineering.


The history of computer software is most often traced back to the first recorded software bug in 1947. As more
and more programs enter the realm of firmware, and the hardware itself becomes smaller, cheaper and
faster due to Moore's law, elements of computing first considered to be software, join the ranks of
hardware. Most hardware companies today have more software programmers on the payroll than
hardware designers, since software tools have automated many tasks of Printed circuit board
engineers. Just like the Auto industry, the Software industry has grown from a few visionaries operating
out of their garage with prototypes. Steve Jobs and Bill Gates were the Henry Ford and Louis Chevrolet
of their times, who capitalized on ideas already commonly known before they started in the business. In
the case of Software development, this moment is generally agreed to be the publication in the 1980s
of the specifications for the IBM Personal Computer published by IBM employee Philip Don Estridge.
Today his move would be seen as a type of crowd-sourcing.

Until that time, software was bundled with the hardware by Original equipment manufacturers (OEMs)
such as Data General, Digital Equipment and IBM. When a customer bought a minicomputer, at that
time the smallest computer on the market, it did not come with pre-installed software; software had
to be installed by engineers employed by the OEM. Computer hardware companies not only
bundled their software, they also placed demands on the location of the hardware in a refrigerated
space called a computer room. Most companies had their software on the books for 0 dollars, unable to
claim it as an asset (this is similar to financing of popular music in those days). When Data General
introduced the Data General Nova, a company called Digidyne wanted to use its RDOS operating
system on its own hardware clone. Data General refused to license their software (which was hard to
do, since it was on the books as a free asset), and claimed their "bundling rights". The Supreme Court
set a precedent called Digidyne v. Data General in 1985. The Supreme Court let a 9th circuit decision
stand, and Data General was eventually forced into licensing the Operating System software because it
was ruled that restricting the license to only DG hardware was an illegal tying arrangement. Soon after,
IBM 'published' its DOS source for free, and Microsoft was born. Unable to sustain the loss from
lawyer's fees, Data General ended up being taken over by EMC Corporation. The Supreme Court
decision made it possible to value software, and also purchase Software patents. The move by IBM
was almost a protest at the time. Few in the industry believed that anyone would profit from it other than
IBM (through free publicity). Microsoft and Apple were able to thus cash in on 'soft' products. It is hard
to imagine today that people once felt that software was worthless without a machine. There are many
successful companies today that sell only software products, though there are still many common
software licensing problems due to the complexity of designs and poor documentation, leading to
patent trolls.

With open software specifications and the possibility of software licensing, new opportunities arose for
software tools that then became the de facto standard, such as DOS for operating systems, but also
various proprietary word processing and spreadsheet programs. In a similar growth pattern, proprietary
development methods became standard Software development methodology.

Overview

A layer structure showing where the operating system sits in the software systems generally used on
desktops

Software includes all the various forms and roles that digitally stored data may have and play in a
computer (or similar system), regardless of whether the data is used as code for a CPU, or other
interpreter, or whether it represents other kinds of information. Software thus encompasses a wide
array of products that may be developed using different techniques such as ordinary programming
languages, scripting languages, microcode, or an FPGA configuration.

The types of software include web pages developed in languages and frameworks like HTML, PHP,
Perl, JSP, ASP.NET, XML, and desktop applications like OpenOffice.org, Microsoft Word developed in
languages like C, C++, Java, C#, or Smalltalk. Application software usually runs on an underlying
operating system such as Linux or Microsoft Windows. Software (or firmware) is also used in
video games and for the configurable parts of the logic systems of automobiles, televisions, and other
consumer electronics.

Computer software is so called to distinguish it from computer hardware, which encompasses the
physical interconnections and devices required to store and execute (or run) the software. At the lowest
level, executable code consists of machine language instructions specific to an individual processor. A
machine language consists of groups of binary values signifying processor instructions that change the
state of the computer from its preceding state. Programs are an ordered sequence of instructions for
changing the state of the computer in a particular sequence. Software is usually written in high-level
programming languages that are easier and more efficient for humans to use (closer to natural
language) than machine language. High-level languages are compiled or interpreted into machine
language object code. Software may also be written in an assembly language, essentially, a mnemonic
representation of a machine language using a natural language alphabet. Assembly language must be
assembled into object code via an assembler.
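
Python offers an easy way to see this layering. Its compiler translates high-level source into bytecode for the Python virtual machine (bytecode rather than true machine language, but the principle is the same), and the standard dis module will disassemble it:

# Disassemble a high-level function into the lower-level
# instructions the Python virtual machine actually executes.
import dis

def add(a, b):
    return a + b

dis.dis(add)   # prints instructions such as LOAD_FAST and a binary add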

Types of software

Practical computer systems divide software systems into three major classes: system
software, programming software and application software, although the distinction is arbitrary and often
blurred.

System software

System software provides the basic functions for computer usage and helps running the computer
hardware and system. It includes a combination of the following:

 device drivers
 operating systems

 servers

 utilities

 window systems

System software is responsible for managing a variety of independent hardware components, so that
they can work together harmoniously. Its purpose is to unburden the application software programmer
from the often complex details of the particular computer being used, including such accessories as
communications devices, printers, device readers, displays and keyboards, and also to partition the
computer's resources such as memory and processor time in a safe and stable manner.

Programming software

Programming software usually provides tools that assist a programmer in writing computer programs and
software in different programming languages in a more convenient way. The tools include:

 compilers
 debuggers

 interpreters

 linkers

 text editors

An Integrated development environment (IDE) is a single application that attempts to manage all these
functions.

Application software

System software is not aimed at any particular application field. In contrast, application software
offers different functions depending on the users and the areas it serves. Application software is
developed for a certain purpose, and can be either a single program or a collection of programs, such
as a web browser or a database management system. Application software allows end users to
accomplish one or more specific (not directly computer development related) tasks. Typical applications
include:

 industrial automation
 business software

 video games

 quantum chemistry and solid state physics software

 telecommunications (i.e., the Internet and everything that flows on it)

 databases

 educational software

 mathematical software

 medical software

 molecular modeling software

 image editing

 spreadsheet

 simulation software

 word processing

 decision-making software

Application software exists for and has impacted a wide variety of topics.

Software topics

Architecture

Users often see things differently than programmers. People who use modern general purpose
computers (as opposed to embedded systems, analog computers and supercomputers) usually see
three layers of software performing a variety of tasks: platform, application, and user software.

 Platform software: Platform includes the firmware, device drivers, an operating system, and
typically a graphical user interface which, in total, allow a user to interact with the computer and
its peripherals (associated equipment). Platform software often comes bundled with the
computer. On a PC you will usually have the ability to change the platform software.
 Application software: Application software or Applications are what most people think of when
they think of software. Typical examples include office suites and video games. Application
software is often purchased separately from computer hardware. Sometimes applications are
bundled with the computer, but that does not change the fact that they run as independent
applications. Applications are usually independent programs from the operating system, though
they are often tailored for specific platforms. Most users think of compilers, databases, and
other "system software" as applications.

 User-written software: End-user development tailors systems to meet users' specific needs.
User software includes spreadsheet templates and word processor templates. Even email filters
are a kind of user software. Users create this software themselves and often overlook how
important it is. Depending on how competently the user-written software has been integrated
into default application packages, many users may not be aware of the distinction between the
original packages and what has been added by co-workers.

Documentation

Most software has software documentation so that the end user can understand the program, what it
does, and how to use it. Without clear documentation, software can be hard to use—especially if it is
very specialized and relatively complex like Photoshop or AutoCAD.

Developer documentation may also exist, either with the code as comments and/or as separate files,
detailing how the program works and how it can be modified.

Library

An executable program is almost never sufficiently complete on its own. Software libraries include
collections of functions and functionality that may be embedded in other applications. Operating
systems include many standard software libraries, and applications are often distributed with their own
libraries.

Standard

Since software can be designed using many different programming languages and on many different
operating systems and operating environments, software standards are needed so that different software
can understand and exchange information with each other. For instance, an email sent from
Microsoft Outlook should be readable in Yahoo! Mail and vice versa.

Execution

Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory).
Once the software has loaded, the computer is able to execute the software. This involves passing
instructions from the application software, through the system software, to the hardware which
ultimately receives the instruction as machine code. Each instruction causes the computer to carry out
an operation – moving data, carrying out a computation, or altering the control flow of instructions.

Data movement is typically from one place in memory to another. Sometimes it involves moving data
between memory and registers which enable high-speed data access in the CPU. Moving data,
especially large amounts of it, can be costly. So, this is sometimes avoided by using "pointers" to data
instead. Computations include simple operations such as incrementing the value of a variable data
element. More complex computations may involve many operations and data elements together.
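
That last point can be seen directly in Python, which uses references where lower-level languages use pointers. A toy sketch: copying a large block of data is costly, while passing a reference to it moves nothing.

# Copying data versus passing a "pointer" (reference) to it.
data = list(range(1_000_000))   # a large block of data in memory

full_copy = list(data)    # data movement: a new million-element copy
alias = data              # a reference: another name for the same block

print(alias is data)      # True  - no data was moved
print(full_copy is data)  # False - every element was copied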

Quality and reliability

Software quality is very important, especially for commercial and system software like Microsoft Office,
Microsoft Windows and Linux. If software is faulty (buggy), it can delete a person's work, crash the
computer and do other unexpected things. Faults and errors are called "bugs." Many bugs are
discovered and eliminated (debugged) through software testing. However, software testing rarely – if
ever – eliminates every bug; some programmers say that "every program has at least one more bug"
(Lubarsky's Law). All major software companies, such as Microsoft, Novell and Sun Microsystems,
have their own software testing departments with the specific goal of just testing. Software can be
tested through unit testing, regression testing and other methods, which are done manually, or most
commonly, automatically, since the amount of code to be tested can be quite large. For instance, NASA
has extremely rigorous software testing procedures for many operating systems and communication
functions. Many NASA operations interact and identify each other through command programs;
this enables the many people who work at NASA to check and evaluate functional systems
overall. Programs containing command software enable hardware engineering and system
operations to function together much more easily.
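
Unit testing checks one small piece of a program at a time. Below is a minimal sketch with Python's standard unittest module; the area function is a made-up example, not code from any product mentioned above.

# A tiny unit test: verify one function in isolation.
import unittest

def area(width, height):
    return width * height

class TestArea(unittest.TestCase):
    def test_simple_rectangle(self):
        self.assertEqual(area(3, 4), 12)

    def test_zero_width(self):
        self.assertEqual(area(0, 7), 0)

if __name__ == "__main__":
    unittest.main()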

License

The software's license gives the user the right to use the software in the licensed environment. Some
software comes with the license when purchased off the shelf, or an OEM license when bundled with
hardware. Other software comes with a free software license, granting the recipient the rights to modify
and redistribute the software. Software can also be in the form of freeware or shareware.

Patents

Software can be patented; however, software patents can be controversial in the software industry with
many people holding different views about them. The controversy over software patents is that a specific
algorithm or technique embodied in the software may not be duplicated by others, because it is
considered intellectual property; duplicating it can amount to infringement.


Design and implementation

Design and implementation of software varies depending on the complexity of the software. For
instance, design and creation of Microsoft Word software will take much more time than designing and
developing Microsoft Notepad because of the difference in functionalities in each one.

Software is usually designed and created (coded/written/programmed) in integrated development
environments (IDEs) like Eclipse, Emacs and Microsoft Visual Studio that can simplify the process and
compile the program. As noted in a different section, software is usually created on top of existing
software and the application programming interface (API) that the underlying software provides like
GTK+, JavaBeans or Swing. Libraries (APIs) are categorized for different purposes. For instance,
JavaBeans library is used for designing enterprise applications, Windows Forms library is used for
designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication
Foundation is used for designing web services. Underlying computer programming concepts like
quicksort, hashtable, array, and binary tree can be useful in creating software. When a program is
designed, it relies on the API. For instance, if a user is designing a Microsoft Windows desktop
application, he or she might use the .NET Windows Forms library to design the desktop application and
call its APIs like Form1.Close() and Form1.Show() to close or open the application, writing only the
additional operations it needs to have. Without these APIs, the programmer would need to write that
functionality from scratch. Companies like Sun Microsystems, Novell, and Microsoft provide their
own APIs so that many applications are written using their software libraries, which usually have
numerous APIs in them.
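
The same pattern appears in any language with a GUI library. A minimal sketch using Python's standard tkinter module (analogous to the Windows Forms calls above, not the same API): the programmer calls functions the library already provides rather than writing window management from scratch.

# Calling a GUI library's APIs instead of writing them yourself.
import tkinter as tk

window = tk.Tk()                    # the library creates a desktop window
window.title("API example")         # configure it through another API call
tk.Button(window, text="Close",
          command=window.destroy).pack()   # destroy() closes the window
window.mainloop()                   # hand control to the library's event loop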

Computer software has special economic characteristics that make its design, creation, and distribution
different from most other economic goods.[7][8] A person who creates software is called a programmer,
software engineer, software developer, or code monkey, terms that all have a similar meaning.

Industry and organizations

A great variety of software companies and programmers in the world make up the software industry.
Software can be quite a profitable industry: Bill Gates, the founder of Microsoft was the richest person
in the world in 2009 largely by selling the Microsoft Windows and Microsoft Office software products.
The same goes for Larry Ellison, largely through his Oracle database software. Over time the
software industry has become increasingly specialized.

Non-profit software organizations include the Free Software Foundation, GNU Project and Mozilla
Foundation. Software standards organizations like the W3C and IETF develop software standards so that
most software can interoperate through standards such as XML, HTML, HTTP or FTP.

Other well-known large software companies include Novell, SAP, Symantec, Adobe Systems, and
Corel, while small companies often provide innovation.

UNIT 2. THE INFORMATION PROCESSING CYCLE

INTRODUCTION
This unit will teach you about the information processing cycle, its definition and the different stages
that make up the cycle. We will also learn about the different components that are used in each and
every stage of the cycle.

OBJECTIVES

At the end of this unit you should be able to:


 Define the Information Processing Cycle
 Know the components used in the Input and Output stages
 Understand the difference between input and output processes
 Understand how data is input to a computer system, and differentiate among various input equipment
 Appreciate the user relationship with computer input and output
 List the different methods of computer output
 Differentiate among different kinds of printers
 Appreciate the large variety of input and output options available in the marketplace
 Describe how a monitor works, and know the characteristics that determine quality
 Describe the Processing Stage of the Information Processing Cycle
 Know the components that are used in the Processing Stage
 Understand the measures of computer processing speed and approaches that increase speed
 List the benefits of secondary storage
 Differentiate the principal types of secondary storage: magnetic disk, optical disk, and magnetic tape
 Describe how data is stored on a disk
 Identify storage media available for personal computers

Lesson 1. The Definition of the Information Processing Cycle (IPC) and the Input and Processing Stages

Definition

Information Processing Cycle is the sequence of events in processing information, which includes (1)
input, (2) processing, (3) storage and (4) output. This means the path that data must follow to become
useful information.

Data is defined as raw information, or a collection of facts which have not been processed yet, while
information is the processed data that has passed through the Information Processing Cycle (IPC). It is
meaningful and allows an organisation to make decisions and solve problems.

Types of Input

Data means the raw facts given to the computer.



Programs are the sets of instructions that direct the computer.

Commands are special codes or key words that the user inputs to perform a task, like RUN
"ACCOUNTS". These can be selected from a menu of commands like "Open" on the File menu.
They may also be chosen by clicking on a command button.

User response is the user's answer to the computer's question, such as choosing OK, YES, or
NO or by typing in text, for example the name of a file.

Keyboard
The first input device we will look at is the keyboard. The keyboard you are using may not match the
ones described here: several variations are popular, and special designs are used in some companies.
Some keyboards put the function keys in different places. The Enter and Backspace keys come in
different shapes and sizes. Some keyboards have arrow keys while others don't. It's enough to confuse
a person's fingers!

The backslash key has at least 3 popular placements: at the end of the numbers
row, above the Enter key, and beside the Enter key. We also have the Windows
keyboards which have two extra keys. One pops up the Start Menu and the other
displays the right-click context sensitive menu. Ergonomic keyboards even have a different shape,
curved to fit the natural fall of the wrists.
Mouse - A ball underneath rolls as the mouse moves across the mouse pad. The cursor on the screen
follows the motion of the mouse. Buttons on the mouse can be clicked or double-clicked to perform
tasks, like selecting an icon on the screen or opening the selected document.

Many recent mice have a scroll wheel as the middle button.

There are new mice that don't have a ball at all. They use a laser to
sense the motion of the mouse instead. High tech!


Advantage: Moves the cursor around the screen faster than using keystrokes.
Disadvantage: Requires moving a hand from keyboard to mouse and back.
Repeated motion can lead to carpal tunnel syndrome.


Trackball - Instead of moving the whole mouse around, the user rolls the trackball only, which is on
the top or side.
Advantage: Does not need as much desk space as a mouse.
Is not as tiring since less motion is needed.
Disadvantage: Requires fine control of the ball with just one finger or thumb.
Repeated motion of the same muscles is tiring and can cause carpal tunnel syndrome.
Glidepad - Uses a touch-sensitive pad for controlling the cursor. The user slides a finger across the
pad and the cursor follows the finger movement. For clicking there are buttons, or you can tap on the
pad with a finger. The glidepad is a popular alternate pointing device for laptops.
Advantage: Does not need as much desk space as a mouse.
Can readily be built into the keyboard.
Has finer resolution; that is, to achieve the same cursor movement onscreen takes less movement of
the finger on the glidepad than it does mouse movement.
Can use either buttons or taps of the pad for clicking.
Disadvantage: The hand tires faster than with a mouse since there is no support.
Some people don't find the motion as natural as a mouse.
Game Devices - Cursor motion controlled by a vertical stick (joystick) or arrow buttons (gamepad).
Advantage: A joystick gives a more natural-feeling control for motion in games, especially those where
you are flying a plane or spaceship.
Both have more buttons for special functions than a mouse and can combine buttons for even more actions.
Disadvantage: More expensive.
Bulky.
Better ones require an additional peripheral card for best performance.


Pen Input - Used especially in Personal Digital Assistants (PDAs). The pen is also called a stylus.
Advantage: Can use handwriting instead of typing.
Can use gestures instead of typing commands.
Small size.
Disadvantage: Must train the device to recognize your handwriting.
Must learn gestures or train the device to recognize the ones you create.
Can lose the pen, which is not usually attached to the device.

Touchscreen - Make a selection by just touching the screen.
Advantage: It's natural to do - reach out and touch something.
Disadvantage: It's tiring if many choices must be made.
It takes a lot of screen space for each choice since fingers are bigger than cursors.

Digitizers and Graphics Tablets - Convert drawings, photos, etc. to a digital signal. The tablets have
special commands.
Advantage: Don't have to redraw graphics already created.
Most find it easier to draw with a stylus than with a mouse.
Disadvantage: Expensive.
A terminal consists of a keyboard and a screen so it can be considered an input device, especially
some of the specialized types.

Some come as single units.

Terminals are also called:

 Display Terminals


 Video Display Terminals or VDT

A dumb terminal has no ability to process or store data. It is linked to a minicomputer, mainframe, or
supercomputer. The keyboard and viewing screen may be a single piece of equipment.

An intelligent, smart, or programmable terminal can process or store on its own, at least to a limited
extent. PCs can be used as smart terminals.

A point-of-sale terminal (POS) is an example of a special-purpose terminal. These have replaced the
old cash registers in nearly all retail stores. They can update inventory while calculating the sale.
They often have special-purpose keys. For example, many restaurants have separate touchpads for each
food item available.

Multimedia is a combination of sound and images with text and graphics. This would include
movies, animations, music, people talking, sound effects like the roar of a crowd and smashing
glass.

Sound Input

Recording sounds for your computer requires special equipment. Microphones can capture
sounds from the air which is good for sound effects or voices. For music the best results come
from using a musical instrument that is connected directly to the computer. Software can
combine music recorded at different times. You could be a music group all by yourself - singing
and playing all the parts!

Voice Input

Voice input systems are now becoming available at the local retail level. You must be careful to
get the right system or you'll be very disappointed.

Decide first what you want to do since a voice input program may not do all of
these:
Data entry - Talking data into the computer when your hands and eyes are
busy should certainly be more efficient. You'd have to be very
careful about your pronunciation!
Command and control - Telling the computer what to do instead of typing commands, like
saying "Save file". Be careful here, too. The dictionary of
understood words does not include some of the more "forceful"
ones.
Speaker recognition - Security measures can require you to speak a special phrase.

The computer must recognize your voice to let you in.


Speech to text - Translating spoken words direct to type would suit some authors
just fine. You'd have to watch out for those "difficult to translate"
phrases like "hmmm" and "ah, well, ... ummm."
A number of companies are now using speech recognition in their telephone systems. For
example to find out what your bank account balance is, instead of punching in your account
number on the phone keypad and choosing option 3 for current balance, you could speak your
account number and say "Current balance". The computer will even talk back and tell you what it
thinks you said so you can make corrections. Wow!

How do they change voice to data?

1. Convert voice sound waves to digital form (digital signal processing - DSP)
2. Compare digitized voice input to stored templates
3. Check grammar rules to figure out words
4. Present unrecognized words for the user to identify

Types of Voice Recognition systems

Speaker dependent system - The software must be trained to recognize each word by each individual
user. This might take hours of talking the dictionary into the computer, to be optimistic.
Speaker independent system - The software recognizes words from most speakers with no training. It
uses templates. A strong accent would defeat the system, however.
Discrete speech recognition - The speaker must pause between words for the computer to tell when a
word stops.
Continuous speech recognition - The speaker may use normal conversational flow.
Natural language - The speaker could say to the computer "How soon can we ship a dozen of product
#25 in blue to Nashville?" - and get an answer!!

Science fiction has come to life!!!

Video Input

A digital camera takes still photos but records the pictures on computer disks or memory
chips. The information contained can be uploaded to a computer for viewing.

A video camera or recorder (VCR) can record data that can be uploaded to the computer with the right
hardware. Though it is not digital data, you can still get good results with the right software.

Both of these take huge amounts of storage. Photos make for very large files.

A web cam is a tiny video camera designed especially to sit on your computer. It
feeds pictures directly to the computer - no tape or film to develop. Of course you
are limited by the length of the cable that connects the camera to the computer. But
like any camera, it will take a picture of what you point it at!

So what do people do with a web cam? They use it for video conferencing over the
Internet. They show the world what's going on outside their window (weather, traffic). They take digital
pictures and make movies- family, pets, snow storms, birthday parties, whatever.

The first goal of data automation is to avoid mistakes in data entry by making the initial entering of the
data as automatic as possible. Different situations require different methods and equipment.

A second goal of data automation is to avoid having to re-enter data to perform a different task with it.

For example, the old style cash register would add up your purchase and calculate the tax. The clerk
entered the amounts by hand (the data entry part). Later the numbers off the store copy of the cash
register tapes would have to be added up manually, or entered into a computer program (another data
entry task). For an up-to-date inventory someone would have to go count all the things on the shelves
(a third data entry task).

With modern data automation, using bar codes on every item in the store, a computer check-out
register along with a bar code scanner will calculate the sale plus transfer the information directly to
the computer that does the store bookkeeping plus adjust the inventory records by subtracting the
items just sold. The human errors possible at each step of data entry are now avoided. Of course,
there are still ways for errors to occur, just not as many. In addition, a new feature is available with
computerized cash registers - a receipt that states the name of the item bought as well as the price.

General Devices

Scanner - The scanner works like a copy machine. It creates a digital image of what it scanned.
Scanned text cannot be edited at this point.

Flat bed scanners open wide enough to allow you to lay a document or book flat on the glass surface.
You can even make a scan of your hand!

A document scanner can only scan individual sheets of paper, not books or objects.


Bar-Code Scanner - Hand-held or fixed devices that can read the bar codes on packages.

Credit Card Reader - Swipe the credit card through the device, which reads the magnetically encoded
numbers in the magnetic strip on the card.

Special types of characters read with special devices

Bar Codes - Retail shops now use printed bar codes on products to track inventory and calculate the
sale at the checkout counter. The US Post Office uses bar codes to sort mail, but the bars are
different from those used for pricing products.

Optical Marks - A special machine "reads" the marks (example: test scoring). Woe to the student who
takes a test with this kind of score sheet and doesn't get those bubbles colored in correctly!

Magnetic Ink - A bank account # is printed in special ink with magnetic qualities which can be read
by the right machine.

Magnetic Strip - The back of a credit card has a magnetic strip that contains magnetically encoded
numbers. A credit card reader can read the numbers and transmit them to a computer to verify that
the card is good.

Optical Characters - There are coding systems that use letters or special characters that are
especially shaped to be easy for machines to read.

RFID - Radio Frequency Identification uses special tags that contain chips which are programmed with
information. An RFID reader device sends a signal to the chip, which makes the tag send a short-range
radio signal with the information. These tags have a wide variety of uses, with more being added
every day.
Examples:

 shipping containers - what's in the container, where it came from, where it is going
 surgery equipment - to get a count of items to make sure none are still in the patient
 patient ID bracelet - patient name, medical info
 pet ID tag - pet's name, medical info, owner's name and contact info
 library book - book ID for automatically checking books out and back in
 product on store shelf - auto-scan at checkout, out-door scanning for theft, auto-update of inventory

OCR software

Optical Character Recognition: This software takes a scanned image and converts the characters in the
image into computer characters. The document can then be edited with a word processor. This is a
tricky process, so documents must be checked carefully for wrong conversions; if the original print
was not crisp and clean, errors are very likely. These programs are getting really good if they have
a clear scan to work with.

A famous slogan in computing sums up the importance of accurate data:

GIGO = Garbage In, Garbage Out

Conclusions are no better than the data they are based on.

Checking for Accuracy


A major task for any program that accepts data is to try to guarantee the accuracy of the input.
Some kinds of errors cannot be caught but many of the most common kinds of mistakes can
be spotted by a well-designed program.

A program should attempt to do the following:

1. Test data type and format
   ex. 2/a/96 is not a date.
   ex. If a phone number should have exactly 10 digits with the area code, then 555-123 is not acceptable.

2. Test data reasonableness
   ex. 231 should not be a person's age, at least not since the time of Noah and the Flood.
   ex. A sale of $50,000 worth of chewing gum at the corner market is probably missing a decimal point somewhere!

3. Test data consistency
   ex. A man's death date should be later than his birth date!
   ex. The sum of the monthly paychecks should be the same as the total paid for the year.

4. Test for transcription and transposition errors
   ex. Typing 7754 instead of 7154 is a transcription error, typing the wrong character.
   ex. Typing 7754 instead of 7745 is a transposition error, interchanging two correct characters.
   Both are very hard to check for.
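The first three checks above can be sketched in a few lines of code. The following Python fragment
is only an illustration - the field names and limits are invented, not taken from any real program:

# A minimal sketch of input-accuracy checks (illustrative names and limits).
import re

def validate(record):
    errors = []
    # 1. Type and format: a phone number must have exactly 10 digits
    if not re.fullmatch(r"\d{10}", record["phone"]):
        errors.append("phone must have exactly 10 digits")
    # 2. Reasonableness: 231 should not be a person's age
    if not 0 <= record["age"] <= 120:
        errors.append("age is not reasonable")
    # 3. Consistency: a death date should be later than a birth date
    if record["death_year"] is not None and record["death_year"] < record["birth_year"]:
        errors.append("death date is earlier than birth date")
    return errors

print(validate({"phone": "555123", "age": 231,
                "birth_year": 1950, "death_year": None}))
# ['phone must have exactly 10 digits', 'age is not reasonable']

Transcription and transposition errors (check 4) slip past tests like these, which is exactly why the
text calls them very hard to check for.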

What is processing?

Processing is the thinking that the computer does - the calculations, comparisons, and decisions.
People also process data. What you see and hear and touch and feel is input. Then you connect this
new input with what you already know, look for how it all fits together, and come up with a reaction, your
output. "That stove is hot. I'll move my hand now!"

The kind of "thinking" that computers do is very different from what people do.

Machines have to think the hard way. They do one thing at a time, one step at a time. Complex
procedures must be broken down into VERY simple steps. Then these steps can be repeated hundreds
or thousands or millions of times. All possible choices can be tried and a list kept of what worked and
what didn't.

People, on the other hand, are better at recognizing patterns than they are at single facts and
step-by-step procedures. For example, faces are very complex structures. But you can identify
hundreds and even thousands of different faces with just a glance.


A human can easily tell one face from another, even when the faces belong to strangers. You don't
recognize Mom's face because you remember that Mom's nose is 4 cm long, 2.5 cm wide, and has a
freckle on the left side! You recognize the whole pattern of Mom's face. There are probably a lot of folks
with noses the size and shape of Mom's. But no one has her whole face.

But a computer must have a lot of specific facts about a face to recognize it. Teaching computers to
pick Mom's face out of a crowd is one of the hardest things scientists have tried to do yet with
computers. But babies do it naturally!

So computers can't think in the same way that people do. But what they do, they do excellently well and
very, very fast.

Modern computers are digital, that is, all info is stored as a string of zeros or ones - off or on. All the
thinking in the computer is done by manipulating these digits. The concept is simple, but working it all
out gets complicated.

1 bit = one on or off position
1 byte = 8 bits

So 1 byte can be one of 256 possible combinations of 0 and 1.


Numbers written with just 0 and 1 are called binary numbers.

Each digit is a power of 2, so the binary number 10101100 represents:

= 2^7 + 0 + 2^5 + 0 + 2^3 + 2^2 + 0 + 0
= 128 + 0 + 32 + 0 + 8 + 4 + 0 + 0
= 172

Every command and every input is converted into digital data, a string of 0's and 1's.

For more information on binary numbers, see Base Arithmetic.
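You can check this arithmetic yourself. A short Python fragment (illustrative only) converts the same
binary number both digit by digit and with the language's built-in base-2 conversion:

# Converting the binary number 10101100 to decimal.
bits = "10101100"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)          # 172 - each digit times its power of 2
print(int(bits, 2))   # 172 - Python's built-in base-2 conversion agrees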


Digital Codes

All letters, numbers, and symbols are assigned code values of 1's and 0's. A number of different digital
coding schemes are used by digital devices.
Three common code sets are:

 ASCII (used in UNIX and DOS/Windows-based computers)
 EBCDIC (for IBM System 390 mainframes)
 Unicode (for Windows NT and recent browsers)

The ASCII code set uses 7 bits per character, allowing 128 different characters. This is enough for the
English alphabet in upper case and lower case, the symbols on a regular English typewriter, and some
combinations reserved for internal use.

An extended ASCII code set uses 8 bits per character, which adds another 128 possible characters.
This larger code set allows for foreign language symbols, like letters with accents, and several
graphical symbols.

ASCII has been superseded by other coding schemes in modern computing. But it is still used for
transferring plain text data between different programs or computers that use different coding schemes.


If you're curious to see the table of ASCII and EBCDIC codes, see Character Codes.
Unicode uses 16 bits per character, so it takes twice the storage space that ASCII coding, for
example, would take for the same characters. But Unicode can handle many more characters. The goal
of Unicode is to represent every element used in every script for writing every language on the planet.
Whew! Quite a task!

Version 5 of Unicode has codes for over 107,000 characters instead of the wimpy few hundred for ASCII
and EBCDIC. Ninety different scripts can be displayed with Unicode (if your computer has the fonts
needed), including special punctuation and symbols for math and geometry. (Some languages have
more than one script, like Japanese, which uses three scripts: Kanji, Hiragana, and Katakana.) English
and the European languages like Spanish, French, and German use the Latin script. Cyrillic is used by
several languages including Russian, Bulgarian, and Serbian.
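A small Python sketch (illustrative only) shows character codes at work: ord() gives the code value
of a character, chr() goes the other way, and encoding the same character with different schemes
uses different numbers of bytes, just as described above:

# Character codes in practice.
print(ord("A"))                 # 65 - the ASCII (and Unicode) code for 'A'
print(chr(65))                  # 'A' - back from code value to character

print("A".encode("ascii"))      # b'A'     - 1 byte in ASCII
print("A".encode("utf-16-be"))  # b'\x00A' - 2 bytes in 16-bit Unicode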

At the Unicode site you can view sections of the Unicode code charts. The complete list is far too
long to put on one page! Click on the name of a script (red letters on gray) to see a PDF chart of the
characters. View the charts for scripts you never heard of.


Parity

With all these 0's and 1's, it would be easy for the computer to make a mistake! Parity is a clever way to
check for errors that might occur during processing.

In an even parity system an extra bit (making a total of 9 bits) is assigned to be on or off so as to make
the number of on bits even. So in our example above 10101100 there are 4 on bits (the four 1's). So the
9th bit, the parity bit, will be 0 since we already have an even number of on bits.

In an odd parity system the number of on bits would have to be odd. For our example number
10101100, there are 4 on bits (the 1's), so the parity bit is set to on, that is 1, to make a total of 5 on bits,
an odd number.

If the number of on bits is wrong, an error has occurred. You won't know which digit or digits are wrong,
but the computer will at least know that a mistake occurred.
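A tiny Python function (a sketch of the idea, not of how the hardware actually wires it) computes the
parity bit for the example byte above:

# Computing the parity bit for an 8-bit value.
def parity_bit(byte_str, even=True):
    ones = byte_str.count("1")             # count the "on" bits
    if even:
        return 0 if ones % 2 == 0 else 1   # make the total number of on bits even
    return 1 if ones % 2 == 0 else 0       # make the total number of on bits odd

print(parity_bit("10101100", even=True))   # 0 - already 4 on bits, an even number
print(parity_bit("10101100", even=False))  # 1 - a fifth on bit makes the total odd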

Memory chips that store your data can be parity chips or non-parity chips. Mixing them together can
cause odd failures that are hard to track down.


The CPU, or Central Processing Unit, is the part of the computer where work gets done. In most
computers, there is one processing chip.

Control Unit

This is the part of the computer that controls the Machine Cycle. It takes numerous cycles to do even a
simple addition of two numbers.

ALU

ALU stands for Arithmetic/Logic Unit. This is the part that executes the computer's commands.

A command must be either a basic arithmetic operation:
+ - * /  (add, subtract, multiply, divide)

or one of the logical comparisons:
> < = not=  (greater than, less than, equal to, not equal to)

Everything else has to be broken down into these few operations. Only one operation is done in each
Machine Cycle.

The ALU can only do one thing at a time but can work very, very fast.

Operating System

These are the instructions that the computer uses to tell itself how it "operates". It's the answer to
"Who am I and what can I do?"

Some common operating systems are DOS, various versions of Windows, OS/2, UNIX, LINUX, and Mac
OS X. These all behave in very different ways and have different hardware requirements, so they won't
all run on all machines.

Only the parts of the operating system that are currently being used will be loaded into Main Memory.

Applications

These are the various programs that are currently running on the computer.

By taking turns with the Machine Cycle, modern computers can have several different programs
running at once. This is called multi-tasking.

Each open application has to have some data stored in Main Memory, even if the application is on rest
break and is just sitting there. Some programs (graphics programs are notorious for this) require a lot of
the Main Memory space, and may not give it up even if they are shut down! Rather rude, actually!!

Input/Output Storage


When you enter new data, the keystrokes must be stored until the computer can do something with the
new data.

When you want data printed out or displayed, it must be stored somewhere handy first.

Working Storage

The numbers and characters that are the intermediate results of computer operations must be stored
until the final values are calculated. These values "in progress" are kept in temporary locations.

For example, if the computer is adding up the numbers 3, 5, and 6, it would first add 3 to 5 which yields
a value of 8. The 8 is stored in working storage. Then the 8 and 6 are added and the new value 14 is
stored. The value of 14 is now available to be displayed on the screen or to be printed or to be used in
another calculation.

Unused Storage

One hopes that there is always some storage space that is not in use.

If space runs out in Main Memory, the computer will crash, that is, stop working.

There are programs that sense when space is getting short and warn the user. The user could then
close some of the open applications to free up more space in Main Memory. Sometimes the warning is
too late to prevent the crash. Remember that all the data in Main Memory vanishes when the power
goes off. Thus a crash can mean a lot of lost work.

The computer can only do one thing at a time. Each action must be broken down into the most basic
steps. One round of steps from getting an instruction back to getting the next instruction is called the
Machine Cycle.

The Machine Cycle

Fetch - get an instruction from Main Memory

Decode - translate it into computer commands

Execute - actually process the command

Store - write the result to Main Memory


For example, to add the numbers 5 and 6 and show the answer on the screen requires the
following steps:
1. Fetch instruction: "Get number at address 123456"

2. Decode instruction.

3. Execute: ALU finds the number. (which happens to be 5)

4. Store: The number 5 is stored in a temporary spot in Main Memory.

5 - 8 Repeat steps for another number (= 6)

9. Fetch instruction: "Add those two numbers"

10. Decode instruction.

11. Execute: ALU adds the numbers.

12. Store: The answer is stored in a temporary spot.

13. Fetch instruction: "Display answer on screen."

14. Decode instruction.

15. Execute: Display answer on screen.
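The flavor of this loop can be captured in a toy simulation. The Python sketch below is purely
illustrative - the instruction names and memory addresses are invented, and a real CPU works in
hardware, not in a dictionary:

# A toy fetch-decode-execute-store loop adding 5 and 6.
memory = {123456: 5, 123457: 6}    # Main Memory holding the two numbers
program = [("LOAD", 123456), ("LOAD", 123457), ("ADD", None), ("SHOW", None)]
temp = []                          # temporary (working) storage

for instruction in program:        # Fetch: get the next instruction
    op, address = instruction      # Decode: split it into its parts
    if op == "LOAD":               # Execute and Store:
        temp.append(memory[address])     # copy a number into working storage
    elif op == "ADD":
        temp = [temp[0] + temp[1]]       # the "ALU" adds; result stored back
    elif op == "SHOW":
        print(temp[0])                   # 11 - display the answer on screen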

Speed
The immense speed of the computer enables it to do millions of such steps in a second. In fact, MIPS,
standing for millions of instructions per second, is one way to measure computer speeds.

We need a method of naming the places where Main Memory stores data. Each location needs a unique
name, just like houses in a town need a unique street address.

Rather than a street name and house number, memory addresses are just numbers.

A memory address holds 1 byte of data where

1 bit = 0 or 1, on or off
1 byte = 8 bits
1 kilobyte (K or KB) = 1024 bytes
1 megabyte (MB) = 1024 kilobytes

You might wonder why 1024 instead of 1000 bytes per kilobyte. That is because computers don't count
by tens like people. Computers count by twos and powers of 2. 1024 is 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x
2, that is 2 times itself ten times. It's a rather convenient size number (for computers!).
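The powers of two are easy to verify in a couple of lines (illustrative only):

# The powers of two behind the storage units.
print(2**10)        # 1024    - bytes in a kilobyte
print(2**20)        # 1048576 - bytes in a megabyte (1024 x 1024)
print(1024 * 1024)  # 1048576 - the same number, written the long way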

Update: Things are changing faster than I can type! The explanation above is no longer
entirely true (July 2000). Different scientific and technical areas are using the words
differently. For data storage devices and telecommunications a megabyte is 1 000 000
bytes. For data transmission in LANs a megabyte is 1 048 576 bytes as described above.
But for data storage on a floppy disk a megabyte is 1 024 000 bytes!

We all are impatient and want our computer to work as fast as possible, and certainly faster than the
guy's at the next desk!
Many different factors determine how fast your computer gets things done. Processor speed is one
factor. But what determines the processor's speed?

Processor Speed is affected by:

System clock rate = the rate of an electronic pulse used to synchronize processing. (Only one action
can take place between pulses.) Measured in megahertz (MHz), where 1 MHz = 1 million cycles per
second, or gigahertz (GHz), where 1 GHz = 1 billion cycles per second. This is what they are talking
about if they say a computer is a 2.4 GHz machine: its clock rate is 2.4 billion cycles per second.
Bigger number = faster processing

Bus width = the amount of data the CPU can transmit at a time to main memory and to input and
output devices. (Any path bits travel is a bus.) An 8-bit bus moves 8 bits of data at a time. Bus width
can be 8, 16, 32, 64, or 128 so far. Think of it as "How many passengers (bits) can fit on the bus at
once to go from one part of the computer to another."
Bigger number = faster transfer of data

Word size = the amount of data the CPU can process at one time. An 8-bit processor can manipulate
8 bits at a time. Processors can be 8-, 16-, 32-, or 64-bit so far.
Bigger number = faster processing

You want a nice match between the word size, the bus size, and the clock. It wouldn't do any good to
have a bus that can deliver data 128 bits at a time if the CPU can only use 8 bits at a time and has a
slow clock speed. A huge line of data would form, waiting to get off the bus! When a computer gets
clogged like that, bad things can happen to your data. It's like people waiting to get into the theater.
After a while, some of them may leave!!

There are several physical components of a computer that are directly involved in processing. The
processor chip itself, the memory devices, and the motherboard are the main ones.


Microprocessor - a single silicon chip containing the CPU, ALU, and some memory.

The ROM (Read Only Memory) contains the minimum instructions that the computer needs to get
started, called booting. What a user does on the computer cannot change what is stored in ROM.

There may also be another chip dedicated to calculations.

The microprocessor chip is located on a large circuit board called the main board or motherboard.
The physical size of a computer chip is very small.

Processor speed is measured in Megahertz (MHz) or Gigahertz (GHz).

Memory Devices:

Vacuum tube - oldest type. Didn't hold up long and generated a lot of heat.
Core - small metal rings. Magnets tip a ring to left or right, which represents on and off. Relatively slow.
Semiconductor - an integrated circuit on a chip. This is what modern computers use for memory; a
common form is the 72-pin SIMM.

Memory Speed

RAM (Random Access Memory) is what the computer uses as Main Memory. Memory speed measures
the time it takes to move data in or out of memory. It is measured differently for different kinds of
memory chips:

 in nanoseconds (ns) (smaller is faster) for EDO and FPM, where 1 ns = 1 billionth of a second
 in megahertz (MHz) (higher is faster) for SDR SDRAM, DDR SDRAM, and RDRAM


The capacity of a memory chip is measured in megabytes or gigabytes. For example, 256 MB of RAM
is required to run Windows XP, and 512 MB is much better. For Windows 7 the requirements are 1
gigabyte (GB) of RAM (32-bit) or 2 GB of RAM (64-bit).

Several such memory boards can be installed in the computer to increase the amount of RAM
available. Motherboards have only so many slots for memory so there are limits. Some motherboards
require that all slots be filled and that all slots contain the same size memory board. It can get
frustrating as there are no warning labels about this!

Motherboard

The motherboard (or main circuit board) is the large board that the CPU, memory, and other
components plug into. Before anything has been plugged in or attached, a motherboard (one suitable
for a Pentium CPU, for example) is just a flat board covered with sockets, slots, and circuits.

What is Output?

Output is data that has been processed into useful form, now called Information.

Types of Output
Hard copy:

printed on paper or other permanent media


Soft copy:

displayed on screen or by other non-permanent means

Categories of Output

 Text documents - including reports, letters, etc.
 Graphics - charts, graphs, pictures
 Multimedia - a combination of text, graphics, video, audio

The most often used means of Output are the printer for hard copy and the computer screen for soft
copy. Let's look at the features of each.

Output: Printer Features

The job of a printer is to put on paper what you see on your monitor. How easy this is to do and how
successfully it is done determines whether or not you are happy with the printer you chose.

Monitor screens and printers do not use the same formatting rules. In the olden days of computers,
the way something looked on the screen could be VERY different from how it would look when printed.

Early word processors didn't have a way to show what the printed version would look like. Now a word
processor that doesn't have print preview, would be laughed off the shelf. Nowadays we expect to see
a WYSIWYG view (What You See Is What You Get), where you see almost exactly what the document
will look like in print, while you are still working on it.

How fast?

The speed of a printer is measured in:

cps = characters per second
lpm = lines per minute
ppm = pages per minute

The faster the printing, the more expensive the printer.

What paper type is used?


Continuous-Form Paper

Advantage: Don't need to put in new paper often

Disadvantage: May need to separate the pages and remove the strips of
perforations from the edges.

Single Sheet

Advantage: Can change to special paper easily, like letterhead or envelopes.

Disadvantage: Must add paper more often.

What print quality?


LQ Letter Quality = as good as best typewriter output

NLQ Near Letter Quality = nearly as good as best typewriter output

Draft used internally or for a test print

The better the quality, the slower the printing.

A more numerical measure of print quality is printer resolution. Measured in dots per inch (dpi), this
determines how smooth a diagonal line is when printed. A resolution of 300 dpi will produce text that
shows jagged edges only under a magnifying glass. A lower resolution than this will produce text with
stair-step edges, especially at large sizes. Even higher resolutions are needed to get smooth photo
reproduction.


Professionals in graphics use 1200 to 2400 dpi printers. Draft quality on such a printer would be 600 dpi.
What will it print?

Printers vary in what varieties of type they can print. You must know the limits of your printer
to avoid unhappy surprises! Modern printers can handle most anything, but older printers may
not. Yes, there are still old, clunky computers and printers in use out there in the real world.
Typeface - A set of letters, numbers, and special characters with a similar design.

Styles - Bold, italic, underlined...

Size - Measured in points, where one point = 1/72 of an inch, like: 12 pt, 18 pt, 24 pt, 36 pt.
Use 10 or 12 pt for writing a letter or report.

Font - A complete set of letters, etc. in the same typeface, style, and size.

Color - Printing in color takes longer, uses more expensive inks/toner, and looks best on more
expensive papers, but can add a lot to the quality of the output.

Graphics - Pictures add a lot to a document, but not all printers can print graphics.

Will it fit?
The footprint, or the physical size of a printer, determines where it can be
placed. You must consider several things:

 Where will you put it? On top of a table or cabinet, or on a shelf, or in a drawer? Is there enough
space for the printer and for the blank paper and the printouts?

  o Blank paper - If the blank paper is in a drawer or tray under the printing mechanism, can you
pull the drawer all the way out? If the paper is in an upright stack on top of the printer, is there
room for your hand and the paper as you put in blank pages?

  o Where does the printed page wind up? On top of the printer or out in front of it?

There must be a good match between the space you need to work with the printer and the spot you
choose to put it! Otherwise, your print-outs may wind up puddled on the floor or you could bash your
knuckles whenever you put in a stack of blank paper.

What kind of cable connection?


Serial cable - Sends data only 1 bit at a time. The printer can be up to 1000 feet away from the
computer. Maximum data transfer speed = 115 kilobits/s (0.115 Mbits/s).

Parallel cable - Sends data 8 bits at a time. The printer must be within 50 feet of the computer.
Maximum data transfer speed = 115 kilobytes/s (0.115 MBytes/s). This is 8 times faster than the
maximum serial speed.

Newer printers may need a bi-directional cable so that the printer can talk back to the computer.
Such a cable is required if the printer can give helpful error messages. It's startling, but nice, the
first time your computer politely says "Ink is getting low" or "Please place paper in the AutoSheet
feeder."

Oddly, Windows XP does not support spooling for a parallel connection to a printer. Spooling is what
allows you to do other things on the computer while the printer is processing and printing the
document. WinXP does spool when the printer uses a USB connection.

USB cable - The printer must be within 5 meters (16.5 feet) of the computer when connecting straight
to the computer. [You can hook up several 5 m cables and USB hubs in a chain - up to 25 meters.]
Maximum data transfer speed = 12 megabits/s (1.5 MBytes/s). Lots faster!

Best choice:

The new USB (Universal Serial Bus) connection is likely your best choice, if your
printer can use it. It is faster and a USB connector can be unplugged and re-plugged
without turning off the system. USB ports are rapidly replacing parallel ports. The
printer cannot handle the data as fast as the USB port can send it. The real limit on
how fast a printer works is in how fast printer can get the characters onto the paper.

Serial cable may have to be used if a printer is shared in a fairly large office, due to
the length of cable needed.

Output: Printer Types

Any of the current types of printers satisfies the work and cost requirements for someone. Each has
strengths and weaknesses. Choose your type of printer based on which of the features previously
discussed are important to your work, then choose the specific printer that best suits both your tasks
and pocketbook.

Impact Printers

With this type of printer something strikes paper & ribbon together to
form a character, like a typewriter.

Advantages: Less expensive.
Fast (some types).
Can make multiple copies with multipart paper.

Disadvantages: Noisy!
Print quality lower in some types.
Poor graphics or none at all.


Types of Impact Printers


Dot Matrix - Forms characters using row(s) of pins (9, 18, or 24) which impact the ribbon on top of
the paper. Also called pin printers. The more pins, the smoother-looking the characters.

Most dot matrix printers have the characteristics below:

Bi-directional - prints left to right and also right to left
Tractor feed - uses sprockets to pull continuous-feed paper
Friction feed - uses pressure to pull single sheets

Advantages: Inexpensive.
Can do multi-copy forms.
Disadvantages: Can be slow.
Loud.
Graphics of low quality, if possible at all.

Chain and Band Printers - Use characters on a band or chain that is moved into place before striking
the characters onto the paper.

Advantages: Very fast - up to 3000 lpm (lines per minute).
Disadvantages: Very expensive.
Very loud.

Non-Impact Printers

This type of printer does not involve actually striking the paper. Instead, it
uses ink spray or toner powder.

Advantages: Quiet!
Can handle graphics and often a wider variety of fonts than impact printers.

Disadvantages: More expensive.
Slower.

Types of Non-Impact Printers

Ink Jet - Sprays ink onto paper to form characters.
Advantages: Quiet. High quality text and graphics. Some can do color.
Disadvantages: Cannot use multiple-copy paper. Ink can smear.

Thermal - Uses heat on chemically treated paper to form characters. Fax machines that use rolls of
paper are also of this type.
Advantages: Quiet.
Disadvantages: Relatively slow. Expensive, requiring special paper. Cannot use multiple-copy paper.

Page Printer - Works like a copy machine, using toner and a heat bar. Laser printers are in this
category.
Advantages: Quiet. Faster than other non-impact printers, from 4 to 16 ppm (pages per minute). High
quality print and graphics. Some can do color.
Disadvantages: More expensive than impact printers. Cannot use multiple-copy paper.

Thus, things to consider when choosing a printer:

 How much output? What speed is needed? Is heavy-duty equipment necessary?
 Quality of output needed? What resolution is needed? Photo quality?
 Location of printer? How big a footprint can be handled? Is loudness important?
 Multiple carbon copies needed?
 Expense of ink or toner? How much does a cartridge cost and how many pages will it produce?
How is that measured? Photo inks are more expensive!
 Number of cartridges? Just one 3-color cartridge, or separate black and color cartridges, or a
cartridge for each color (best, if you can afford it!)?

Output: Screen Features

The device which displays computer output to us has various names:

Screen from "computer screen" or "display screen"

Monitor from its use as a way to "monitor" the progress of a program

VDT = video display terminal from early network terminals

CRT = cathode ray tube from the physical mechanism used for the screen.

VDU = visual display unit to cover all the mechanisms from desktop CRTs to LCD flat
screens on laptops to LED screen on palmtops


Making Colored Pictures

LCD screen

An LCD (Liquid Crystal Display) screen is very flat and thin. LCD displays are made of two layers of a
polarizing material with a liquid crystal solution in between, divided into tiny cells. An electrical
signal to a cell makes the crystals line up in a way that keeps light from going through entirely or
just partly. When the screen is black, all the crystals are lined up so that no light gets through.

To make color, an LCD screen uses 3 colored subcells for each cell: Red, Green, and Blue. This RGB
system can create all the other colors by combining how much of each of these colors you see. The
signal for a picture cleverly lights up just the right subcells in just the right strengths to show the
desired color. Your eye blends the colors in the cells together and you see a picture.

LCD screens used to be hard to see unless you were directly in front of the screen. Recent
developments have fixed this issue.


CRT screen:
A CRT monitor screen uses a cathode ray tube. The screen is coated on the inside surface with dots of
chemicals called phosphors. When a beam of electrons hits a dot, the dot will glow.

On a color monitor these phosphor dots are in groups of three: Red, Green, and Blue. This RGB system
can create all the other colors by combining what dots are aglow.

There are 3 signals that control the 3 electron beams in the monitor, one for each RGB color. Each
beam only touches the dots that the signal tells it to light. All the glowing dots together make the picture
that you see. The human eye blends the dots to "see" all the different colors.

A shadow mask blocks the path of the beams in a way that lets each beam only light its assigned color
dots. (Very cool trick!)

Scan Pattern
There are two patterns used by CRT monitors to cover the whole screen. Both scan across the screen,
in a row 1 pixel high, from left to right, drop down and scan back left. (LCD screens do not use these
methods but display the whole screen at once.)

The non-interlaced pattern scans each row of pixels in turn, from top to bottom. This type is more prone
to flicker if the scan has not started over by the time the phosphor dots have quit glowing from the last
scan. This can make your eyes hurt or even make you nauseous.

The interlaced pattern scans every other row of pixels. So the odd rows are done, then the even rows,
in the same left to right to left way. But since the rows of pixels are very close together, the human
eye doesn't notice as easily if a row has gone dim before it is rescanned. Much friendlier to your eyes
and stomach.
Light vs. Ink
Colors created by glowing dots are not quite the same as those created by ink on the printer. Screens
use the RGB system described above. Inks use the CMYK system using the colors Cyan (a kind of
blue), Magenta (a kind of red), Yellow, and blacK. This is why what you see on your screen is not quite
the same color when you print.

Physics Lesson:

Color from mixing pigments: Ink and paint make colors by the colors that they reflect. The other colors
are absorbed, or subtracted, from the light hitting the object. The primary colors for inks and paints are
traditionally said to be red, yellow, and blue. It is more accurate to say magenta, yellow, and cyan.
These cannot be created by mixing other colors, but mixing them does produce all other colors.

Color from mixing lights: Lights show the colors that the light source sends out (emits). The colors from
different light sources are added together to make the color that you see. A computer screen uses this
process. The primary colors for lights are red, green, and blue-violet. Mixed together, they can produce
all the other colors.

Color from optical mixing: The illusion of color can be created by tricking the eye. Artists of the
Impressionist period created paintings using only dots of color. Newspaper photos are made of dots,
also. The human eye blends the colors to "see" shapes and colors that were not actually drawn with
lines, just suggested by the dots.

[Photo: a water droplet acting as a magnifying glass on a CRT screen shows how three colors of dots
create the overall pink color, due to different strengths of each color and the brain working its
magic. Photo: Julie Radalj]

Screen Features
Size - Desktop screens are usually 15 - 23 in. by diagonal measurement. (This is how TV screens are
measured, too.) Larger sizes are available, at a significantly higher cost. Prices are dropping,
however.

Resolution - Determines how clear and detailed the image is. Pictures on a screen are made up of
tiny dots, where 1 dot on screen = 1 pixel (from "picture element").

The more pixels per inch, the clearer and more detailed the picture.

One measure of this is the dot pitch, the distance between the dots that make up the
picture on the screen. However, different manufacturers measure differently. Most
measure from dot center to the center of the nearest same color dot. Some measure
from the center of a dot to an imaginary vertical line through the center of the nearest
dot of the same color, giving a smaller number for the same dots as the previous
method. Some monitors use skinny rectangles instead of dots and so must use a
different method altogether. So, dot pitch has become less useful as a measure of
monitor quality. A dot pitch of .28 is very common and .26 should be good for nearly all
purposes, however it is measured.

Refresh Rate How often the picture is redrawn on the monitor. If the rate is low, the picture will
appear to flicker. Flicker is not only annoying but also causes eye strain and nausea.
So, a high refresh rate is desirable. 60 times per second is tolerable at low resolutions
for most people. 75 times per second or more is better and is necessary for high
resolutions.

Type - Old types = CGA, EGA, VGA; current type = Super VGA. The type determines what resolutions
are available and how many colors can be displayed.

Type    Stands for                   Resolution(s)
CGA     Color Graphics Adapter       320 x 200
EGA     Enhanced Graphics Adapter    640 x 350
VGA     Video Graphics Array         640 x 480
SVGA    Super VGA                    800 x 600, 1024 x 768, 1280 x 1024, etc.

New systems now come with Super VGA with a picture size of 800 x 600 pixels (as a minimum) and
16 million colors.

Color - The number of colors displayed can vary from 16 to 256 to 64 thousand to 16.7 million. The
more colors, the smoother graphics appear, especially photos.

The number of colors available actually depends more on the video card used and on how much
memory is devoted to the display. It takes 8 bits to describe 1 pixel when using 256 colors. It takes
24 bits per pixel when using 16 million colors. So a LOT of memory is needed to get those millions of
colors. Video cards now come with extra memory chips on them to help handle the load.
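The arithmetic is easy to check. A two-line sketch (illustrative only) computes the display memory
needed for an 800 x 600 screen at the two color depths just mentioned, using 8 bits = 1 byte:

# Display memory needed at two color depths for an 800 x 600 screen.
pixels = 800 * 600
print(pixels * 8 // 8)    # 480000 bytes  - 8 bits per pixel (256 colors)
print(pixels * 24 // 8)   # 1440000 bytes - 24 bits per pixel (16.7 million colors)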

Cursor/Pointer - The symbol showing where you are working on the screen. In the olden days of just
DOS, there were few choices for the cursor. The invention of the blinking cursor was a tremendous
event. Under Windows there are a huge number of basic to fantasy cursors to choose from.

Scrolling Moving the lines displayed on the screen up or down one line at a time

Type of Screens

Monochrome - one color text on a single color background, i.e. white letters on blue, or green
characters on black

Color - various colors can be displayed. (This one is easy!)

CRT - formerly the most common type of monitor, which uses a cathode ray tube

Liquid Crystal Display (LCD) - used especially in laptops. Large LCD monitors are the most common now.

Plasma Screens - used for very large screens and some laptops. Flat, good color, but much more
expensive. These have gone out of style.


Output: Other Devices

Special tasks require special equipment.

There are a number of special-use output devices. More are announced every day. From recording
earthquake tremors to displaying CAT scans, from recording analysis in a sound studio to displaying
metal fatigue in aircraft structures, we have more and more special tasks that use computers and
thus require print or screen display.

Examples:

Data projectors - project the image onto a wall screen.

Microfilm (COM) - Computer Output Microfilm. The computer directly generates the microfilm images.

Large Format Printers - used especially for building plans, engineering drawings, and really large
pictures. Plotters use a pen to draw continuous lines and are favored for engineering drawings, which
require both large sheets of paper and precise lines.

Sound - computers can output voice messages, music, and data as sound. Of course you have to have
speakers and a sound card.

What is Storage?

Storage refers to the media and methods used to keep information available for later use. Some
things will be needed right away while others won't be needed for extended periods of time. So
different methods are appropriate for different uses.

Earlier when learning about processing, we saw all the kinds of things that are stored in Main
Memory.


Main Memory = Primary Storage

Main memory keeps track of what is currently being processed. It's volatile, meaning that turning
the power off erases all of the data.

Poof!!

For Main Memory, computers use RAM, or Random Access Memory. These memory chips are the
fastest, but most expensive, type of storage.

Auxiliary Storage = Secondary Storage

Auxiliary storage holds what is not currently being processed. This is the stuff that is "filed away",
but is ready to be pulled out when needed.

It is nonvolatile, meaning that turning the power off does not erase it.

Auxiliary Storage is used for:


 Input - data and programs
 Output - saving the results of processing

So, Auxiliary Storage is where you put last year's tax info, addresses for old customers, programs you
may or may not ever use, data you entered yesterday - everything that is not being used right now.

Magnetic Disks

Of the various types of Auxiliary Storage, the types used most often involve some type of magnetic
disk. These come in various sizes and materials, as we shall see. This method uses magnetism to
store the data on a magnetic surface.

Advantages: high storage capacity
reliable
gives direct access to data

A drive spins the disk very quickly underneath a read/write head, which does what its name says. It
reads data from a disk and writes data to a disk. (A name that actually makes sense!)
Types of Magnetic Disks

Hard Disks

These consist of 1 or more metal platters which are sealed inside a case.
The metal is one which is magnetic. The hard disk is usually installed
inside the computer's case, though there are removable and cartridge
types, also.

Technically the hard drive is what controls the motion of the hard disks
which contain the data. But most people use "hard disk" and "hard drive" interchangeably. They don't
make that mistake for floppy disks and floppy drives, described below. It is clearer with floppies that
the drive and the disk are separate things.

Diskette / Floppy Disk (nearly extinct!)

Sizes: 5¼" (really old stuff) and 3½" (pretty much gone now)

Both sizes are made of mylar with an oxide coating. The oxide provides the magnetic quality for the
disk. The "floppy" part is what is inside the diskette covers - a very floppy piece of plastic (i.e. the
mylar).

These disks have rapidly vanished. New computers often come without a floppy disk drive at all
unless you ask for one.


Other Removable Magnetic Media

Several other kinds of removable magnetic media are in use, such as the Zip
disk. All of these have a much higher capacity than floppy disks.

Each type of media requires its own drive. The drives and disks are much
more expensive than floppy drives and disks, but then, you are getting much
larger capacities.

There are other kinds of storage devices that are not magnetic, such as flash drives, or are not disks,
such as magnetic tape. These will be discussed later.

Disk Format

All magnetic disks are similarly formatted, or divided into areas, called

tracks

sectors

cylinders

The formatting process sets up a method of assigning addresses to the different areas. It also sets
up an area for keeping the list of addresses. Without formatting there would be no way to know what
data went with what. It would be like a library where the pages were not in books, but were scattered
around on the shelves and tables and floors. You'd have a hard time getting a book together. A
formatting method allows you to efficiently use the space while still being able to find things.

Tracks
A track is a circular ring on one side of the disk. Each track has a number.

Sectors

A disk sector is a wedge-shaped piece of the disk. Each sector is numbered.
On a 5¼" disk there are 40 tracks with 9 sectors each.
On a 3½" disk there are 80 tracks with 9 sectors each.

So a 3½" disk has twice as many named places on it as a 5¼" disk.

A track sector is the area of intersection of a track and a sector.

Clusters
A cluster is a set of track sectors, ranging from 2 to 32 or more, depending on the formatting scheme
in use.

The most common formatting scheme for PCs sets the number of track sectors in a cluster based on
the capacity of the disk. A 1.2 GB hard drive will have clusters twice as large as a 500 MB hard drive.

1 cluster is the minimum space used by any read or write. So there is often a lot of slack space,
unused space, in the cluster beyond the data stored there.

There are some newer schemes that reduce this problem, but it will never go away entirely.

The only way to reduce the amount of slack space is to reduce the size of a cluster by changing the
method of formatting. You could have more tracks on the disk, or else more sectors on a track, or you
could reduce the number of track sectors in a cluster.
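
Since any read or write uses whole clusters, you can work out the slack for a given file. Here is a
minimal sketch in Python (my own illustration, not part of this module) of that arithmetic:

    import math

    def disk_usage(file_size_bytes, cluster_size_bytes):
        """Return (clusters used, bytes allocated, slack bytes) for one file."""
        clusters = max(1, math.ceil(file_size_bytes / cluster_size_bytes))
        allocated = clusters * cluster_size_bytes
        slack = allocated - file_size_bytes
        return clusters, allocated, slack

    # Example: a 1,500-byte file on a disk formatted with 4 KB clusters
    print(disk_usage(1500, 4096))   # (1, 4096, 2596) - 2,596 bytes of slack

So a disk full of small files can waste a large fraction of its space as slack.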

Cylinders
A cylinder is a set of matched tracks.

On a double-sided floppy, a track from the top surface and the same # track from the bottom surface
of the disk make up a cylinder. The concept is not particularly useful for floppies.

On a hard disk, a cylinder is made of all the tracks of the same # from all the metal disks that make
up the "hard disk". If you put these all together on top of each other, you'd have something that looks
like a tin can with no top or bottom - a cylinder.

The computer keeps track of what it has put where on a disk by remembering the
addresses of all the sectors used, which would mean remembering some
combination of the cylinder, track, and sector. Thank goodness we don't have to
remember all these numbers!

Where the difference between addressing methods shows up is in the time it takes
for the read/write head to get into the right position. The cylinder method writes
data down the disks on the same cylinder. This works faster because each metal
platter has a read/write head for each side and they all move together. So for one
position of the read/write heads, the computer can put some data on all the
platters before having to move the heads to a new position.

What happens when a disk is formatted?


1. All data is erased. Don't forget this!!

2. Surfaces are checked for physical and magnetic defects.

3. A root directory is created to list where things are on the disk.

Disk Capacity


The capacity of a magnetic disk depends on several factors.

We always want the highest amount of data stored in the least possible space. (People are so
greedy this way!) So the capacities of storage media keep increasing while cost keeps decreasing.
It's a lovely situation for the user!

Capacity of a Disk depends on:

1. # of sides used: single-sided or double-sided

2. Recording density - how close together the bits can be on a track sector of the innermost track

3. # of tracks on the disk

Capacity of Disks

 5¼" floppy - 360 KB or 1.2 MB
 3½" floppy - 720 KB or 1.44 MB
 Hard disk - early ones were 20 MB; currently up to 3 TB (July 2010), where 1 TB = 1 terabyte =
1000 GB
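
Multiplying these factors together gives the capacity. The small Python sketch below (an illustration of
the arithmetic, assuming the standard 512 bytes per track sector) reproduces the floppy figures above:

    def disk_capacity_bytes(sides, tracks, sectors_per_track, bytes_per_sector=512):
        """Capacity = sides x tracks x sectors per track x bytes per sector."""
        return sides * tracks * sectors_per_track * bytes_per_sector

    # 5 1/4" double-sided floppy: 2 sides x 40 tracks x 9 sectors x 512 bytes
    print(disk_capacity_bytes(2, 40, 9) / 1024)   # 360.0 KB

    # 3 1/2" double-sided floppy: 2 sides x 80 tracks x 9 sectors x 512 bytes
    print(disk_capacity_bytes(2, 80, 9) / 1024)   # 720.0 KB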

The future???

Advances in technology for the read/write head and for the densities on the disks are bringing larger
and larger disk capacities for about the same price. In fact, you cannot find a small capacity drive to
buy, even if you wanted one! 2 TB drives are plentiful (July 2010) at the same price we used to pay
for 1 GB drives (under $200). It's enough to make you cry to think of what we paid over the years
and what we could get for those dollars today. Ah, well. That's the way the computer world works!
Accessing Data

The process of accessing data has 4 steps:

1. Seek
2. Rotate
3. Settle
4. Data transfer
1. Seek: move the head to the proper track. Measured as seek time (ms).

2. Rotate: rotate the disk under the head to the correct sector. Measured as rotational delay (ms).

3. Settle: the head lowers to the disk; wait for vibrations from moving to stop (the head actually
touches the disk only on floppies). Measured as settling time (ms).

4. Data transfer: copy data to main memory. Measured as data transfer rate (kbs).

where ms stands for millisecond = .001 second and kbs is kilobytes per second.

Total time to transfer a kilobyte:

 for floppies, 175 - 300 ms
 for hard drives, 15 - 80 ms
 for new hard drives, .0032 ms (300 MB per sec). This is seriously fast!! (Jun. 2009)
Clearly, getting data from a hard disk is immensely faster than from a floppy.
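
You can see why by adding the four steps together. The sketch below is my own illustration with
plausible made-up numbers, not measurements from this module:

    def access_time_ms(seek_ms, rotational_delay_ms, settle_ms,
                       transfer_rate_kb_per_s, kilobytes=1):
        """Total access time = seek + rotational delay + settle + transfer."""
        transfer_ms = kilobytes / transfer_rate_kb_per_s * 1000
        return seek_ms + rotational_delay_ms + settle_ms + transfer_ms

    # Floppy-era values: every step is slow
    print(access_time_ms(90, 100, 50, 30))       # about 273 ms per kilobyte

    # Hard-drive values: every step is much quicker
    print(access_time_ms(10, 4, 0.5, 20000))     # about 14.6 ms per kilobyte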

Caring for Disks

To keep your storage media happy and healthy you must observe certain precautions.

Each medium has its own particular weaknesses and hazards to avoid. Be careful or suffer the
consequences - lost data, which means, at best, lots of lost time and effort! This section is about floppy
disks and hard disks only. Other storage media are discussed later.
Care of Hard Disks

There are fewer precautions for hard disks since they are more protected by being sealed in air-tight
cases. But when damage does occur, it is a more serious matter. Larger amounts of data can be lost
and hard disks are much, much more expensive than floppy disks.

Hard disks can have problems from magnetic fields and heat like floppies do, but these are very rare.

Most problems occur when the read/write head damages the metal disk by hitting or even just
touching it. This is called a head crash.

When the computer is on, the hard disk is spinning extremely fast. Any contact
at all can cause pits or scratches. Every scratch or pit is lost data. Damage in
the root directory turns the whole hard disk into a lovely doorstop! It's
completely dead.

So the goal here is to keep that read/write head where it belongs, just barely above the hard
disk, but never, ever touching it.

Don't

 Turn the computer off and quickly back on before the spinning has stopped.
 Jar the computer while the disk is spinning.
 Drop it - ever.

Caring for Data

Besides protecting the physical medium you are using to store data, you must also consider what you
can do to safeguard the data itself. If the disk is kept from physical harm, but the data gets erased, you
still have a major problem.

So what can you do to safeguard the data on which you rely??

Write protect: This keeps your files from being overwritten with new ones.

Removable media including USB drives: Look for a tiny write-protect switch on the device.

Hard disks and devices without a switch:

 Make files Read-Only and/or Hidden to keep them from being overwritten. This is done by
changing the file attributes using whatever system you have for managing files.
 Assign a password to each file, which can be done with some programs and some USB drives.
 Encrypt the files. This will require special software and remembering the decryption key, as in
the sketch below.
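
As a concrete illustration of the encryption option, here is a minimal Python sketch using the third-
party cryptography package (pip install cryptography). The package choice is my assumption for
illustration; the module does not name any particular software:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the decryption key - losing it means losing the data!
    fernet = Fernet(key)

    # Encrypt some data (this could equally be the contents of a file).
    secret = b"exam results, week 12"
    token = fernet.encrypt(secret)

    # Only someone holding the key can turn the token back into the data.
    assert fernet.decrypt(token) == secret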

Backup: Make multiple copies of important data often. The more important the files are, the more
copies in more places you need.

Anti-malware: Use a set of programs that continuously look for an attack by a virus, trojan, or worm.


Computer viruses/trojans/worms are sneaky computer programs that can erase your data and even
your whole system. Many are merely annoying and are created as practical jokes. But there are a
number of very damaging malware programs out there, plus others that are out to steal your
passwords or use your computer to damage or annoy others.

Your computer gets one of these nasties by downloading an infected file from the internet (sometimes
without your knowledge!) or your office network, or by first using a removable disk in an infected
computer and then accessing a file on that removable disk with your own computer. This makes it
difficult to keep them from spreading.

Once you have disinfected your computer, it can get re-infected from a removable disk that was used
between the time you were infected with the malware and when you disinfected it. A number of nasty
viruses hide for quite a while before doing their nasty things. So you can infect a lot of your own
backups and other disks and spread the infection, all unknowingly, to others. So run an anti-malware
program that actively looks for infections all the time. Don't wait until you have symptoms. A lot of
damage can be done before you figure out that you have a problem.

Optical Disks

An entirely different method of recording data is used for optical disks. These
include the various kinds of CD and DVD discs.

You may guess from the word "optical" that it has to do with light. You'd be exactly right! Laser light,
in fact.

Optical disks come in several varieties which are made in somewhat different ways for different
purposes.
How optical disks are similar

 Formed of layers
 Data in a spiral groove, starting from the center of the disk
 Digital data (1's and 0's)
 1's and 0's are formed by how the disk absorbs or reflects light from a tiny laser.

The different types of optical disks use different materials and methods to absorb and reflect the
light.

How It Works (a simple version)


An optical disc is made mainly of polycarbonate (a plastic). The data is stored on a layer inside the
polycarbonate. A metal layer reflects the laser light back to a sensor.

To read the data on a disk, laser light shines through the polycarbonate and hits the data layer. How
the laser light is reflected or absorbed is read as a 1 or a 0 by the computer.

In a CD the data layer is near the top of the disc, the label side.

In a DVD the data layer is in the middle of the disc. A DVD can actually have data in two layers. It
can access the data from one side or from both sides. This is how a double-sided, double-layered
DVD can hold 4 times the data that a single-sided, single-layered DVD can.

Materials
The materials used for the data (recording) and metal (reflecting) layers are different for
different kinds of optical disks.

 CD-ROM / DVD-ROM - Read Only (audio/video, PC software and use). Data layer: molded.
Metal layer: aluminum (also silicon, silver, or gold in double-layered DVDs).

 CD-R / DVD-R, DVD+R - Recordable (once!). Data layer: organic dye. Metal layer: silver, gold,
or silver alloy.

 CD-RW / DVD-RW, DVD+RW, DVD+RAM - Rewritable (write, erase, write again). Data layer:
phase-changing metal alloy film. Metal layer: aluminum.

Read Only:
The most common type of optical disk is the CD-ROM, which stands for Compact Disc - Read Only
Memory. It looks just like an audio CD but the recording format is quite different. CD-ROM discs are
used for computer software.

DVD used to stand for Digital Video Disc or Digital Versatile Disc, but now it doesn't really stand for
anything at all! DVDs are used for recording movies and large amounts of data.

The CDs and DVDs that are commercially produced are of the Write Once Read Many (WORM) variety.
They can't be changed once they are created.

The data layer is physically molded into the polycarbonate. Pits (depressions) and lands (surfaces) form
the digital data. A metal coating (usually aluminum) reflects the laser light back to the sensor. Oxygen
can seep into the disk, especially in high temperatures and high humidity. This corrodes the aluminum,
making it too dull to reflect the laser correctly.

CD-ROM and DVD-ROM disks should be readable for many, many years (100? 200?), but only if you
treat them with respect.
Write Once:
The optical disks that you can record on your own computer are CD-R, DVD-R, and DVD+R discs,
called writable or recordable disks.

The metal and data layers are separate. The metal layer can be gold, silver, or a silver alloy.

Go for the Gold: Gold layers are best because gold does not corrode. Naturally, the best is more
expensive. Sulfur dioxide from the air can seep in and corrode silver over time.

The data layer is an organic dye that the writing laser changes. Once the laser modifies the dye, it
cannot be changed again. Write Once! Ultraviolet light and heat can degrade the organic dye.

Manufacturers say that these disks have a shelf-life of 5 - 10 years before they are used for recording.
There is no testing yet about how long the data will last after you record it. Humph!

96
ZAMCOL Secondary Diploma: COMPUTER STUDIES Module 1

A writable disk is useful as a backup medium when you need long-term storage of your data. It is less
efficient for data that changes often since you must make a new recording each time you save your
data. Pricing of the disks will be important to your decision to use writable disks.
Rewrite:
An option for backup storage of changing data is rewritable disks, CD-RW, DVD-RW, DVD+RW,
DVD+RAM.

The data layer for these disks uses a phase-changing metal alloy film. This film can be melted by the
laser's heat to level out the marks made by the laser and then lasered again to record new data.

In theory you can erase and write on these disks as many as 1000 times, for CD-RW, and even 100,000
times for the DVD-RW types.

Advantages of Optical Disks

1. Physical: An optical disk is much sturdier than tape or a floppy disk. It is physically
harder to break or melt or warp. It's somewhat harder to lose than a USB flash drive.
2. Delicacy: It is not sensitive to being touched, though it can get too dirty or scratched to
be read. It can be cleaned!

3. Magnetic: It is entirely unaffected by magnetic fields.

4. Capacity: Optical disks hold a lot of data, especially the double-sided DVDs.

Plus, the non-data side of the disk can have a pretty label!

For software providers, an optical disk is a great way to store the software and data that they want to
distribute or sell.

Disadvantages of Optical Disks

1. Cost: The main disadvantage has been cost.


The cost of a CD-RW drive has dropped drastically and quickly. In 1995 such a drive was
around $3000. In the summer of 1997 CD-RW drives were down to just under $1000. In March
2003 a CD-RW drive that could read at 40X speed, write on CD-R media at 40X speed, and write
on re-writable media at 12X could be bought for under $100 US. In July 2010 you could get an
internal DVD±RW drive for under $50!

The cost of disks can add up, too. Recordable disks (one time only) cost about $.30 US each
(March 2003). In July 2010 you can find these in packs of 100 for less than $.03 each. Re-
writable disks cost about $1 each (July 2010). You have to be careful about the capacity and
maximum recording speed. The boxes all look a lot alike!


So for commercial use, the read/write drives are quite cost effective. For personal use, they are
available and are cheap enough to use for data storage for everyone.

2. Duplication: It is not quite as easy or as fast to copy an optical disk as it is to copy files
to a USB flash drive. You need the software and hardware for writing disks! (This is an
advantage as far as commercial software providers are concerned!) But discs are easy to label
and to store.

Care of Optical Disks (CDs, DVDs)

Your CDs and DVDs are not going to last forever. They certainly store data longer than other storage
media! Mis-handling your optical disk can quickly make your data unreadable. Even fingerprints can do
damage over time.

Data loss comes from:

 Physical damage - breaking, melting, scratching...


 Blocking of laser light by dirt, paint, ink, glue...

 Corrosion of the reflecting layer

Here are some do's and don'ts for keeping your CDs and DVDs healthy.

 Cleaning:
o Keep it clean!

o Handle by the edges or center hole.

o Put it back in its case as soon as you are finished with it. No laying around on
the desktop!!

o Remove dirt and smudges with a clean cotton cloth by wiping from the center to
the outer edge, NOT by wiping around the disk. Wiping in a circle can create a curved
scratch, which can confuse the laser.

o For stubborn dirt, use isopropyl alcohol or methanol or CD/DVD cleaning detergent.

 Labeling:

o Don't use an adhesive label. The adhesive can corrupt your data in just a few
months!

o Don't write on or scratch the data side of the disk - ever!


o Don't scratch the label side.

o Don't write on the label side with a pencil or pen (scratches!)

o Don't write on the label side with a fine-point marker or with any solvent-based
marker. Use markers designed for CDs. (Solvent may dissolve the protective layer.)

 Storage:

o Store optical disks upright on edge, like a book, in a plastic case designed
specifically for them. Not flat for long periods!

o Store in a cool, dark environment where the air is clean and dry. NO SMOKE!
Low humidity.

 How you treat it:

o Keep away from high heat and high humidity which accelerate corrosion.

o Keep out of sunlight or other sources of ultraviolet light.

o Keep away from smoke or other air pollution.

o Don't bend it!

o Don't use a disk as a coaster or a frisbee or a bookmarker!

Recording

 Check disk for flaws and dirt BEFORE recording on it.
 Only open a recordable disk just before you plan to record on it.
 After recording, make sure the disk works as you expect: read data; run programs.
Other Devices

Invention springs eternal in the computer industry. So more and different devices are brought out all the
time, especially for special uses.

The history of computing suggests that some new technology will take over the market in the near
future. Guessing which one will win the race is what makes fortunes in the stock market!

Flash Memory: Several different brands of removable storage cards, also called memory
cards, are now available. These are solid-state devices (no moving
parts) that read and write data electrically, instead of magnetically.

Devices like digital cameras, digital camcorders, and cell phones may use
CompactFlash, SmartMedia, Memory Stick, or another flash memory card.

Laptop computers use PCMCIA cards, another type of flash memory, as solid-state
hard disks.

USB drive: This new type of flash memory storage device does not yet have a generally
accepted name. Each company calls it something different, including flash drive,
jump drive, flash pen, thumb drive, key drive, and mini-USB drive.

All are small, about the size of your thumb or a large car key, and plug into a USB port
on the computer. No drivers are needed for recent versions of Windows. Plug it in and
the computer reports a new drive!

Such small flash drives can have storage capacities from 8 MB to 128 GB or more!

Some flash drives include password protection and the ability to run software right off
the USB drive. So cool!

Removable hard drives: Several types of special drives that compress data are available, such as
the external Zip drive. An external hard drive can be used for backup, too.

Mass storage: Businesses with very large sets of data that need easy access use sets of cartridges
with robot arms to pull out the right one on command.

Smart cards: A chip on the card itself tracks changes, like deducting purchases from the amount
entered originally on the card. Smart cards are already used in Europe and at colleges
instead of using a handful of coins at vending machines and at laundromats.

Another use involves a new sensor technology which lets a smart card read your
fingerprint right on the card. The digital image of the fingerprint is then transmitted to a
database to compare it with the one on file for that card. You can prove you are really
you!!

Optical cards: A chip on the card holds information like health records and auto repair records. They
can hold more data than the smart cards since they don't need to do any processing.


UNIT 3: COMPUTER NETWORKS AND THE INTERNET

Lesson 1. Computer Networks

INTRODUCTION

In this lesson we will talk about computer networks, their uses and different types of networks.

OBJECTIVES

At the end of this lesson you should be able to:

 Become acquainted with the evolution of data communications systems, from centralized
processing to teleprocessing to distributed data processing to local area networks.
 Know the basic components of a data communications system.
 Know data transmission methods, including types of signals, modulation, and choices among
transmission modes.
 Differentiate the various kinds of communication links, and appreciate the need for protocols.
 Understand network configurations.
 Know local area network components, types and protocols.
 Become acquainted with examples of networking.

Computer networking

Computer networking or Data communications (Datacom) is the engineering discipline concerned with
the communication between computer systems or devices. A computer network is any set of computers
or devices connected to each other with the ability to exchange data. [1] Computer networking is
sometimes considered a sub-discipline of telecommunications, computer science, information
technology and/or computer engineering since it relies heavily upon the theoretical and practical
application of these scientific and engineering disciplines. The three types of networks are: the Internet,
the intranet, and the extranet. Examples of different network methods are:

 Local area network (LAN), which is usually a small network constrained to a small geographic
area. An example of a LAN would be a computer network within a building.
 Metropolitan area network (MAN), which is used for a medium-sized area, for example a city or
a state.

 Wide area network (WAN) that is usually a larger network that covers a large geographic area.

 Wireless LANs and WANs (WLAN & WWAN) are the wireless equivalent of the LAN and WAN.

All networks are interconnected to allow communication with a variety of different kinds of media,
including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless
technologies.[2] The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited
distances (e.g. via the interconnections of the Internet[3]). Networking, routers, routing protocols, and
networking over the public Internet have their specifications defined in documents called RFCs.[4]

Views of networks

Users and network administrators typically have different views of their networks. Users can share
printers and some servers from a workgroup, which usually means they are in the same geographic
location and are on the same LAN, whereas a network administrator is responsible for keeping that
network up and running. A community of interest has less of a connection to a single local area, and
should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also
communicate via peer-to-peer technologies.

Network administrators can see networks from both physical and logical perspectives. The physical
perspective involves geographic locations, physical cabling, and the network elements (e.g., routers,
bridges and application layer gateways) that interconnect the physical media. Logical networks, called
subnets in the TCP/IP architecture, map onto one or more physical media. For example, a common
practice in a campus of buildings is to make a set of LAN cables in each building appear to be a
common subnet, using virtual LAN (VLAN) technology.

Both users and administrators will be aware, to varying extents, of the trust and scope characteristics of
a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under
private administration usually by an enterprise, and is only accessible by authorized users (e.g.
employees).[5] Intranets do not have to be connected to the Internet, but generally have a limited
connection. An extranet is an extension of an intranet that allows secure communications to users
outside of the intranet (e.g. business partners, customers). [5]

Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected
by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets,
and aggregates of subnets, which share the registered IP address space and exchange information
about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-
readable names of servers are translated to IP addresses, transparently to users, via the directory
function of the Domain Name System (DNS).
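
As an illustration of that directory function, Python's standard library can perform the same name-to-
address lookup; the host name below is just a placeholder:

    import socket

    # Ask DNS (via the operating system) for the IP address of a host name.
    ip_address = socket.gethostbyname("www.example.com")
    print(ip_address)   # e.g. 93.184.216.34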

Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and
consumer-to-consumer (C2C) communications. Especially when money or sensitive information is
exchanged, the communications are apt to be secured by some form of communications security
mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any
access by general Internet users, using secure Virtual Private Network (VPN) technology.

When used for gaming, one computer will need to be the server while the others play through it.

History of computer networks

Before the advent of computer networks that were based upon some type of telecommunications
system, communication between calculation machines and early computers was performed by human
users by carrying instructions between them. Many of the social behaviors seen in today's Internet were
demonstrably present in the nineteenth century and arguably in even earlier networks using visual
signals.

In September 1940 George Stibitz used a teletype machine at Dartmouth College in New Hampshire
to send instructions for a problem set to his Complex Number Calculator in New York and received
results back by the same means. Linking output systems like teletypes to computers was an interest
at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and
developed a working group he called the "Intergalactic Network", a precursor to the ARPANET.


In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users
of large computer systems. The same year, at MIT, a research group supported by General Electric and
Bell Labs used a DEC computer to route and manage telephone connections.

Throughout the 1960s Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized
and developed network systems which used datagrams or packets that could be used in a network
between computer systems.

In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN).

The first widely used PSTN switch that used true computer control was introduced by Western Electric
in 1965.

In 1969 the University of California at Los Angeles, the Stanford Research Institute (SRI), the
University of California at Santa Barbara, and the University of Utah were connected as the beginning
of the ARPANET network using 50 kbit/s circuits. Commercial services using X.25 were deployed in
1972, and later used as an underlying infrastructure for expanding TCP/IP networks.

Computer networks, and the technologies needed to connect and communicate through and between
them, continue to drive computer hardware, software, and peripherals industries. This expansion is
mirrored by growth in the numbers and types of users of networks from the researcher to the home
user.

Today, computer networks are the core of modern communication. All modern aspects of the Public
Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the
Internet Protocol, although not necessarily the public Internet. The scope of communication has
increased significantly in the past decade, and this boom in communications would not have been
possible without the progressively advancing computer network.

Networking methods

One way to categorize computer networks is by their geographic scope, although many real-world
networks interconnect Local Area Networks (LAN) via Wide Area Networks (WAN) and wireless wide
area networks (WWAN). These three (broad) types are:

Local area network (LAN)

A local area network is a network that spans a relatively small space and provides services to a small
number of people.

A peer-to-peer or client-server method of networking may be used. A peer-to-peer network is where
each client shares their resources with other workstations in the network. Examples of peer-to-peer
networks are small office networks, where resource use is minimal, and home networks. A client-server
network is where every client is connected to the server and each other. Client-server networks use
servers in different capacities. These can be classified into two types:

1. Single-service servers, where the server performs one task, such as acting as a file server.
2. Multi-purpose servers, which can not only perform in the capacity of file servers and print
servers, but can also conduct calculations and use them to provide information to clients
(Web/Intranet servers).

Computers may be connected in many different ways, including Ethernet cables, wireless networks, or
other types of wires such as power lines or phone lines.

The ITU-T G.hn standard is an example of a technology that provides high-speed (up to 1 Gbit/s) local
area networking over existing home wiring (power lines, phone lines and coaxial cables).

Wide area network (WAN)

A wide area network is a network where a wide variety of resources are deployed across a large
domestic area or internationally. An example of this is a multinational business that uses a WAN to
interconnect their offices in different countries. The largest and best example of a WAN is the Internet,
which is a network composed of many smaller networks. The Internet is considered the largest network
in the world.[6] The PSTN (Public Switched Telephone Network) also is an extremely large network that
is converging to use Internet technologies, although not necessarily through the public Internet.

A Wide Area Network involves communication through the use of a wide range of different technologies.
These technologies include Point-to-Point WANs such as Point-to-Point Protocol (PPP) and High-Level
Data Link Control (HDLC), Frame Relay, ATM (Asynchronous Transfer Mode) and SONET (Synchronous
Optical Network). The difference between the WAN technologies is based on the switching capabilities
they perform and the speed at which they send and receive bits of information (data).

Wireless networks (WLAN, WWAN)

A wireless network is basically the same as a LAN or a WAN but there are no wires between hosts and
servers. The data is transferred over sets of radio transceivers. These types of networks are beneficial
when it is too costly or inconvenient to run the necessary cables. For more information, see Wireless
LAN and Wireless wide area network. The media access protocols for LANs come from the IEEE.

The most common IEEE 802.11 WLANs cover, depending on antennas, ranges from hundreds of
meters to a few kilometers. For larger areas, communications satellites of various types, cellular radio,
and wireless local loop (IEEE 802.16) all have advantages and disadvantages. Depending on the type
of mobility needed, the relevant standards may come from the IETF or the ITU.

Network topology

The network topology defines the way in which computers, printers, and other devices are connected,
physically and logically. A network topology describes the layout of the wire and devices as well as the
paths used by data transmissions.

Network topology has two types:

 Physical
 Logical

Commonly used topologies include:

 Bus

 Star

 Tree (hierarchical)

 Linear

 Ring

 Mesh

o partially connected

o fully connected (sometimes known as fully redundant)

The network topologies mentioned above are only a general representation of the kinds of topologies
used in computer networks and are considered basic topologies.

Network interface controller

Network Interface Card (NIC)

A 1990s Ethernet network interface controller card which connects to the motherboard via the now-
obsolete ISA bus. This combination card features both a (now obsolete) bayonet cap BNC connector
(left) for use in coaxial-based 10base2 networks and an RJ-45 connector (right) for use in twisted
pair-based 10baseT networks. (The ports could not be used simultaneously.)

Connects to the motherboard via one of:

 Integrated
 PCI Connector
 ISA Connector
 PCI-E
 FireWire
 USB

Connects to the network via one of:

 Fast Ethernet
 Gigabit Ethernet
 Optical fiber
 Token ring

Speeds: 10 Mbit/s, 100 Mbit/s, 1000 Mbit/s, up to 160 Gbit/s

Common manufacturers: Novell, Intel, Realtek, and others

A network interface card, network adapter, network interface controller (NIC), or LAN adapter is a
computer hardware component that interfaces to a computer network.


Purpose

The NIC allows computers to communicate over a computer network. It is both an OSI layer 1 (physical
layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and
provides a low-level addressing system through the use of MAC addresses. It allows users to connect
to each other either by using cables or wirelessly.

Although other network technologies exist (e.g. token ring), Ethernet has achieved near-ubiquity since
the mid-1990s.

Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored
in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique
MAC address. Normally it is safe to assume that no two network cards will share the same address,
because card vendors purchase blocks of addresses from the Institute of Electrical and Electronics
Engineers (IEEE) and assign a unique address to each card at the time of manufacture.
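
As a small illustration, Python's standard library can report the 48-bit hardware address of the
machine it runs on. This sketch simply formats that number in the usual colon-separated hex notation
(note that uuid.getnode() falls back to a random number if no hardware address can be read):

    import uuid

    mac = uuid.getnode()   # the 48-bit MAC address as one integer

    # Split the integer into six bytes, highest first, and show them in hex.
    mac_hex = ':'.join(f'{(mac >> shift) & 0xff:02x}'
                       for shift in range(40, -8, -8))
    print(mac_hex)   # e.g. 00:1a:2b:3c:4d:5e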



Whereas network cards used to be expansion cards that plug into a computer bus, the low cost and
ubiquity of the Ethernet standard means that most newer computers have a network interface built into
the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset or
implemented via a low cost dedicated Ethernet chip, connected through the PCI (or the newer PCI
express) bus. A separate network card is not required unless multiple interfaces are needed or some
other type of network is used. Newer motherboards may even have dual network interfaces built-in.

Implementation

The card implements the electronic circuitry required to communicate using a specific physical layer
and data link layer standard such as Ethernet or token ring. This provides a base for a full network
protocol stack, allowing communication among small groups of computers on the same LAN and large-
scale network communications through routable protocols, such as IP.

There are four techniques used to transfer data; the NIC may use one or more of these techniques.

 Polling is where the microprocessor examines the status of the peripheral under program
control.
 Programmed I/O is where the microprocessor alerts the designated peripheral by applying its
address to the system's address bus.

 Interrupt-driven I/O is where the peripheral alerts the microprocessor that it's ready to transfer
data.

 DMA is where an intelligent peripheral assumes control of the system bus to access memory
directly. This removes load from the CPU but requires a separate processor on the card.


A network card typically has an RJ45, BNC, or AUI socket where the network cable is connected, and
a few LEDs to inform the user of whether the network is active and whether or not data is being
transmitted on it. Network cards are typically available in 10/100/1000 Mbit/s varieties. This means
they can support a notional maximum transfer rate of 10, 100 or 1000 Megabits per second.
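
As a rough illustration of what those ratings mean, this sketch (mine, ignoring all protocol overhead)
computes the notional time to move a 100 MB file at each speed:

    # Notional transfer times for a 100 MB file at common NIC speeds.
    for mbit_per_s in (10, 100, 1000):
        bytes_per_s = mbit_per_s * 1_000_000 / 8    # 8 bits per byte
        seconds = 100 * 1_000_000 / bytes_per_s     # 100 MB file
        print(f"{mbit_per_s:>4} Mbit/s -> {seconds:.1f} s")
    # 10 Mbit/s -> 80.0 s, 100 Mbit/s -> 8.0 s, 1000 Mbit/s -> 0.8 s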

Sometimes the words 'controller' and 'card' are used interchangeably when talking about networking
because the most common NIC is the network interface card. Although 'card' is more commonly used, it
is less encompassing. The 'controller' may take the form of a network card that is installed inside a
computer, or it may refer to an embedded component as part of a computer motherboard, a router,
expansion card, printer interface or a USB device.

Introduction

A computer network allows sharing of resources and information among interconnected devices. In the
1960s, the Advanced Research Projects Agency (ARPA) started funding the design of the Advanced
Research Projects Agency Network (ARPANET) for the United States Department of Defense. It was
the first computer network in the world.[1] Development of the network began in 1969, based on designs
developed during the 1960s.
Purpose
Computer networks can be used for a variety of purposes:
 Facilitating communications. Using a network, people can communicate efficiently and easily
via email, instant messaging, chat rooms, telephone, video telephone calls, and video
conferencing.
 Sharing hardware. In a networked environment, each computer on a network may access and
use hardware resources on the network, such as printing a document on a shared network
printer.

 Sharing files, data, and information. In a network environment, authorized users may access
data and information stored on other computers on the network. The capability of providing
access to data and information on shared storage devices is an important feature of many
networks.

 Sharing software. Users connected to a network may run application programs on remote
computers.

 Information preservation.
 Security.
 Speed.

Network classification

The following list presents categories used for classifying networks.


Connection method

Computer networks can be classified according to the hardware and software technology that is used to
interconnect the individual devices in the network, such as optical fiber, Ethernet, wireless LAN,
HomePNA, power line communication or G.hn.

Ethernet as it is defined by IEEE 802 utilizes various standards and mediums that enable
communication between devices. Frequently deployed devices include hubs, switches, bridges, or
routers. Wireless LAN technology is designed to connect devices without wiring. These devices use
radio waves or infrared signals as a transmission medium. ITU-T G.hn technology uses existing home
wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area
network.
Wired technologies
 Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair cabling
consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two
insulated copper wires twisted into pairs. Computer networking cabling consists of 4 pairs of
copper cabling that can be utilized for both voice and data transmission. The use of two wires
twisted together helps to reduce crosstalk and electromagnetic induction. The transmission
speed ranges from 2 million bits per second to 100 million bits per second. Twisted pair cabling
comes in two forms, Unshielded Twisted Pair (UTP) and Shielded Twisted Pair (STP), which
are rated in categories and manufactured in different increments for various scenarios.

 Coaxial cable is widely used for cable television systems, office buildings, and other worksites
for local area networks. The cables consist of copper or aluminum wire wrapped with insulating
layer typically of a flexible material with a high dielectric constant, all of which are surrounded
by a conductive layer. The layers of insulation help minimize interference and distortion.
Transmission speed range from 200 million to more than 500 million bits per second.

 Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective
layers. It transmits light which can travel over extended distances. Fiber-optic cables are not
affected by electromagnetic radiation. Transmission speed may reach trillions of bits per
second. The transmission speed of fiber optics is hundreds of times faster than for coaxial
cables and thousands of times faster than for twisted-pair wire.

Wireless technologies
 Terrestrial microwave – Terrestrial microwaves use Earth-based transmitters and receivers.
The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz
range, which limits all communications to line-of-sight. Relay stations are spaced approximately
30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and
mountain peaks.

 Communications satellites – The satellites use microwave radio as their telecommunications
medium, which is not deflected by the Earth's atmosphere. The satellites are stationed in
space, typically 22,000 miles above the equator (for geosynchronous satellites). These Earth-
orbiting systems are capable of receiving and relaying voice, data, and TV signals.


 Cellular and PCS systems – Use several radio communications technologies. The systems are
divided into different geographic areas. Each area has a low-power transmitter or radio relay
antenna device to relay calls from one area to the next area.

 Wireless LANs – Wireless local area networks use a high-frequency radio technology similar to
digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum
technology to enable communication between multiple devices in a limited area. An example of
open-standards wireless radio-wave technology is IEEE 802.11.

 Infrared communication – can transmit signals between devices within small distances, not
more than 10 meters, peer to peer (face to face), without any obstacle in the line of
transmission.

Scale
Networks are often classified as local area network (LAN), wide area network (WAN), metropolitan area
network (MAN), personal area network (PAN), virtual private network (VPN), campus area network
(CAN), storage area network (SAN), and others, depending on their scale, scope and purpose. Usage,
trust level, and access rights often differ between these types of networks. LANs tend to be designed
for internal use by an organization's internal systems and employees in individual physical locations,
such as a building, while WANs may connect physically separate parts of an organization and may
include connections to third parties.
Functional relationship (network architecture)
Computer networks may be classified according to the functional relationships which exist among the
elements of the network, e.g., active networking, client–server and peer-to-peer (workgroup)
architecture.
Network topology

Computer networks may be classified according to the network topology upon which the network is
based, such as bus network, star network, ring network, or mesh network. Network topology is the
coordination by which devices in the network are arranged in their logical relations to one another,
independent of physical arrangement. Even if networked computers are physically placed in a linear
arrangement and are connected to a hub, the network has a star topology, rather than a bus topology.
In this regard the visual and operational characteristics of a network are distinct. Networks may also be
classified based on the method used to convey the data; these include digital and analog networks.
Types of networks based on physical scope

Common types of computer networks may be identified by their scale.


Local area network


A local area network (LAN) is a network that connects computers and devices in a limited geographical
area such as home, school, computer laboratory, office building, or closely positioned group of
buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be
based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a
wired LAN using existing home wires (coaxial cables, phone lines and power lines). [2]
(Figure: a typical library network, in a branching tree topology with controlled access to resources.)

All interconnected devices must understand the network layer (layer 3), because they are handling
multiple subnets. The switches inside the library, which have only 10/100 Mbit/s Ethernet connections
to the user device and a Gigabit Ethernet connection to the central router, could be called "layer 3
switches" because they only have Ethernet interfaces and must understand IP. It would be more
correct to call them access routers, where the router at the top is a distribution router that connects to
the Internet and the academic networks' customer access routers.

The defining characteristics of LANs, in contrast to WANs (Wide Area Networks), include their higher
data transfer rates, smaller geographic range, and no need for leased telecommunication lines. Current
Ethernet or other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s. This is the data
transfer rate. IEEE has projects investigating the standardization of 40 and 100 Gbit/s. [3]
Personal area network
A personal area network (PAN) is a computer network used for communication among computer and
different information technological devices close to one person. Some examples of devices that are
used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even
video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically
extends to 10 meters.[4] A wired PAN is usually constructed with USB and Firewire connections while
technologies such as Bluetooth and infrared communication typically form a wireless PAN.
Home area network
A home area network (HAN) is a residential LAN which is used for communication between digital
devices typically deployed in the home, usually a small number of personal computers and accessories,
such as printers and mobile computing devices. An important function is the sharing of Internet access,
often a broadband service through a CATV or Digital Subscriber Line (DSL) provider. It can also be
referred to as an office area network (OAN).
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area such as a city,
country, or spans even intercontinental distances, using a communications channel that combines
many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission
facilities provided by common carriers, such as telephone companies. WAN technologies generally
function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and
the network layer.

Campus network
A campus network is a computer network made up of an interconnection of local area networks (LANs)
within a limited geographical area. The networking equipment (switches, routers) and transmission
media (optical fiber, copper plant, Cat5 cabling etc.) are almost entirely owned by the campus tenant/
owner (an enterprise, university, government etc.).

In the case of a university campus network, the network is likely to link a variety of campus buildings,
including academic departments, the university library and student residence halls.
Metropolitan area network
A Metropolitan area network is a large computer network that usually spans a city or a large campus.

Enterprise private network


An enterprise private network is a network built by an enterprise to interconnect various company sites,
e.g., production sites, head offices, remote offices, shops, in order to share computer resources.
Virtual private network
A virtual private network (VPN) is a computer network in which some of the links between nodes are
carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by
physical wires. The data link layer protocols of the virtual network are said to be tunneled through the
larger network when this is the case. One common application is secure communications through the
public Internet, but a VPN need not have explicit security features, such as authentication or content
encryption. VPNs, for example, can be used to separate the traffic of different user communities over
an underlying network with strong security features.
A VPN may have best-effort performance, or may have a defined service level agreement (SLA)
between the VPN customer and the VPN service provider. Generally, a VPN has a topology more
complex than point-to-point.
Internetwork
An internetwork is the connection of two or more private computer networks via a common routing
technology (OSI Layer 3) using routers. The Internet is an aggregation of many internetworks, hence its
name was shortened to Internet.
Backbone network
A backbone network (BBN), or network backbone, is part of a computer network infrastructure that
interconnects various pieces of a network, providing a path for the exchange of information between
different LANs or subnetworks.[1][2] A backbone can tie together diverse networks in the same
building, in different buildings in a campus environment, or over wide areas. Normally, the backbone's
capacity is greater than that of the networks connected to it.

A large corporation that has many locations may have a backbone network that ties all of the locations
together, for example, if a server cluster needs to be accessed by different departments of a company
that are located at different geographical locations. The pieces of the network connections (for example:
ethernet, wireless) that bring these departments together are often referred to as the network backbone.
Network congestion is often taken into consideration while designing backbones.

Backbone networks should not be confused with the Internet backbone.


Global Area Network
A Global Area Network (GAN) is a network used for supporting mobile communications across an
arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile
communications is handing off the user communications from one local coverage area to the next. In
IEEE Project 802, this involves a succession of terrestrial wireless LANs.[5]
Internet
The Internet is a global system of interconnected governmental, academic, corporate, public, and
private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It
is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by
DARPA of the United States Department of Defense. The Internet is also the communications
backbone underlying the World Wide Web (WWW).

Participants in the Internet use a diverse array of methods of several hundred documented, and often
standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP
addresses) administered by the Internet Assigned Numbers Authority and address registries. Service
providers and large enterprises exchange information about the reachability of their address spaces
through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission
paths.
Intranets and extranets
Intranets and extranets are parts or extensions of a computer network, usually a local area network.

An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers
and file transfer applications, that is under the control of a single administrative entity. That
administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet
is the internal network of an organization. A large intranet will typically have at least one web server to
provide users with organizational information.

An extranet is a network that is limited in scope to a single organization or entity and also has limited
connections to the networks of one or more other usually, but not necessarily, trusted organizations or
entities—a company's customers may be given access to some part of its intranet—while at the same
time the customers may not be considered trusted from a security standpoint. Technically, an extranet
may also be categorized as a CAN, MAN, WAN, or other type of network, although an extranet cannot
consist of a single LAN; it must have at least one connection with an external network.

Overlay network
An overlay network is a virtual computer network that is built on top of another network. Nodes in the
overlay are connected by virtual or logical links, each of which corresponds to a path, perhaps through
many physical links, in the underlying network.
(Figure: a sample overlay network - IP over SONET over optical.)

For example, many peer-to-peer networks are overlay networks because they are organized as nodes
of a virtual system of links run on top of the Internet. The Internet was initially built as an overlay on the
telephone network.[6]

Overlay networks have been around since the invention of networking when computer systems were
connected over telephone lines using modem, before any data network existed.

Nowadays the Internet is the basis for many overlaid networks that can be constructed to permit routing
of messages to destinations specified by an IP address. For example, distributed hash tables can be
used to route messages to a node having a specific logical address, whose IP address is known in
advance.

Overlay networks have also been proposed as a way to improve Internet routing, such as through
quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as
IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require
modification of all routers in the network. On the other hand, an overlay network can be
incrementally deployed on end-hosts running the overlay protocol software, without cooperation from
Internet service providers. The overlay has no control over how packets are routed in the underlying
network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a
message traverses before reaching its destination.

For example, Akamai Technologies manages an overlay network that provides reliable, efficient content
delivery (a kind of multicast). Academic research includes End System Multicast and Overcast for
multicast; RON (Resilient Overlay Network) for resilient routing; and OverQoS for quality of service
guarantees, among others.
Basic hardware components

All networks are made up of basic hardware building blocks to interconnect network nodes, such as
Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. In addition, some method of
connecting these building blocks is required, usually in the form of galvanic cable (most commonly

Category 5 cable). Less common are microwave links (as in IEEE 802.11) or optical cable ("optical
fiber").
Network interface cards
A network card, network adapter, or NIC (network interface card) is a piece of computer hardware
designed to allow computers to communicate over a computer network. It provides physical access to a
networking medium and often provides a low-level addressing system through the use of MAC
addresses.

Each network interface card has a unique ID. This is written on a chip which is mounted on the card.
Repeaters
A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it,
and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can
cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are
required for cable that runs longer than 100 meters. Repeaters work on the Physical Layer of the OSI
model.
Hubs
A network hub contains multiple ports. When a packet arrives at one port, it is copied unmodified to all
ports of the hub for transmission. The destination address in the frame is not changed to a broadcast
address.[7] It works on the Physical Layer of the OSI model.
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model.
Bridges broadcast to all ports except the port on which the broadcast was received. However, bridges
do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are
reachable through specific ports. Once the bridge associates a port and an address, it will send traffic
for that address to that port only.

Bridges learn the association of ports and addresses by examining the source addresses of the frames
seen on various ports. Once a frame arrives through a port, its source address is stored and the bridge
assumes that MAC address is associated with that port. The first time a previously unknown
destination address is seen, the bridge will forward the frame to all ports other than the one on which
the frame arrived.
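
The learning behaviour just described can be sketched in a few lines of code. The following is a
minimal illustration in Python, not a real bridge: frames are represented as simple dictionaries, and
the MAC-address table is an ordinary dictionary mapping addresses to ports.

class LearningBridge:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}          # learned mapping: MAC address -> port

    def receive(self, frame, in_port):
        # Learn: the source address must be reachable via the arrival port.
        self.mac_table[frame["src"]] = in_port

        dst = frame["dst"]
        if dst in self.mac_table:
            # Known destination: forward out of that single port only.
            print(f"Forwarding to port {self.mac_table[dst]}")
        else:
            # Unknown destination: flood to all ports except the arrival port.
            for port in self.ports:
                if port != in_port:
                    print(f"Flooding to port {port}")

bridge = LearningBridge(num_ports=4)
bridge.receive({"src": "AA:AA:AA:AA:AA:AA", "dst": "BB:BB:BB:BB:BB:BB"}, in_port=0)
bridge.receive({"src": "BB:BB:BB:BB:BB:BB", "dst": "AA:AA:AA:AA:AA:AA"}, in_port=2)

The first frame is flooded because the destination is unknown; the reply is forwarded to port 0 only,
because the bridge has already learned where that address lives.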

Bridges come in three basic types:


 Local bridges: Directly connect local area networks (LANs)
 Remote bridges: Can be used to create a wide area network (WAN) link between LANs.
Remote bridges, where the connecting link is slower than the end networks, largely have been
replaced with routers.

 Wireless bridges: Can be used to join LANs or connect remote stations to LANs.


Switches
A network switch is a device that forwards and filters OSI layer 2 frames (units of data
communication) between ports (connected cables) based on the MAC addresses in the frames.[8] A
switch is distinct from a hub in that it only forwards the frames to the ports involved in the
communication rather than to all ports connected. A switch breaks the collision domain but represents
itself as a broadcast domain. Switches make forwarding decisions of frames on the basis of MAC
addresses. A switch normally has numerous ports, facilitating a star topology for devices, and
cascading additional switches.[9] Some switches are capable of routing based on Layer 3 addressing or
additional logical levels; these are called multi-layer switches. The term switch is used loosely in
marketing to encompass devices including routers and bridges, as well as devices that may distribute
traffic on load or by application content (e.g., a Web URL identifier).
Routers
A router is an internetworking device that forwards packets between networks by processing the
information found in the datagram or packet (Internet Protocol information from Layer 3 of the OSI
Model). In many situations, this information is processed in conjunction with the routing table (also
known as the forwarding table). Routers use routing tables to determine the interface on which to
forward packets (this can include the "null", also known as the "black hole", interface, because data
can go into it but no further processing is done on it).
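
To make the idea of a routing table concrete, here is a minimal sketch in Python using the standard
ipaddress module. The table below is hypothetical; real routers hold many more routes and select the
matching entry with the longest prefix (the most specific route), as the lookup function does here.

import ipaddress

routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "eth0",
    ipaddress.ip_network("192.168.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "ppp0",   # default route
}

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    # Collect every network that contains the destination address...
    matches = [net for net in routing_table if dest in net]
    # ...and choose the one with the longest prefix (most specific match).
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("192.168.1.42"))   # eth0 (most specific match)
print(lookup("8.8.8.8"))        # ppp0 (falls through to the default route)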
Firewalls
Firewalls are the most important aspect of a network with respect to security. A firewalled system does
not need every interaction or data transfer monitored by a human, as automated processes can be set
up to assist in rejecting access requests from unsafe sources, and allowing actions from recognized
ones. The vital role firewalls play in network security grows in parallel with the constant increase in
'cyber' attacks for the purpose of stealing/corrupting data, planting viruses, etc.

Network topology

Network topology is the layout pattern of interconnections of the various elements (links, nodes, etc.) of
a computer network.[1][2] Network topologies may be physical or logical. Physical topology means the
physical design of a network including the devices, location and cable installation. Logical topology
refers to how data is actually transferred in a network as opposed to its physical design.

Topology can be considered as a virtual shape or structure of a network. This shape does not
necessarily correspond to the actual physical layout of the devices on the computer network. The
computers on a home network may be arranged in a circle, but that does not necessarily mean the
network has a ring topology.

Any particular network topology is determined only by the graphical mapping of the configuration of
physical and/or logical connections between nodes. The study of network topology uses graph theory.

Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ
in two networks and yet their topologies may be identical.

A local area network (LAN) is one example of a network that exhibits both a physical topology and a
logical topology. Any given node in the LAN has one or more links to one or more nodes in the network
and the mapping of these links and nodes in a graph results in a geometric shape that may be used to
describe the physical topology of the network. Likewise, the mapping of the data flow between the
nodes in the network determines the logical topology of the network. The physical and logical
topologies may or may not be identical in any particular network.
Basic topology types

The study of network topology recognizes seven basic topologies: [3]


 Point-to-point topology
 Bus (point-to-multipoint) topology

 Star topology

 Ring topology

 Tree topology

 Mesh topology

 Hybrid topology

This classification is based on the interconnection between computers — be it physical or logical.

The physical topology of a network is determined by the capabilities of the network access devices and
media, the level of control or fault tolerance desired, and the cost associated with cabling or
telecommunications circuits.

Networks can be classified according to their physical span as follows:


 LANs (Local Area Networks)
 Building or campus internetworks

 Wide area internetworks

Classification of network topologies

There are also two basic categories of network topologies: [4]


 Physical topologies
 Logical topologies

The shape of the cabling layout used to link devices is called the physical topology of the network. This
refers to how the cables are laid out to connect many computers to one network. The physical topology
you choose for your network influences and is influenced by several factors:


 Office Layout
 Troubleshooting Techniques

 Cost of Installation

 Type of cable used

Logical topology describes the way in which a network transmits information from one computer to
another, rather than the way the network looks or how it is laid out. The logical layout also describes
the different speeds of the cables being used from one part of the network to another.
Physical topologies
The mapping of the nodes of a network and the physical connections between them – the layout of
wiring, cables, the locations of nodes, and the interconnections between the nodes and the cabling or
wiring system.[1]
Classification of physical topologies
Point-to-point
The simplest topology is a permanent link between two endpoints.
Switched point-to-point topologies are the basic model of conventional telephony. The value of a
permanent point-to-point network is the value of guaranteed, or nearly so, communications between the
two endpoints. The value of an on-demand point-to-point connection is proportional to the number of
potential pairs of subscribers, and has been expressed as Metcalfe's Law.

Permanent (dedicated)

Easiest to understand, of the variations of point-to-point topology, is a point-to-point communications
channel that appears, to the user, to be permanently associated with the two endpoints. A children's
"tin-can telephone" is one example; a microphone connected to a single public address speaker is
another. These are examples of physical dedicated channels.

Within many switched telecommunications systems, it is possible to establish a permanent circuit.
One example might be a telephone in the lobby of a public building, which is programmed to ring only
the number of a telephone dispatcher. "Nailing down" a switched connection saves the cost of running a
physical circuit between the two points. The resources in such a connection can be released when no
longer needed, for example, a television circuit from a parade route back to the studio.

Switched:

Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up
dynamically, and dropped when no longer needed. This is the basic mode of conventional telephony.

Bus network topology


In local area networks where bus topology is used, each machine is connected to a single
cable. Each computer or server is connected to the single bus cable through some kind of
connector. A terminator is required at each end of the bus cable to prevent the signal from
bouncing back and forth on the bus cable. A signal from the source travels in both directions to
all machines connected on the bus cable until it finds the MAC address or IP address on the
network that is the intended recipient. If the machine address does not match the intended
address for the data, the machine ignores the data. Alternatively, if the data does match the
machine address, the data is accepted. Since the bus topology consists of only one wire, it is
rather inexpensive to implement when compared to other topologies. However, the low cost of
implementing the technology is offset by the high cost of managing the network. Additionally,
since only one cable is utilized, it can be the single point of failure. If the network cable breaks,
the entire network will be down.

Linear bus

The type of network topology in which all of the nodes of the network are connected to a
common transmission medium which has exactly two endpoints (this is the 'bus', which is also
commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes
in the network is transmitted over this common transmission medium and is able to be received
by all nodes in the network virtually simultaneously (disregarding propagation delays)[1].

Note: The two endpoints of the common transmission medium are normally terminated with a
device called a terminator that exhibits the characteristic impedance of the transmission
medium and which dissipates or absorbs the energy that remains in the signal to prevent the
signal from being reflected or propagated back onto the transmission medium in the opposite
direction, which would cause interference with and degradation of the signals on the
transmission medium (See Electrical termination).

Distributed bus

The type of network topology in which all of the nodes of the network are connected to a
common transmission medium which has more than two endpoints that are created by adding
branches to the main section of the transmission medium – the physical distributed bus
topology functions in exactly the same fashion as the physical linear bus topology (i.e., all
nodes share a common transmission medium).

Notes:

1.) All of the endpoints of the common transmission medium are normally terminated with a
device called a 'terminator' (see the note under linear bus).

2.) The physical linear bus topology is sometimes considered to be a special case of the
physical distributed bus topology – i.e., a distributed bus with no branching segments.

3.) The physical distributed bus topology is sometimes incorrectly referred to as a physical tree
topology – however, although the physical distributed bus topology resembles the physical tree
topology, it differs from the physical tree topology in that there is no central node to which any
other nodes are connected, since this hierarchical functionality is replaced by the common bus.

Star network topology

In local area networks with a star topology, each network host is connected to a central hub. In
contrast to the bus topology, the star topology connects each node to the hub with a point-to-
point connection. All traffic that traverses the network passes through the central hub. The hub
acts as a signal booster or repeater. The star topology is considered the easiest topology to
design and implement. An advantage of the star topology is the simplicity of adding additional
nodes. The primary disadvantage of the star topology is that the hub represents a single point
of failure.

Notes

 A point-to-point link (described above) is sometimes categorized as a special instance of the
physical star topology – therefore, the simplest type of network that is based upon the physical star
topology would consist of one node with a single point-to-point link to a second node, the choice of
which node is the 'hub' and which node is the 'spoke' being arbitrary[1].

 After the special case of the point-to-point link, as in note 1.) above, the next simplest
type of network that is based upon the physical star topology would consist of one
central node – the 'hub' – with two separate point-to-point links to two peripheral nodes
– the 'spokes'.

 Although most networks that are based upon the physical star topology are commonly
implemented using a special device such as a hub or switch as the central node (i.e.,
the 'hub' of the star), it is also possible to implement a network that is based upon the
physical star topology using a computer or even a simple common connection point as
the 'hub' or central node – however, since many illustrations of the physical star
network topology depict the central node as one of these special devices, some
confusion is possible, since this practice may lead to the misconception that a physical
star network requires the central node to be one of these special devices, which is not
true because a simple network consisting of three computers connected as in note 2.)
above also has the topology of the physical star.

 Star networks may also be described as either broadcast multi-access or nonbroadcast
multi-access (NBMA), depending on whether the technology of the network either
automatically propagates a signal at the hub to all spokes, or only addresses individual
spokes with each communication.

Extended star

A type of network topology in which a network that is based upon the physical star topology has
one or more repeaters between the central node (the 'hub' of the star) and the peripheral or
'spoke' nodes, the repeaters being used to extend the maximum transmission distance of the
point-to-point links between the central node and the peripheral nodes beyond that which is
supported by the transmitter power of the central node or beyond that which is supported by
the standard upon which the physical layer of the physical star network is based.

If the repeaters in a network that is based upon the physical extended star topology are
replaced with hubs or switches, then a hybrid network topology is created that is referred to as
a physical hierarchical star topology, although some texts make no distinction between the two
topologies.

Distributed Star

A type of network topology that is composed of individual networks that are based upon the
physical star topology connected together in a linear fashion – i.e., 'daisy-chained' – with no
central or top level connection point (e.g., two or more 'stacked' hubs, along with their
associated star connected nodes or 'spokes').

Ring network topology

A network topology that is set up in a circular fashion. Data travels around the ring in one
direction, and each device on the ring acts as a repeater to keep the signal strong as it travels.
Each device incorporates a receiver for the incoming signal and a transmitter to send the data
on to the next device in the ring. The network is dependent on the ability of the signal to travel
around the ring. [5]

Mesh
The value of fully meshed networks is proportional to the exponent of the number of subscribers,
assuming that communicating groups of any two endpoints, up to and including all the endpoints, is
approximated by Reed's Law.
Fully connected mesh topology

The number of connections in a full mesh = n(n - 1) / 2



Note: The physical fully connected mesh topology is generally too costly and complex for
practical networks, although the topology is used when there are only a small number of nodes
to be interconnected.
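
The formula above is easy to check with a few lines of code. This small Python helper computes the
number of links in a full mesh and shows how quickly the cabling cost grows with the number of nodes.

def full_mesh_links(n):
    """Number of point-to-point links needed to fully connect n nodes."""
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(n, "nodes ->", full_mesh_links(n), "links")
# 3 nodes need 3 links, 5 need 10, and 10 already need 45.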

Partially connected mesh topology


The type of network topology in which some of the nodes of the network are connected to more
than one other node in the network with a point-to-point link – this makes it possible to take
advantage of some of the redundancy that is provided by a physical fully connected mesh
topology without the expense and complexity required for a connection between every node in
the network.

Note: In most practical networks that are based upon the physical partially connected mesh
topology, all of the data that is transmitted between nodes in the network takes the shortest
path (or an approximation of the shortest path) between nodes, except in the case of a failure
or break in one of the links, in which case the data takes an alternative path to the destination.
This requires that the nodes of the network possess some type of logical 'routing' algorithm to
determine the correct path to use at any particular time.

Tree network topology

Also known as a hierarchical network.

The type of network topology in which a central 'root' node (the top level of the hierarchy) is connected
to one or more other nodes that are one level lower in the hierarchy (i.e., the second level) with a point-
to-point link between each of the second level nodes and the top level central 'root' node. Each of the
second level nodes that are connected to the top level central 'root' node will also have one or more
other nodes that are one level lower in the hierarchy (i.e., the third level) connected to it, also with a
point-to-point link, the top level central 'root' node being the only node that has no other node above it
in the hierarchy (the hierarchy of the tree is symmetrical). Each node in the network has a specific
fixed number of nodes connected to it at the next lower level in the hierarchy, this number being
referred to as the 'branching factor' of the hierarchical tree. This tree has individual peripheral nodes.
1.) A network that is based upon the physical hierarchical topology must have at least three
levels in the hierarchy of the tree, since a network with a central 'root' node and only one
hierarchical level below it would exhibit the physical topology of a star.

2.) A network that is based upon the physical hierarchical topology and with a branching factor
of 1 would be classified as a physical linear topology.

3.) The branching factor, f, is independent of the total number of nodes in the network and,
therefore, if the nodes in the network require ports for connection to other nodes the total
number of ports per node may be kept low even though the total number of nodes is large –
this makes the effect of the cost of adding ports to each node totally dependent upon the
branching factor and may therefore be kept as low as required without any effect upon the total
number of nodes that are possible.

4.) The total number of point-to-point links in a network that is based upon the physical
hierarchical topology will be one less than the total number of nodes in the network.

5.) If the nodes in a network that is based upon the physical hierarchical topology are required
to perform any processing upon the data that is transmitted between nodes in the network, the
nodes that are at higher levels in the hierarchy will be required to perform more processing
operations on behalf of other nodes than the nodes that are lower in the hierarchy.
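
Notes 3.) and 4.) above can be illustrated with a short calculation. The following Python sketch
assumes a perfectly symmetrical tree, as described, and confirms that the number of point-to-point
links is always one less than the number of nodes.

def tree_counts(branching_factor, levels):
    # One root at level 0, f nodes at level 1, f**2 at level 2, and so on.
    nodes = sum(branching_factor ** level for level in range(levels))
    links = nodes - 1           # one point-to-point link per non-root node
    return nodes, links

nodes, links = tree_counts(branching_factor=2, levels=3)   # 1 + 2 + 4 nodes
print(nodes, links)   # 7 nodes, 6 links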

Logical topology
The logical topology, in contrast to the "physical", is the way that the signals act on the network media,
or the way that the data passes through the network from one device to the next without regard to the
physical interconnection of the devices. A network's logical topology is not necessarily the same as its
physical topology. For example, twisted pair Ethernet is a logical bus topology in a physical star
topology layout. While IBM's Token Ring is a logical ring topology, it is physically set up in a star
topology.
Classification of logical topologies
The logical classification of network topologies generally follows the same classifications as those in the
physical classifications of network topologies, the path that the data takes between nodes being used to
determine the topology as opposed to the actual physical connections being used to determine the
topology.
Notes:

1.) Logical topologies are often closely associated with media access control (MAC) methods
and protocols.

2.) The logical topologies are generally determined by network protocols as opposed to being
determined by the physical layout of cables, wires, and network devices or by the flow of the
electrical signals, although in many cases the paths that the electrical signals take between
nodes may closely match the logical flow of data, hence the convention of using the terms
'logical topology' and 'signal topology' interchangeably.

3.) Logical topologies are able to be dynamically reconfigured by special types of equipment
such as routers and switches.

Daisy chains

Except for star-based networks, the easiest way to add more computers into a network is by daisy-
chaining, or connecting each computer in series to the next. If a message is intended for a computer
partway down the line, each system bounces it along in sequence until it reaches the destination. A
daisy-chained network can take two basic forms: linear and ring.
 A linear topology puts a two-way link between one computer and the next. However, this was
expensive in the early days of computing, since each computer (except for the ones at each
end) required two receivers and two transmitters.
 By connecting the computers at each end, a ring topology can be formed. An advantage of the
ring is that the number of transmitters and receivers can be cut in half, since a message will
eventually loop all of the way around. When a node sends a message, the message is
processed by each computer in the ring. If a computer is not the destination node, it will pass
the message to the next node, until the message arrives at its destination. If the message is not
accepted by any node on the network, it will travel around the entire ring and return to the
sender. This potentially results in a doubling of travel time for data.


Centralization

The star topology reduces the probability of a network failure by connecting all of the peripheral nodes
(computers, etc.) to a central node. When the physical star topology is applied to a logical bus network
such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from
any peripheral node to all peripheral nodes on the network, sometimes including the originating node.
All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the
central node only. The failure of a transmission line linking any peripheral node to the central node will
result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be
unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all
of the peripheral nodes also.

If the central node is passive, the originating node must be able to tolerate the reception of an echo of
its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central
node) plus any delay generated in the central node. An active star network has an active central node
that usually has the means to prevent echo-related problems.

A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in
a hierarchy. This tree has individual peripheral nodes (e.g. leaves) which are required to transmit to and
receive from one other node only and are not required to act as repeaters or regenerators. Unlike the
star network, the functionality of the central node may be distributed.

As in the conventional star network, individual nodes may thus still be isolated from the network by a
single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is
isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from
the rest.

In order to alleviate the amount of network traffic that comes from broadcasting all signals to all nodes,
more advanced central nodes were developed that are able to keep track of the identities of the nodes
that are connected to the network. These network switches will "learn" the layout of the network by
"listening" on each port during normal data transmission, examining the data packets and recording the
address/identifier of each connected node and which port it's connected to in a lookup table held in
memory. This lookup table then allows future transmissions to be forwarded to the intended destination
only.
Decentralization

In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes with two or
more paths between them to provide redundant paths to be used in case the link providing one of the
paths fails. This decentralization is often used to advantage to compensate for the single-point-failure
disadvantage that is present when using a single device as a central node (e.g., in star and tree
networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The
number of arbitrary forks in mesh networks makes them more difficult to design and implement, but
their decentralized nature makes them very useful. This is similar in some ways to a grid network,
where a linear or ring topology is used to connect systems in multiple directions. A multi-dimensional
ring has a toroidal topology, for instance.

A fully connected network, complete topology or full mesh topology is a network topology in which there
is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n-1)/2
direct links. Networks designed with this topology are usually very expensive to set up, but provide a
high degree of reliability due to the multiple paths for data that are provided by the large number of
redundant links between nodes. This topology is mostly seen in military applications. However, it can
also be seen in the file sharing protocol BitTorrent in which users connect to other users in the "swarm"
by allowing each user sharing the file to connect to other users also involved. Often in actual usage of
BitTorrent any given individual node is rarely connected to every single other node as in a true fully
connected network but the protocol does allow for the possibility for any one node to connect to any
other node when sharing files.
Hybrids

Hybrid networks use a combination of any two or more topologies in such a way that the resulting
network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree
network connected to a tree network is still a tree network, but two star networks connected together
exhibit a hybrid network topology. A hybrid topology is always produced when two different basic
network topologies are connected. Two common examples of hybrid networks are the star-ring network
and the star-bus network:
 A Star ring network consists of two or more star topologies connected using a multistation
access unit (MAU) as a centralized hub.
 A Star Bus network consists of two or more star topologies connected using a bus trunk (the
bus trunk serves as the network's backbone).

While grid networks have found popularity in high-performance computing applications, some systems
have used genetic algorithms to design custom networks that have the fewest possible hops in
between different nodes. Some of the resulting layouts are nearly incomprehensible, although they
function quite well.


A Snowflake topology is really a "Star of Stars" network, so it exhibits characteristics of a hybrid
network topology but is not composed of two different basic network topologies connected together.

Lesson 2: The World Wide Web (WWW) and the Internet

Introduction

In this lesson you will be introduced to the World Wide Web, commonly referred to as the WWW, and
the Internet, also known as the Net. You will also look at the history of both and what their uses are.

Lesson Objectives

At the end of this lesson you will be able to:

 Appreciate the history of the World Wide Web and the Internet.
 Learn what is needed to get on the Internet.
 Understand generally what an Internet service provider does.
 Know the rudimentary functions of a browser.
 Understand how to search the Internet.
 Appreciate the non-Web parts of the Internet.


 Appreciate the ongoing problems associated with the Internet.

DEFINITION OF INFORMATION TECHNOLOGY

Information Technology (IT) is the Technology which supports activities involving the creation, storage,
manipulation and communication of information, together with their related methods, management and
application. Therefore, information technology may be seen as the broadly based technology needed
to support information systems.

In other words, Information Technology is “the use of computers and other associated devices to
store, manipulate and handle data”. When we say manipulate data, we mean to use special skills to
achieve the intended objective. Associated devices in this case refers to hardware or equipment that
works hand in hand with a computer; we will look at that soon.

Information Technology has been called the technology of the future, and more and more money is
being invested in it. It has made communication cheaper, faster, more reliable, safer and more
effective. We are now able to communicate messages at nearly the speed of light! Imagine, in the
case of Zambia, sending a message from Chililabombwe to Choma 20 years ago. It could take many
days, and sometimes months, before it was delivered, and if it carried a funeral or wedding
announcement, you would often be too late to attend.

Now, with information technology, we have passed those barriers! We are able to send electronic
messages (e-mail) to any part of the world in less than a minute! We are also able to pay bills or
check information on our bank accounts using a cell phone. And there is much more to information
technology.

USE OF INFORMATION TECHNOLOGY

Offices
Since the development of microcomputers in the 1970s, the office has been undergoing rapid change.
The availability of cheap processing power has made it possible to automate many office functions,
such as message distribution, record keeping and the filing and classifying of documents.

One of the most significant innovations in the office is word processing. This could be a machine
dedicated to word processing tasks or a general purpose computer driven by a word processing
program. A word processor simplifies the tasks of preparing documents, typing letters and producing
reports.

Education
Computers are being used in education in a variety of ways. In the most direct way, universities,
secondary schools and other institutions give courses in which students learn about computers and
related topics. However, computers are also widely used in the teaching of other subjects, for example,
in subjects like chemistry and physics, laboratory experiments can be simulated on the computer. The
experiments can be repeated many times without having to use actual chemicals or equipment. Thus
potentially expensive or dangerous experiments can be performed with little danger or expense.

Computer Assisted Instruction (CAI) is the name given to a teaching system (program) which operates
on drill and practice principles. For example, when learning the meaning of English words, a student
sits at a terminal and is presented with a word and its meaning. The student studies this and, when
satisfied, tells the computer she is ready for questions. The computer may then present several
sentences in which the word is used. The student is asked to indicate the correct usage. The word can
be presented as many times as the student wishes; the computer does not get irritated or bored. This
type of program is useful where memorization of material is important.
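
A drill and practice program of this kind can be surprisingly simple. The following toy Python sketch
uses a small hypothetical word list; a real CAI system would keep a record of the student's
performance and vary the material accordingly.

questions = {
    "The word 'terminate' means to ...": "end",
    "The word 'commence' means to ...": "begin",
}

for question, answer in questions.items():
    reply = input(question + " ")               # present the word; read the answer
    if reply.strip().lower() == answer:
        print("Correct!")
    else:
        print(f"Not quite - the expected answer was '{answer}'. Try again later.")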

Another type of system is Computer Assisted Learning (CAL). This one tries to teach in the traditional
sense. The computer presents material, asks questions and, based on the student's performance,
determines whether to present new material or review topics already covered. In this way, students can
proceed at the pace which suits them.

In certain subjects, some concepts are difficult to explain using words and diagrams. In these cases,
the computer can be used to explain such concepts more vividly by using its graphics capability to
display pictures, images, etc. Apart from its ability to teach, the computer can also give a student an
environment in which children can explore various worlds, discovering things for themselves.

Health
Computers have long been used to assist in hospital administration. Employee and patient records
have formed the bulk of the data processing activities. Computerisation has led to better and more
efficient services at these hospitals. But computers are also playing a more direct part in health care.

Computer controlled devices can be used to monitor a patient's condition – things like temperature and
heartbeat – and sound an alarm if there is any change from normal. This means that nurses do not
have to keep a constant watch on the patient, and the computer never falls asleep.
The linking of computers and telecommunications is already having a significant influence on the lives
of handicapped workers. One benefit is that it is now possible for them to stay at home and perform
office work on home computers. Work assignments can be sent along telephone lines, and the
completed assignments can be returned the same way.

Computers with vast storage of medical facts can help doctors with diagnosis. These facts form the
basis of what are called knowledge based systems or expert systems. An expert system is a computer
program designed to operate at the level of an expert in a particular field.

Another popular medical expert system is MYCIN, developed at Stanford University. MYCIN is
concerned with the diagnosis of blood and meningitis infections. It can also advise a doctor on a
course of treatment for the infection. Computers can also quickly retrieve patient information stored on
secondary storage devices.
Home
A few years ago, no one had a computer at home. Today hundreds of thousands, even millions, of
people own a home computer. The microchip made it possible to produce a computer that ordinary
people could afford. Many people use it for recreational activities, like playing games such as chess,
bridge, Pac-Man and flight simulators; the opponent can even be someone on a distant computer.

Others use it as an educational tool to learn about various topics, such as foreign languages,
mathematics or even computer science. A popular use is for storing personal information such as birth
dates, addresses, telephone numbers, record albums and income tax returns. It is also used as a home
accountant to keep track of expenses and to balance the family budget.

The home computer can be programmed to turn on electrical appliances or lights at a pre-set time. It
can also be used to monitor and control the temperature of a room by wiring it to air conditioning or
heating units.

128
ZAMCOL Secondary Diploma: COMPUTER STUDIES Module 1

Nowadays, it is possible to access national and international databases via home computers and the
telephone.

ESSENTIAL EQUIPMENT USED IN INFORMATION TECHNOLOGY


a. MODEM

A modem stands for Modulator-Demodulator. When you are communicating using the Internet or
sending e-mail over a telephone line, you can never successfully communicate without a modem. The
purpose of the modem is to convert digital signals to analog signals and vice versa. Computers only
understand ones and zeros (digital data). When you send an electronic message, the signal passes
through telephone lines, and telephone lines carry analog signals; for the computer's signal to pass
through, it must first be converted into analog form, and the equipment responsible for that is the
modem. When the signal is received at the other end, it is converted back into digital form for the
computer to understand. The initial exchange in which two modems negotiate a connection is called a
handshake. To connect many computers together on a network, the equipment used is a server.

b. A fax

A fax cannot be ignored as a tool in Information Technology. You can actually connect a fax to a
computer as an alternative to e-mail. In simple terms, a fax machine can be compared to a long
distance photocopier. You can send documents to any place on earth at a very reasonable speed, just
like e-mail.

Photocopier
Should you want to reproduce a document in its original form, a photocopier is the best equipment to
use.

Scanner

With a scanner you are able to capture imagery like maps, photos and even text into the computer so
that you can print it out, or simply edit or improve it using computer software. There are two main
types of scanners: handheld scanners and flatbed scanners. A handheld scanner fits in the hand, and
the user scans the image by moving it by hand. A flatbed scanner is somewhat different, because it is
bigger and sits on a desk. What is important is to know that the function is the same, that is, to
capture the image!

129
ZAMCOL Secondary Diploma: COMPUTER STUDIES Module 1

Various disks

A CD drive

We can use different types of disks to save and store documents as secondary storage: magnetic
tapes, floppy disks and recordable CDs, to mention but a few.

Burners

Burners, or CD writers, are among the most brilliant technological inventions of recent years. A burner
is equipment used to write music files, data and even videos onto a disk using a computer. It is called
a burner because it really does burn the CD (without destroying it) as it saves or writes the files to the
disk. There are different types of burners: there is the DVD burner or writer, and then there is a
standard burner which is used to write VCDs, data files and music. What we are talking about here is
format! If, for instance, you want to make a VCD, you can buy a standard recordable CD (labelled
CD-R) or a rewritable one (labelled CD-RW). A CD-RW disk can be erased and rewritten; on a CD-R
you cannot delete the contents, so you need to make sure that you are buying the right one, and the
choice here is yours. For DVDs, the disk you need to buy is labelled DVD. The good part about CDs is
that they give you more in terms of storage space.

Digital camera

A digital camera cannot be ruled out; it is very important in Information Technology. In the advertising
industry, for example, to prepare graphics for a billboard the photo is, at present, first captured using a
digital camera, then edited using graphics software like CorelDRAW or Photoshop, and then printed
out. Even a camera like the Samsung D-500 can be used in this instance.

Video Camera

A video camera is also very important in Information Technology. We can use a video camera to
produce VCDs and DVDs. But perhaps the most important contribution of a video camera is in holding
computer based meetings (net meetings) or video conferencing, where people communicate over the
Internet and can actually see each other while very far apart. One could be in India, the other in
Solwezi, Zambia, but they see each other live and direct. Currently the University of Namibia uses this
technology to lecture to students when the lecturer cannot be there in person.
130
ZAMCOL Secondary Diploma: COMPUTER STUDIES Module 1

Projectors
When we talk about computerized presentations, one cannot do away with a projector. A projector is
equipment used to display a computer presentation on a projector screen for a large audience. You can
also connect a projector to a video cassette or DVD player so that you can watch movies on a big
screen.

Microphones
Microphones are used very widely in the production of music. To produce music on a CD, a
microphone is used to enter the sound into a computer program.

ACTIVITY
1. What is information technology?
2. Why is information technology essential in everyday life?
3. What is the use of the modem?

SUMMARY

Information technology is the use of computers and other associated devices, to store, manipulate and
handle data.

A modem stands for Modulator-Demodulator.

When you are communicating using the Internet or sending e-mail over a telephone line, you can never
successfully communicate without a modem. The purpose of the modem is to convert digital signals to
analogue signals and vice versa.

Computer Assisted Instruction (CAI) is the name given to a teaching system (program) which operates
on drill and practice principles.

SELF CHECK EXERCISE

Distinguish between the two methods.

1. Computer Assisted Instruction


(CAI).............................................................................................................
.......................................................................................................................................................
...................................................................................

2. Computer Assisted Learning (CAL)..................................................................................


.....................................................................................................................

3. List down essential equipment used in information technology.


.......................................................................................................................................................
.......................................................................................................................................................
.......................................................................................................................................................
..............................................................................................

4. Why do offices, education and you need information


technology...................................................................................................


.......................................................................................................................................................
...................................................................................

SUGGESTED ANSWERS TO THE ACTIVITY

1. Information Technology is “the use of computers and other associated devices, to store,
manipulate and handle data”.
2. It is essential because it has made work easier, cheaper, more reliable and more effective.

3. The purpose of the modem is to convert digital signals to analogue signals and vice versa

SUGGESTED ANSWERS TO THE SELF CHECK EXERCISE

1. The difference between the two is that:

Computer Assisted Instruction (CAI) operates on drill and practice
principles.

2. Computer Assisted Learning (CAL) presents material, asks questions and, based on the
student’s performance, determines whether to present new material or review topics
already covered.

3. Modem, fax, scanner, burners, digital camera, video camera, projectors, microphones.

4. Offices, education and I need information technology because technology has made
communication, office work, teaching and studies cheaper, faster, more reliable, safer and effective.

Lesson 2. COMPUTERS IN BUSINESS

INTRODUCTION
This lesson talks about the role of computers in business and how they have benefited stakeholders
and other users of computers.

OBJECTIVES
At the end of this lesson you should be able to:
 Understand the role of computers in business.
 Know different instruments used in business.

BANKING

One of the earliest uses of computers was in the banking industry. Initially, a typical bank had a
central computer centre which processed the data from the various branches. For example, the
withdrawal/deposit slips collected by a branch during the day would be forwarded to the computer
centre (usually located at the head office) when the branch closed to the public. The customers’
accounts would be updated and a report on the current status of the accounts would be prepared for
use the next (banking) day.

Nowadays, the clerk or teller enters transactions directly via a terminal. The terminal is connected to
the computer, which can access the files of customers. Accounts can be queried and/or updated in a
few seconds. Because the clerk does not have to look through volumes of paper to get the status of a
customer’s account, service is faster. Let us see how this works.

Suppose that a customer named Keith Clark wishes to make a withdrawal of K100,000 from his
account, number 76543210. On reaching the clerk, he would state his request. The clerk would type
the account number 76543210 on her terminal. The computer would accept (we say ‘read’) this
number and would use it to retrieve Keith Clark’s banking record, containing such details as his name,
address, telephone number, occupation, employer, balance in the account, etc.

In this situation, the clerk only needs to know whether this is, in fact, Keith Clark’s account number
and what the balance in the account is. The computer would then display this information on the
screen.

Also coming into popular use is the Automated Teller Machine (ATM), also called a Bank Cash Point.
This is a computer controlled device at which a customer can make withdrawals or deposits, check the
balance in her account or even request an appointment with the loans officer, all without involving a
human operator. In order to use the system, a customer is issued with a plastic card which is coded
with her name and account number, together with a personal identification number (PIN) which does
not appear on her card. This number is stored as part of her customer record, which can be accessed
by the computer.

Manufacturing
Computers can also be used to monitor and/or control the processes which convert input materials
into useful products.

This aspect of computer use is called process control. Various instruments relay information such as
temperatures and pressures to the computer. The computer monitors these values and, if any
abnormalities are found, can direct the regulating devices to make the necessary adjustments; for
example, when the temperature falls below 80°C, a switch is turned on.

Temperature is measured continuously as an analog signal; before it can be processed by the
microprocessor, it must be converted to a digital form. This is accomplished by an analog-to-digital
converter.
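
A process control loop of this kind can be sketched as follows. The read_temperature() function below
is a hypothetical stand-in for the instrument and its analog-to-digital converter, and the 80°C
threshold is taken from the example above.

import random
import time

def read_temperature():
    # Stand-in for the analog-to-digital converter reading the sensor.
    return random.uniform(70.0, 90.0)

THRESHOLD = 80.0

for _ in range(5):                      # a real controller would loop forever
    temperature = read_temperature()
    if temperature < THRESHOLD:
        print(f"{temperature:.1f} C: below threshold, heater switch ON")
    else:
        print(f"{temperature:.1f} C: normal, heater switch OFF")
    time.sleep(0.1)                     # sample at a fixed interval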

Retailers
Computers are becoming more and more widely used in retail stores. One of the common uses is in
the supermarket industry, which uses computers to help with the checkout procedure at the counter.
The goods are identified by means of barcodes. These are a series of light and dark bars which
uniquely identify the product, for example, a 170 gram tin of Nestle cream.

When the cashier passes the item over the barcode reader, the reader recognises the barcode,
converts it to a number (the item number), say 2495, and stores the number in the computer’s
memory. The computer then searches the item records in secondary storage, looking for one with the
number 2495. The record with this number is retrieved and stored in the computer’s memory. The
computer can then pass the name and the price to the register to be printed. The time between
reading the barcode and printing the name and price is only a few seconds.

The computer can also subtract 1 from the quantity in stock, making it, say, 18. The updated record is
then written back to its original location in secondary storage. The system can automatically keep
track of the number of items in stock and, if the number falls below a specified level, can indicate that
an order needs to be placed.

With a system like this, we can be certain that correct prices are printed on the receipt, provided, of
course, that care was taken to ensure that correct prices were stored in the item records in the first
place. The important feature here is that, the cashier does not have to enter the price of the item, so
the possibility of entering an incorrect value is eliminated.
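
The checkout procedure described above can be sketched as follows. The item file here is a
hypothetical in-memory dictionary; a real system would read and write item records in secondary
storage.

item_file = {
    2495: {"name": "Nestle cream 170 g", "price": 5000, "in_stock": 19},
}

def scan(item_number):
    record = item_file[item_number]                  # retrieve the item record
    print(f"{record['name']}  K{record['price']}")   # name and price go on the receipt
    record["in_stock"] -= 1                          # subtract 1 from the stock on hand
    if record["in_stock"] < 5:                       # reorder level (assumed here)
        print("Stock low: an order needs to be placed")

scan(2495)   # stock drops from 19 to 18, as in the text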

ACTIVITY
1. The earliest to use the computers for business was the ............................

2. What did they use these computers for in their business?.................

SUMMARY
 One of the earliest uses of computers was in the banking industry.

 Nowadays, the clerk or teller enters transaction directly via a terminal. The terminal is
connected to the computer which can access files of customers.

 When the cashier passes the item over the barcode reader, the reader recognises the barcode,
converts it to a number (the item number), say 2495, and stores the number in the computer’s
memory.

SELF CHECK EXERCISE


What is the other word used for Automated Teller Machine? Write and briefly explain how it
works..................................................................................
.............................................................................................................................
SUGGESTED ANSWERS TO THE ACTIVITY AND SELF CHECK EXERCISE

ACTIVITY

 The first industry to use computers for business was banking.

SELF CHECK EXERCISE


The other word for an Automated Teller Machine is a Bank Cash Point. It is a computer controlled
device at which a customer, using a plastic card and a PIN, can make withdrawals or deposits without
involving a human operator.

UNIT 4: HARDWARE SECURITY

Lesson 1: Information System Control

INTRODUCTION


With the proliferation of computers and computer systems, it has become apparent that information
systems should be secured and threats to personal privacy rights avoided. For this to occur, in addition
to other factors, legislation to prevent abuse should be in place. If you own something that you value,
you look after it. You keep it somewhere safe, you regularly check to see that it is in good condition
and you don’t allow it to upset others. Information is a valuable possession and it deserves similar
care.

OBJECTIVES
At the end of this lesson you should be able to:
 Define the term security.
 Understand the difference between security and control.
 Know measures that should be taken into consideration.
 Know the types of information system controls.

DEFINITION OF SECURITY
Security, in information management terms, means the protection of data from accidental or deliberate
threats which might cause unauthorised modification, disclosure or destruction of data, and the
protection of the information system from the degradation or non-availability of services. Security
refers to technical issues related to the computer system, to psychological and behavioural factors in
the organization and its employees, and to protection against the unpredictable occurrences of the
natural world.

Security may be defined as ‘protection against attack or failure”. Computer based systems are
designed to perform particular functions or provide particular services and any loss of security may
result in the inability or failure to do these things in the way intended. The possible causes of loss of
security can be divided into two very basic types: accidental risks and deliberate risks.

In this context, risk means an occurrence or activity which could result in a loss of security. The choice
of a counter measure for a particular risk depends on the basic type to which it belongs.

The basic types of risk may be conveniently classified by describing their primary effect as one of the
following:

 Interruption: of data preparation, etc.
 Disclosure: of confidential or proprietary information, etc.
 Corruption: of stored data, or injury to personnel, etc.
 Removal: of equipment or information, etc.
 Destruction: of programs or storage media, etc.

Security can be divided into a number of aspects indicated below:


Prevention: It is in practice impossible to prevent all threats cost-effectively.

Detection: Detection techniques are often combined with prevention techniques; a log can be
maintained of unauthorised attempts to gain access to a computer system.

Deterrence: As an example, computer misuse by personnel can be made grounds for dismissal.

Recovery procedures: If the threat occurs, its consequences can be contained (e.g. checkpoint
programs).


Correction procedures: these ensure vulnerability is dealt with (for example, by instituting stricter
controls).
Threat avoidance: This might mean changing the design of the system.
THREATS TO SECURITY, PRIVACY AND CONFIDENTIALITY IN ITS OPERATIONS
The main issues that confront in relation to securing data/information are as follows:

Confidentiality: Information should be protected from unauthorised internal users and external hackers,
and from being intercepted during transmission on communication networks, by making it unintelligible
to the attacker. The content should be transformed in such a way that it is not decipherable by anyone
who does not know the transformation algorithm.

Integrity: On retrieval, or on receipt at the other end of a communication network, the information
should appear exactly as it was stored or sent. It should be possible to generate an alert on any
modification, addition or deletion to the original content. Integrity also precludes message replay – a
fresh copy of the data is generated/re-sent using the authorisation features of the earliest authentic
message. Suitable mechanisms are required to ensure end-to-end message content and copy
authentication.
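
A common mechanism for detecting such modification is a message digest. The following minimal
Python illustration uses the standard hashlib module: any change to the message changes the digest,
so the receiver can raise an alert.

import hashlib

message = b"Pay K100,000 to account 76543210"
digest = hashlib.sha256(message).hexdigest()   # stored or sent along with the message

received = b"Pay K900,000 to account 76543210"          # tampered with in transit
if hashlib.sha256(received).hexdigest() != digest:
    print("Alert: the message was modified, added to, or deleted from")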

Availability: The information that is being stored or transmitted across a communication network
should be available whenever required and to whatever extent is desired, within pre-established time
constraints. Network errors, power outages, operational errors, application software errors, hardware
problems and viruses are some of the causes of unavailability of information. The mechanisms for
implementation of counter measures to these threats are available but are usually beyond the scope of
end-to-end message security.

Authenticity: It should be possible to prevent any person or object from masquerading as some other
person or object. When a message is received, it should therefore be possible to verify whether it has
indeed been sent by the person or object claimed to be the originator. Similarly, it should also be
possible to ensure that the message is sent to the person or object for which it is meant. This implies
the need for reliable identification of the originator and recipient of data.

Non-repudiability: After sending or authorising a message, the sender should not be able, at a later
date, to deny having done so. Similarly, the recipient of a message should not be able to deny receipt at
a later date. It should be possible to bind messages and message acknowledgements to their
originators.
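Digital signatures are the usual mechanism for this binding. A minimal sketch using the third-party
Python cryptography package (the tool choice is an assumption; the module does not prescribe one):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # held only by the sender
    public_key = private_key.public_key()        # distributed to recipients

    message = b"I authorise payment of K500"
    signature = private_key.sign(message)        # binds the message to its originator

    try:
        public_key.verify(signature, message)    # anyone can verify the binding
        print("Valid signature: the sender cannot later deny sending it")
    except InvalidSignature:
        print("Signature does not match the message")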

TYPES OF INFORMATION SYSTEM CONTROLS


The principal general controls for computerised systems are:

 System implementation controls: ensure that the entire systems development process is
properly managed.

 Software Controls: Prevent unauthorised changes to computer programs and ensure reliability
of systems software.

 Physical hardware controls: Ensure continued processing in the event of hardware malfunction or
breakdown.

 Computer operations controls: Monitor computer operations and check for errors.


 Data security controls: Prevent unauthorised changes or access to data.

 Administrative disciplines: Mechanisms for ensuring that controls are enforced and monitored
by management.

CONTROLLED ACCESS TO INFORMATION – ADMINISTRATIVE CONTROLS

Denying access to the system to all who should not have access can prevent most computer crimes.
The two widely used mechanisms for controlling access to computer systems are identification codes
and passwords.

The first way is to have each user appropriately authorised to access the computer or network in
question. This involves assigning users unique IDs, or identification codes, that they must use to access
the network. The second control mechanism is to issue each user a password.

With these controls in place, in order to access a computer system a user must have a valid ID and a
password. The password can be suitably encrypted within the computer system so that the user alone
knows his or her unique password. As an additional measure of security, users should change their
passwords frequently and make sure that the password is not something that can easily be guessed.

To use such a system, you are first prompted for a user ID. If the ID is accepted, you are then
prompted for a password. If it matches the recorded password for that ID, you are allowed access to
the system. If the password does not match, you are either disconnected or prompted for the correct
password. Usually, after a limited number of unsuccessful tries, the unauthorised user will be locked out
of the system.
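A minimal sketch (not from the original text) of the ID-and-password procedure just described; the user
database, the hashing scheme and the lockout limit are illustrative assumptions:

    import hashlib

    MAX_TRIES = 3
    # ID -> stored hash of the password (so the system never stores it in clear).
    USERS = {"mbewe01": hashlib.sha256(b"s3cret!").hexdigest()}

    def valid(user_id: str, password: str) -> bool:
        stored = USERS.get(user_id)
        return stored is not None and \
               hashlib.sha256(password.encode()).hexdigest() == stored

    def prompt_login() -> bool:
        for _ in range(MAX_TRIES):
            user_id = input("User ID: ")
            password = input("Password: ")
            if valid(user_id, password):
                print("Access granted")
                return True
            print("Invalid ID or password")
        print("Too many unsuccessful tries: you have been locked out")
        return False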

CONTROLLED ACCESS TO FILES – SEGREGATION OF FUNCTIONS


The second step in creating a secure system is to control access to the files of data themselves.
Combinations of passwords and access rights are usually used. Access rights provide that only the
creator (owner) of a given file, or certain restricted classes of users, may read or write to it.
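A hedged sketch of such access rights; the file structure and user IDs are assumptions for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class ProtectedFile:
        name: str
        owner: str                                   # the creator of the file
        readers: set = field(default_factory=set)    # other users allowed to read

    def can_read(f: ProtectedFile, user_id: str) -> bool:
        # Only the creator (owner) or a member of the restricted class may read.
        return user_id == f.owner or user_id in f.readers

    payroll = ProtectedFile("payroll.dat", owner="accounts01", readers={"auditor02"})
    print(can_read(payroll, "accounts01"))   # True: the owner
    print(can_read(payroll, "teacher07"))    # False: not authorised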

DATA ENCRYPTION
Another security technique is data encryption, in which data is processed with a secret key to render it
unintelligible except to the receiver of the file, who holds the necessary key to decrypt the data. Such a
scheme is regularly used in military, banking and telecommunications environments. The
encryption/decryption process can be done with either hardware or software, but it can substantially
slow the rate of data transfer because of the additional encrypting and decrypting work. The technique
not only shields the data from unauthorised users, but also carries with it a measure of the integrity of
the data: if encrypted data has been tampered with, the decryption process will fail.
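As a hedged sketch using the third-party Python cryptography package (the package choice is an
assumption; the module does not name a tool), note how tampering makes decryption fail, exactly as
described above:

    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()       # the secret key shared with the receiver
    cipher = Fernet(key)

    # The encrypted token is unintelligible to anyone without the key.
    token = cipher.encrypt(b"Confidential: examination results")
    print(cipher.decrypt(token))      # the key holder recovers the plaintext

    # A tampered copy fails the built-in integrity check on decryption.
    tampered = token[:-1] + (b"A" if not token.endswith(b"A") else b"B")
    try:
        cipher.decrypt(tampered)
    except InvalidToken:
        print("Decryption failed: the data has been tampered with")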

PHYSICAL CONTROL
The physical environment quite obviously has a major effect on information system security, and so
planning it properly is a vital precondition of an adequate security plan. Proper attention must be given
to things such as humidity control, fire detection and protection devices, power failures and outages,
water damage, safekeeping of documents, and so on. A key to protection is proper backup, with some
firms keeping critical master files in underground locations in areas which are relatively immune from
physical disasters. The main physical security threats are fire, water, weather, lightning, terrorist activity
and accidental damage.


Physical access control aims to prevent intruders getting near the computer equipment or storage
media. Some of the methods of controlling human access include:
 Personnel (security guards)

 Mechanical devices (e.g. keys, whose issue should be recorded).

 Electronic identification devices (e.g. card-swipe system).

The best form of access control would be one which recognises individuals immediately, without the
need for personnel or cards, which can be stolen. However, machines which can identify a person’s
fingerprints or scan the pattern of a retina are still far too expensive for many organisations.

Guidelines for security against physical threats that should be applied within the office include:
 Fireproof cabinets should be used to store files, or lockable metal packs for flash disks.

 If files contain confidential data, they should be kept in a safe.

 Computers with lockable keyboards are sometimes used. Computer terminals should be sited
carefully, to minimise the risk of unauthorised use.

 If computer printout is likely to include confidential data, it should be shredded before it is
eventually thrown away after use.

 Flash disks should not be left lying around an office. They can get lost or stolen.

 The computer’s environment (humidity, temperature, dust etc) should be properly controlled.

For example, a proper fire safety plan is an essential feature of security procedures, in order to prevent
fire, detect fire and put out fire. Fire safety includes site preparation, detection, extinguishing and
training.

Firewalls
For the purpose of business information exchange, end-to-end security mechanisms are critical and
have to be implemented. In the larger context of electronic data transfer, it is extremely important to
secure internal information systems from being attacked from outside, i.e. by hackers. Firewalls are
built to protect the internal network of an organisation from attacks originating from the internet.
Firewalls can be implemented in many ways. Some of these are:

 Establishing rules to decide which packets, depending on the originating IP address, should be
allowed to pass into the organisation’s network (a sketch of such a rule follows this list).

 Establishing proxy servers, so that internal client requests for accessing external services
are routed through the proxy server. This ensures that the client and the external server are
not in direct communication with each other.

 Establishment of an additional network as a buffer between the internal and external networks.
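A minimal sketch of the first approach, rule-based filtering on the originating IP address (the address
ranges are illustrative assumptions, not part of the original text):

    import ipaddress

    # Packets from these source networks are allowed into the internal network.
    ALLOWED_NETWORKS = [
        ipaddress.ip_network("192.168.1.0/24"),   # e.g. a branch office
        ipaddress.ip_network("10.0.0.0/8"),       # e.g. the internal range
    ]

    def allow_packet(source_ip: str) -> bool:
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_NETWORKS)

    print(allow_packet("192.168.1.25"))  # True: matches an allowed rule
    print(allow_packet("203.0.113.9"))   # False: external address is rejected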

Audit and Reliability


Users often have a tendency to trust systems more than they should, to the extent that they frequently
fail to treat the results produced by a computer-based information system with sufficient scepticism.
The need to ensure that adequate controls are included in the system is therefore an essential step in
achieving information security. Auditors must have the ability to validate reports and output to test the
authenticity and accuracy of data and information.

System reliability means that the data are reliable – that they are accurate and believable. It also
includes the element of security, which the analyst evaluates by determining the method and suitability
of protecting the system against unauthorised use. Ensuring that the system has passwords is not
sufficient access protection. Multiple levels of passwords are often needed to allow different staff
members access to those files, databases or capabilities they need. Since not all persons require
the same level of access, many security systems utilise multiple levels of passwords that control the
level of entry into the system an individual is allowed. Even these technical security features are not
adequate if passwords or other methods are displayed whenever they are used: the passwords can be
read from the screen by anyone nearby.
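A minimal sketch of such multi-level access control (the roles and levels are assumptions made for
illustration only):

    # Each role maps to a clearance level; higher numbers mean wider access.
    ACCESS_LEVELS = {"clerk": 1, "teacher": 2, "administrator": 3}

    def may_open(role: str, required_level: int) -> bool:
        # Unknown roles default to level 0, i.e. no access at all.
        return ACCESS_LEVELS.get(role, 0) >= required_level

    print(may_open("teacher", 2))   # True: teachers reach level-2 resources
    print(may_open("clerk", 3))     # False: clerks cannot reach level 3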

Recovery from a Security Loss


Contingency planning
A contingency may be defined as an unscheduled interruption of data-processing capability which
necessitates measures beyond the scope of normal day-to-day operating procedures. Contingency
planning is concerned with the preparation and execution of plans to deal with such contingencies.
Such plans involve:

 The introduction of standby mode of operation, i.e. the provision of alternative resources or
capability in the event of an existing resource or capability becoming unavailable.

 The recovery of a normal mode of operation i.e. the methods and means to be used to return to
a normal state of capability or resource availability.

Procedures to deal with the unexpected, when it does occur, include:

 Recovery/restart procedures, which deal with the steps to bring the system into function after it
has been stopped, either by accident or intentionally.

 Fall-back or standby procedures, which cover the activities to be undertaken when the system
is not available for a period of time. This might require manual recording of transactions or the
relocation of processing to another place.

 Backup procedures, which deal with the copying of files, documentation and other items that
would be needed should operations be disrupted (a sketch of a simple backup routine follows
this list).
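A hedged sketch of such a backup routine (the file names and the backup location are assumptions):

    import shutil
    from datetime import datetime
    from pathlib import Path

    def backup(files, backup_root="backups"):
        # Copy each file into a time-stamped folder so that earlier
        # generations of the backup are preserved.
        dest = Path(backup_root) / datetime.now().strftime("%Y%m%d-%H%M%S")
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy2(f, dest)   # copy2 also preserves file timestamps
        return dest

    # Example with hypothetical files: backup(["students.db", "timetable.doc"])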

RECOVERY PROCEDURES
When attempting to recover from an interruption in processing, whether it has been caused by the
failure of an individual item or by a major disaster, the aim is to restore a normal mode of operation as
soon as possible. The first step is to introduce the planned means of standby operation as a temporary
measure. In those instances where standby is provided by duplication, this will effectively complete the
recovery process. The ease and extent to which individual standby measures can be made to effect
recovery, or at least minimise the problems involved, are matters which should be fully explored both at
the systems design stage and when planning the contingency procedures.

ACTIVITY
1. What is security?.....................................................................................
2. The basic types of risk may be conveniently classified as:
i._________________________________________________________

ii_________________________________________________________

iii________________________________________________________

iv________________________________________________________

v________________________________________________________

SUMMARY
 Security, in information management terms, means the protection of data from accidental or
deliberate threats which might cause unauthorised modification, disclosure or destruction of
data, and the protection of the information system from the degradation or non-availability of
services.

 Security can be defined as protection against attack or failure.

The basic types of risk may be conveniently classified by describing their primary effect as one of the
following: interruption, disclosure, corruption, removal and destruction.
SUGGESTED ANSWERS TO THE ACTIVITY AND SELF CHECK EXERCISE

ACTIVITY

1. Security can be defined as protection against attack or failure.

2. Interruption: of data preparation, etc.
   Disclosure: of confidential or proprietary information, etc.
   Corruption: of stored data, or injury to personnel, etc.
   Removal: of equipment or information, etc.
   Destruction: of programs or storage media, etc.

SELF CHECK EXERCISE

1. What are the main issues that confront an organisation in relation to securing
data/information?...........................................................................
……………………………………………………………………………………………………………..
……………………………………………………………………………………………………………..


2. What do we call the protection of data from accidental or deliberate threats which might
cause unauthorised modification, disclosure or destruction of
data?...................................................
………………………………………………………………………………………………………………

3. List the five basic types of risk that you have learnt and describe their primary effects.

i………………………………………………………………………………………………………….

ii…………………………………………………………………………………………………………

iii………………………………………………………………………………………………………….

iv…………………………………………………………………………………………………………

v…………………………………………………………………………………………………………

ANSWERS TO THE ACTIVITY AND SELF CHECK EXERCISE

ACTIVITY
1. Security, in information management terms, means the protection of data from accidental or
deliberate threats which might cause unauthorised modification, disclosure or destruction of
data, and the protection of the information system from the degradation or non-availability of
services.

2.  Interruption
     Disclosure
     Corruption
     Removal
     Destruction

SELF CHECK EXERCISE

1. Confidentiality, integrity, availability, authenticity and non-repudiability.

2. Security.

3.  Interruption: of data preparation, etc.
     Disclosure: of confidential or proprietary information, etc.
     Corruption: of stored data, or injury to personnel, etc.
     Removal: of equipment or information, etc.
     Destruction: of programs or storage media, etc.

Lesson 2. LEGAL ISSUES

INTRODUCTION
In this lesson we will talk about the legal issues raised by the rapid development of information and
communications technology over the last few years. The lesson also intends to provide an awareness
of some of the issues involved.

OBJECTIVES
At the end of this lesson you should be able to:
 Know the range of new social and legal challenges presented by ICT.
 Be aware of some of the legal issues involved.
 Know of several organisations involved in monitoring the use of software to ensure that
breaches of copyright are avoided.

The rapid development of information and communication technology over the last few years has
presented a range of new social and legal challenges. Some of the ethical questions that are raised
have already been considered in the lesson on the internet. However, the innovatory opportunities for
learning, working and leisure presented by ICT have further consequences for the legal framework in
which they operate, and the rapid pace of change has added an additional layer of complexity to these
tangled arrangements.

This lesson intends to provide an awareness of some of the issues involved. However, it is not
intended to be a comprehensive guide, and if you have a concern over a particular circumstance in
which the legitimacy of an action might be debated, it would be sensible for you to seek further advice,
perhaps from the LEA or an appropriate professional body.

COPYRIGHT
Even before the advent of computers, the question of copyright was a vexed one for schools. The
current legislation in the United Kingdom is, though, unequivocal and thorough. It applies to everything
published (including text and artistic production, which might itself be literary, musical, graphic or
dramatic in nature).

It extends, too, to other technological media apart from computers, such as films, audio and television
programmes. In short, the likelihood is that anything you might want to copy is covered by the
legislation.

As a teacher you will want to ensure that your own position with regard to copyright is sound, and
recognise that, in this area of moral behaviour as in others, you act as a model for the children in your
class.

Some of the confusion over copyright for schools may originate in the concept of “fair dealing”, which is
embodied in the Copyright, Designs and Patents Act 1988. (This Act still forms the basis of legislation in
this country, though its impact has been modified by EU harmonisation.) Becta’s helpful paper (Becta,
2001b) summarises this idea succinctly.

Fair dealing permits certain acts without requiring the permission of the copyright holder. These include
what is reasonable for private study and research. Making multiple copies for classroom use has
been established as being outside these definitions. The inference is clear: educational institutions are
not in any way exempt from copyright legislation. However, many schools and colleges have paid a
fee to the Copyright Licensing Agency (CLA).

This provides limited rights to make copies of some copyright materials. Copying is restricted to works
published by organisations that have joined the scheme, but this in fact covers most British publishers
(a major exemption, however, is printed music). More restrictive, however, is the scope of the CLA
scheme in applying only to printed materials. The exact allowance for copying (usually photocopying)
differs between institutions and has been varied in recent years. It would be sensible to check your
school’s current agreement to ascertain precisely what is permitted.

If we switch our focus more particularly to ICT, the same principles apply. In practice, a computer
program should be regarded as a literary work, with its authors and distributors enjoying the same
rights and privileges as if they had produced a book. Software publishers are usually very careful to
stipulate what the purchaser has bought. In general, buying software only gains title to a licence to
install and use the software, not to the software itself. The text of the licence will give details about how
many copies of the software may be made.
The starting point is that only one will be allowed, in order that the data may be transferred to a
computer’s hard disk. However, it is often possible to buy, at economic rates, licences to cover a whole
site or multiple stations within a purchasing institution.

Teachers have an obligation to ensure that software licensing conditions are not breached in their
schools. In some circles the conditions under which software has been purchased are broken with the
excuse that illegal copies are not hurting anyone: because the transmission is electronic, no physical
material is appropriated. However, the disk and packaging are negligible elements of the software
bundle. What is at stake is the author’s intellectual property, and an unauthorised copy represents its
theft. Apart from the ethical argument against software theft, there is also the highly pragmatic
consideration that, especially amongst smaller educational publishers, loss of revenue will jeopardise
future development of materials.

Several organisations are involved in monitoring the use of software to ensure that breaches of
copyright are avoided. Foremost of these in the United Kingdom is the Federation Against Software
Theft (FAST), which has its website at http://www.fast.org.uk. FAST do not simply act to police software
use – they are a valuable source of advice and support to computer users.

The internet poses considerable challenges to the enforcement of copyright law, not least in the
difficulty of monitoring all the material that is held and available for download. Nevertheless, copyright
laws do apply: even if copyright ownership is not declared, it should be reckoned as applicable unless
its rightful owner has explicitly placed the material in the public domain.

Teachers will need to determine how best to introduce children to the concept of copyright. While this is
a more urgent question in secondary schools, the increase in computer use, particularly at home, by

primary-age children means that plagiarism from the internet or CD-ROM may well arise in your
classroom. While it is unlikely to be appropriate to introduce a formal system of referencing to young
children, spending time explaining why sources should be acknowledged will only be to their long-term
benefit.

DATA PROTECTION
Teachers and schools need to know of their obligations regarding data confidentiality. These duties are
enshrined in data protection legislation. In essence these date back to the Data Protection Act of 1984,
which was introduced in response to growing concern about the impact of advancing computer
technology on personal privacy. Considering its vintage, it was far-sighted, providing both rights for
individuals and a demand on organisations (or individuals) who hold and use personal information to
adopt responsible practices, even for data such as names and addresses, which may not be
considered particularly sensitive.

Ironically, the Act was limited to information held on computer, which exempted data records held by
traditional methods on hard copy. Another difficulty with the 1984 Act, which perhaps could not be
foreseen, was that the growing internationalisation of computer use through the internet would facilitate
transfer of data to locations outside UK jurisdiction. Partly to improve the regulation in these areas, but
also partly in response to an EU directive issued in October 1995 aimed at harmonising legislation
across the member states, the 1984 Act was replaced by a revision formulated in 1998; the new Act
took effect in March 2000.

Like the original Act, the current legislation is based on eight principles. The details of these have been
rearranged from the earlier requirements, primarily to accommodate the new eighth principle. The list
has been summarised by the Data Protection Commissioner as follows:

Anyone processing personal data must comply with the eight enforceable principles of good practice.
They say that data must be:

 Fairly and lawfully processed.
 Processed for limited purposes.
 Adequate, relevant and not excessive.
 Accurate.
 Not kept longer than necessary.
 Processed in accordance with the data subject’s rights.
 Secure.
 Not transferred to countries without adequate protection.

There remain exemptions to the Act. These include personal data held in connection with domestic or
recreational matters and personal data that the law requires the user to make public, such as the
electoral register. On the other hand, the principles provide for individuals to be able to access data
held about themselves, and to insist on the data being corrected if it is erroneous.
ACTIVITY
1. Before the advent of computers, the question of copyright
was………………………………….................................................................
2. As a teacher you will want to ensure that your position with regard to …………………….. is
sound and recognise that, in the area of moral behaviour as in others,
………………..................................................
for the children in your class.


3. Teachers will need to determine how best to introduce children to the concept of
…………………………………..

SUMMARY
1. The innovatory opportunities for learning, working and leisure presented by ICT have further
consequences for the legal framework in which they operate and the rapid pace of change has
added on an additional layer of complexity to these tangled arrangements.

2. Copying is restricted to works published by organisations that have joined the scheme, but this
in fact covers most British publishers (a major exemption, however, is printed music).

3. Teachers have an obligation to ensure that software licensing conditions are not breached in
their schools.

4. Teachers and schools need to know of their obligations regarding data confidentiality. These
duties are enshrined in data protection legislation.

SELF CHECK EXERCISE


1. Why do many schools and colleges pay a fee to the Copyright Licensing Agency (CLA)?
………………………………………………………………………………………………………………
2. Why are software publishers usually very careful about what they state in their
licences?.....................................................................................................
……………………………………………………………………………………………………………

3. Why should teachers know their obligations regarding data confidentiality?


………………………………………………………………………………………………………………
………………………………………………………………………………………
SUGGESTED ANSWERS TO THE ACTIVITY AND SELF CHECK EXERCISE

ACTIVITY

1. A vexed one for schools


2. Copyright
3. Copyright

SELF CHECK EXERCISE

1. This provides limited rights to make copies of some copyright materials.

2. To stipulate what the purchaser has bought.

3. Because these duties are enshrined in data protection legislation, which was introduced in
response to growing concern about the impact of advancing computer technology on personal
privacy.

UNIT 5. HEALTH ISSUES


Lesson 1: Health And Safety

INTRODUCTION
In this lesson we will examine some of the pragmatic consequences of having computers in the
classroom.

OBJECTIVES
At the end of this lesson you should be able to:
 Know the considerations of health and safety that must apply to the use of computers and other
aspects of ICT in schools.

 Know the current legislation issued by the Health and Safety Executive.

 Know how to position and set up computers in a classroom.

HEALTH AND SAFETY


Foremost among these practical implications are the considerations of health and safety that must apply
to the use of computers and other aspects of ICT in schools. The professional standards for QTS
(DfES 2002) demand that those qualifying to teach select and prepare resources and plan for their
safe and effective organisation, and organise and manage the physical teaching space, tools, materials,
texts and other resources safely and effectively. The current health and safety legislation resides
primarily in documents issued by the Health and Safety Executive, but teachers need to think beyond
the strictly legal to consider what constitutes good practice.

ROOM LAYOUT
Very few primary schools were designed from the outset to accommodate computers, and so in many
cases the siting of equipment is at best a compromise with the overall physical arrangement of the
classroom. The convention in primary schools until recently has been for computers to be dispersed
around the school, usually one, two or possibly three to each classroom. Where they are positioned
amongst the desks, tables, chairs, bookshelves and cupboards will to some extent be dictated by
circumstances, but even so there are guidelines worth following. None of these is more than common
sense, but they bear summarising.

It is important that the computer is positioned so that the monitor is not subject to reflection, either from
artificial light or from sunshine. Not only is glare distracting, but it may also cause headaches and strain
if users are subjected to it for a long time. The other area of the classroom to be avoided, for reasons
of electrical safety, is a site near a sink or other water supply.

The computer should be as close to a power supply as possible. This will minimise confusion if
anything goes wrong and the power has to be turned off quickly. More prosaically, but just as
importantly, keeping the length of electrical flex to a minimum reduces proportionately the risk of
someone tripping over it. If the cable supplied is too long and an alternative of more appropriate length
is unavailable, it is preferable to loop and tape or tie the surplus together (to minimise the build-up of a
magnetic field, the loop should not be too tight, and for the same reason should not be close to the
equipment).


For convenience, many schools have bought trolleys so that computers can be moved easily from
one room to another. These are fine, so long as they are well designed and have sufficient area for the
footprint of your equipment. The dimensions do merit checking, as trolleys may not have convenient
workspace for overflow, especially if the expectation is for more than one child to be using the computer
at any time. Again, attention must be paid to ensure that cable is not flapping loosely around the system.

The question of space often becomes even more acute in those schools that have decided to dedicate
a room to an ICT suite. The challenge is to accommodate children and computers in a room that was in
all likelihood designed only with children in mind. The result can be rows of computers arranged
around the walls, with little space for writing, source material or a co-worker! Additionally, turning to
face the teacher is hard for children.

Possibly associated with a lack of space may be problems caused by excessive heat and humidity.
Computers give off a significant amount of heat and fumes, especially if grouped together, so
consideration of improved ventilation may become important. Air conditioning would be ideal, but of
course is very expensive. The whole process of designing and equipping a computer room is not a job
for amateurs, and most schools now decide to bring in external contractors when contemplating this
development.

SETTING UP THE COMPUTER
Linking the components that make up a single computer system, whether it is new or has been
dismantled for a move, is not difficult, and there is no reason for a teacher to be disconcerted by this
task. However, as with all electrical equipment, there is a potential hazard if care is not taken, and the
obvious precaution is to ensure that the equipment is disconnected at the mains before making any
adjustments to the cables that make up the system.

Computers in operation do not draw much power from the mains, which means that a single
computer, monitor and peripherals can share just one socket without problems. However, a gang, rather
than a splitter (which is likely to become dislodged if leads are accidentally bumped), should provide
this access. Also, one gang per socket is the rule, and gangs should not be daisy-chained. In a room in
which several stations are planned, professional advice should be sought. Both tidiness and safety
dictate that mains leads, and any data cables for networking, should be concealed in trunking, with as
little flex as possible lying on the work surface.

The mains lead to the computer usually provides a kettle-type connection. This is usually effective and
safe as long as the lead has been pushed firmly home onto the projecting pins. Some monitors are
provided with a lead with similar connections so that they can draw power from the computer. However,
it is preferable for the monitor to have its own separately earthed plug into the gang or wall socket. Data
from the computer to the monitor is carried through an RGB cable, RGB representing red, green and
blue, the colours used to make up the picture on the screen.

Danger: Children at Work

Teachers must be clear that, while the ultimate legal duty for health and safety in the classroom lies with
employers, in practice they are responsible for the safe operation of computers in their classrooms. A
cable that has not been correctly attached is a potential hazard, and on a daily basis only the teacher
is in a position to check this. For the same reason, none of the setting up discussed in the previous
section should be delegated to primary-age children.

Apart from electrical safety, computer equipment is often heavy and should be moved by an adult. Yet
children are going to be the main computer users in your classroom, and so need to know how to work

efficiently and safely at the computer themselves. For older Key Stage 2 children this may extend to
switching the computer on themselves, and perhaps even to switching on at the mains too. However,
the teacher must be on hand to supervise either of these tasks.

If power to the monitor is organised from a separate socket, then it is preferable to switch on the monitor
before the computer, to protect the latter from power surges. Likewise, on finishing work, the computer
should ideally be switched off before the peripherals. If work on the computer is to be resumed after a
short while, it is preferable not to switch it off, in order to minimise the cooling and reheating of the
equipment.

Good posture should be encouraged at the computer. Slouching, while generally unattractive and
unhealthy, will also be detrimental to the relationship of the user’s wrists with the keyboard. The lower
arms should be held off the desk and be about horizontal. Children in the classroom are unlikely to be
using a computer continuously for long enough to be susceptible to repetitive strain injury, but if they can
be encouraged to sit properly at the computer from early on, it will stand them in good stead for the
future. The relationship of the dimensions of the chair to the bench is clearly important here, but since
even within a class there will be a considerable range of heights, an ergonomic chair with adjustable
height and backrest is recommended. This will enable children to sit with their feet on the floor and with
their thighs parallel to the ground.

ACTIVITY
1. It is important that the computer is positioned so that the monitor is
not………………………………………………………………………………………………………
2. List some of the effects on users caused by computers that you have read about in this
lesson………………………………………………………………………………..
3. How should they be avoided? ……………………………………………………………

SUMMARY
As a teacher, you are responsible for the safe operation of the computers in the classroom, and as such
should be able to identify potential hazards in order to minimise risks.

You may find it useful to read the excellent leaflet published by the Health and Safety Executive before
making decisions about how to set up and use computers with your class.

Good posture should be encouraged at the computer.

SELF CHECK EXERCISE

1. What should be done in order to minimise the confusion that may occur if anything goes
wrong with the computer?
………………………………………………………………………………………………………
2. What does the term ICT stand for?....................................................
……………………………………………………………………………………………………..
3. Computers give off a significant amount of heat and fumes, especially if grouped together. To
avoid this, what is required to be done?
……………………………………………………………………………………………………….
SUGGESTED ANSWERS TO THE ACTIVITY AND SELF CHECK EXERCISE

ACTIVITY

1. Subject to reflection, either from artificial light or from sunshine.

2. Headache, glare and strain.

SELF CHECK EXERCISE

1. The computer should be placed as close to a power supply as possible, so that the power can
be turned off quickly.

2. Information and Communications Technology

3. Consideration should be given to improved ventilation; air conditioning would be ideal.

