
COMPUTER SYSTEMS 322/13/SO1

THEORY

1.1 INTRODUCTION TO COMPUTER SYSTEMS

At the end of the unit the student should be able to:


1.1.1 identify the major components of a computer (PC & Macintosh):
- Motherboard (ROM, RAM, Processor, Chipsets)
- Casing
- Power Supply Unit
- Peripheral Control Cards
- Monitor
- Keyboard
- Mouse
- Interconnecting Cables
- Serial, Parallel And Game Ports
- describe briefly the function of the units listed above

http://www.comptechdoc.org/hardware/pc/begin/hwcomputer.html
http://www.buildcomputers.net/

1. MOTHERBOARD (ROM, RAM, PROCESSOR, CHIPSETS)

 Motherboard / Mainboard / System Board - The main circuit board of a computer. It
contains all the circuits and components that run the computer.

 The motherboard is the main board of the computer system; it accommodates all the
components of the computer.

 The motherboard is a printed circuit board that is the foundation of a computer, located on the
backside or at the bottom of the computer chassis.
 Thus, a motherboard is the data and power infrastructure for the entire computer

Primary Functions Of The Motherboard

1. The motherboard acts as the central backbone of a computer on which other modular
parts are installed such as the CPU, RAM and hard disks.

2. The motherboard also acts as the platform on which various expansion slots are available
to install other devices / interfaces.


3. The motherboard is also responsible for distributing power to the various components of the
computer.

4. It also coordinates the various devices in the computer and maintains an interface among
them.

Other

1. A motherboard contains a socket into which one or more processors can be attached.
2. It has slots for peripheral cards such as video cards, sound cards or networking cards.
3. It includes a chipset that acts as an interface between all of a computer's subsystems.
4. It holds the ROM or permanent memory used by the BIOS (Basic Input/Output System) --
a bit of memory that doesn't get erased when the computer is turned off because it
contains the instructions that remind the computer what to do when it gets turned back
on.
5. In addition, a motherboard includes a clock generator that is a sort of electronic
metronome that the computer uses to synchronize various operations.
6. It also holds the more active memory RAM that the machine uses when it runs software.
7. Finally, the motherboard has slots for expansion cards and power connectors which
provide power to various components (high speed graphics cards and disk drives get their
power directly from the power supply).

Major components of the motherboard are;

- Processor (CPU),
- RAM slots (SIMM, DIMM, RIMM),
- ROM BIOS,
- Chipset,
- North Bridge,
- South Bridge,
- PCI (Peripheral Component Interconnect) slots,
- AGP (Accelerated Graphics Port) slots,
- IDE Connector slots,
- FDD Connector slots,
- Heat Sink,
- Cooling Fan,
- Power Connector.


Block diagram of a modern motherboard, which supports many on-board peripheral functions as
well as several expansion slots

TYPES OF SYSTEM BOARDS

- Motherboards can be divided into different categories based on the “FORM FACTOR” of
the motherboard.

FORM FACTOR: The form factor of a motherboard describes its general shape, the sorts of cases
and power supplies it can use, and its physical organization (the layout of the motherboard). Over
time, the computer industry has developed a number of different motherboard form factors.


Types of system boards

MOTHERBOARD FORM FACTORS:


 Determines general layout, size and feature placement on the motherboard.
 Form factors such as physical size, shape, component placement, power supply connectors etc.
 Various form factors of motherboards are AT, Baby AT, ATX, Mini-ATX, Micro-ATX, Flex
ATX, LPX, Mini LPX and NLX.
Seven types of system board designs are listed: AT, Baby AT, ATX, BTX, LPX, NLX, and
Backplane. Each has variations.
1. AT
 AT system boards, also called full AT boards, measure 12 inches by 13.8 inches.
 They have two main connectors for power: P8 and P9.
 AT system boards have connections for -5, +5, -12, and +12 volt lines from the power
supply.
 AT boards are larger than the other styles listed here. They may be recognized by their size,
and by the placement of the processor in front of the expansion bus slots, which puts it in the
way of longer cards.
 AT power supplies blow air into the system.
2. Baby AT
 The smaller version of the AT board is called the Baby AT. The board is 13 inches by
8.7 inches.
 The processor is still in the way of the expansion slots.
 A problem with this design is that devices mounted in the case often have to string cables
all the way across the motherboard to connect to it.
3. ATX
 ATX system boards measure 12 inches by 9.6 inches.
 ATX system boards have one main connector for power: P1. The originals had 20 pins,
but later models have 24. (In between, there were models that had a separate 12 volt
connector just for the processor. This was incorporated into the 24 pin design.)
 The processor on an ATX board is beside the expansion slots, not in front of them.
 ATX system boards have connections for +3.3 volts in addition to the voltages available
on the AT board. Newer models of processors typically use less power.
 ATX power supplies blow air out of the system.
 ATX cases may fit Baby AT boards, but the reverse is not true.
 ATX boards have a soft switch. Operating systems such as Windows 98, 2000, and XP
can turn the power off when shutting down.
 Smaller versions of the ATX board are called the Mini ATX, Micro ATX, and Flex
ATX.
4. BTX
 Balanced Technology Extended (BTX) system boards focus on better airflow through
the case, eliminating the need for a fan directly on the processor.
 BTX system boards have one main connector for power, a 24 pin P1.
 BTX system boards are made to support Serial ATA, USB 2, and PCI Express.
 BTX power supplies blow air out of the system. A BTX box's main characteristic is that
all components are oriented to have air passing directly over them.
5. NLX
 The NLX form factor is for low cost, low profile computers.
 NLX system boards have only one expansion slot. It is used for a riser card, which may
contain other expansion slots, and connectors for floppy or hard drives.


 NLX boards will have video circuits included on the board.


6. LPX and Mini-LPX
 LPX boards use a riser card, like the NLX board.
 LPX boards are low end boards, unsuited for newer processors due to heat and size.
 LPX boards are frequently changed by a manufacturer to make them proprietary. This
means that parts can only be obtained from that manufacturer.
7. Backplane Systems
 A backplane is not a motherboard. It typically only holds expansion slots, one of which
will be used for a mothercard.
 Active backplanes have some slots, buffers, and driver circuits.
 Passive backplanes have no circuits, just a slot for the mothercard.
 These systems are not for personal computers, but are used in rack systems.
Types of Motherboard Form Factors
1. AT and Baby AT (Advanced Technology)

• In the early days of the computer, the AT and baby AT form factors were the most common
motherboard form factors. These two variants differ primarily in width: the older full AT board is 12"
wide. It is an obsolete motherboard form factor only found in older machines, 386 class or earlier.
• One of the major problems with the width of this board (aside from limiting its use in smaller cases)
is that a good percentage of the board "overlaps" with the drive bays. This makes installation,
troubleshooting and upgrading more difficult.
• A Baby AT motherboard is 8.5" wide and 13" long. The reduced width means much less overlap in
most cases with the drive bays, although there usually is still some overlap at the front of the case.
• Baby AT motherboards are distinguished by their shape, and usually by the presence of a single, full-
sized keyboard connector soldered onto the board. The serial and parallel port connectors are almost
always attached using cables (ribbons) that go between the physical connectors mounted on the case,
and pin "headers" located on the motherboard. Most of the boards use AT power supplies and the
system units tend to be tower casing.

AT Motherboard

AT motherboard. Note: at the top right hand corner of the board, we have the AT keyboard port.


Advantages of the Baby AT Motherboard Design

1. The size of 8.5” by 10” makes it easier to design smaller desktop PCs
2. Most of the board is easily accessible for upgrades and expansion

Disadvantages of the Baby AT design

1. CPU location - with the processor and heat sink in place, it is difficult to fit a long expansion card
into one of the expansion slots. This is the main problem encountered with the AT-style
motherboard: the CPU can get in the way of the expansion cards.
2. Motherboard mounting - some system cases are not drilled or punched to support all the
mounting holes on a Baby AT motherboard. Therefore, the front edge of the system board tends
to be left unsupported, and over time this edge can warp (bend), leading to loose components and
expansion cards and causing intermittent problems.

2. ATX and Mini ATX Motherboard Form Factors


Full-ATX – (12" wide x 9.6" deep) / Mini-ATX – (11.2" wide x 8.2" deep).

The ATX form factor, created by Intel in 1995, was developed as an evolution of the Baby AT form
factor and was defined to address four areas of improvement:

 Enhanced ease of use


 Better support for current and future I/O
 Better support for current and future processor technology, and
 Reduced total system cost.

The ATX is basically a Baby AT rotated 90 degrees, with a new mounting configuration
for the power supply. The processor is relocated away from the expansion slots (unlike the Baby
AT), allowing them to hold full length add-in cards.

The longer side of the board is used to host more on-board I/O ports. The ATX power supply,
rather than blowing air out of the chassis, as in most Baby AT platforms, provides air-flow
through the chassis and across the processor.


ATX Motherboard

Some Improvements of the ATX Motherboard Form Factor

 Integrated I/O Port Connectors: Baby AT motherboards use headers which stick up from the
board, and a cable that goes from them to the physical serial and parallel port connectors mounted
on to the case. The ATX has these connectors soldered directly onto the motherboard.
 Integrated PS/2 Mouse Connector: ATX motherboards have the PS/2 port built into the
motherboard.
 Reduced Drive Bay Interference: Since the board is essentially "rotated" 90 degrees from the
baby AT style, there is much less "overlap" between where the board is and where the drives are,
making it easier to access the board and giving fewer cooling problems.
 Reduced Expansion Card Interference: The processor socket/slot and memory sockets are
moved from the front of the board to the back right side, near the power supply. This eliminates
the clearance problem with baby AT style motherboards and allows full length cards to be used in
most (if not all) of the system bus slots.
 Better Power Supply Connector: The ATX motherboard uses a single 20-pin connector instead
of the confusing pair of near-identical 6-pin connectors on the baby AT form factor.
 "Soft Power" Support:The ATX power supply is turned on and off using signaling from the
motherboard, not a physical toggle switch. This allows the PC to be turned on and off under
software control, allowing much improved power management.
 3.3V Power Support: The ATX style motherboard has support for 3.3V power from the ATX
power supply.
 Improved Design for Upgradability: In part because it is the newest design, the ATX is the
choice "for the future". More than that, its design makes upgrading easier because of more
efficient access to the components on the motherboard.


3. MicroATX Motherboard Form Factor

This form factor was developed as a natural evolution of the ATX form factor to address new
market trends and PC technologies. MicroATX supports:

 Current processor technologies


 The transition to newer processor technologies
 AGP high performance graphics solutions
 Smaller motherboard size
 Smaller power supply form factor

4. Flex ATX

This is a subset of MicroATX developed by Intel in 1999. It allows more flexible motherboard
design, component positioning and shape, and can be smaller than the regular MicroATX.

 Supports current socketed processor technologies


 Smaller motherboard size
 ATX 2.03 I/O panel
 Same mounting holes as microATX
 Socket only processors to keep the size small

5. LPX and Mini LPX (Low Profile casing eXtended)

Note: These Motherboard Form Factors are obsolete.

The LPX motherboard form factors are designed to be used in small Slimline or "low profile"
cases typically found on low profile desktop systems. The primary design goal behind the LPX
form factor is reducing space usage (and cost).

The most distinguishing feature is the riser card that is used to hold expansion slots. The riser
card of the LPX motherboard form factor is situated at the center of the motherboard. Expansion
cards plug into the riser card; usually, a maximum of just three. This means that the expansion
cards are parallel to the plane of the motherboard.

This allows the height of the case to be greatly reduced, since the height of the expansion cards is
the main reason full-sized desktop cases are as tall as they are. The problem is that you are
limited to only two or three expansion slots!

While the LPX form factor can be used by a manufacturer to save money and space in the
construction of a custom product, these systems suffer from non-standardization, poor
expandability, poor upgradability, poor cooling and difficulty of use for the do-it-yourselfer.


LPX Motherboard Form Factor

LPX Motherboard with a riser card mounted.

The riser card, where adapter cards are connected

6. NLX (New Low profile eXtended) Form Factor

Note: This motherboard form factor is also obsolete.


The need for a modern, small motherboard standard led to the development of the new NLX
form factor. In many ways, NLX is similar to LPX. Also like ATX, the NLX standard was
developed by Intel Corporation in 1998.

NLX still uses the same general design as LPX, with a smaller motherboard and a riser card for
expansion cards. The riser card is pushed to one extreme edge of the motherboard.

NLX makes the following main changes: -

 Revised design to support larger memory modules and modern DIMM memory packaging.
 Support for the newest processor technologies, including the new Pentium II using SEC
packaging.
 Support for AGP video cards.
 Better thermal characteristics, to support modern CPUs that run hotter than old ones.
 More optimal location of CPU on the board to allow easier access and better cooling.
 More flexibility in how the motherboard can be set up and configured.
 Enhanced design features, such as the ability to mount the motherboard so it can slide in or out of
the system case easily.
 Support for desktop and tower cases.

NLX Motherboard Form Factor


THE CHIPSET
 An important component of the motherboard is the chipset.
 The chipset is formed of different integrated circuits (ICs) attached to the motherboard that
control how the components of the system interact with the CPU and the motherboard.
 A chipset is a set of electronic components in an integrated circuit that manages the data flow
between the processor, memory and peripherals.
 The chipset is embedded in the motherboard.
 Chipsets are usually designed to work with a specific family of microprocessors.
 Because it controls communications between the processor and external devices, the chipset plays
a crucial role in determining system performance.
 Because the chipset defines the types and limits of most connections between the CPU and
peripherals, it may be the most important consideration in a motherboard.

• The term chipset often refers to a specific pair of chips on the motherboard: the
northbridge and the southbridge.

THE NORTHBRIDGE CHIP

• links the CPU to very high-speed devices, especially RAM and graphics controllers.
• The Northbridge connects the Southbridge to the CPU and is commonly known as the
memory controller hub.
• The Northbridge handles a computer's faster interaction requirements and controls
communication between the CPU, RAM, ROM, the basic input/output system (BIOS),
the accelerated graphics port (AGP) and the Southbridge chip.
• The Northbridge links I/O signals directly to the CPU.
• The CPU uses the Northbridge frequency as a baseline for determining its operating
frequency.


THE SOUTHBRIDGE CHIP

• connects to lower-speed peripheral buses (such as PCI or ISA).


• The southbridge, which is not directly connected to the CPU, is also known as the
input/output controller hub.
• Southbridge handles the motherboard's slower connections, including input/output (I/O)
devices and computer peripherals like expansion slots and hard disk drives.
• In many modern chipsets, the southbridge contains some on-chip integrated peripherals,
such as Ethernet, USB, and audio devices.

Functions of the chipset

• It controls the bits that flow between the CPU and devices.

• It controls the system memory.
• It manages data transfers between the CPU, memory and input/output devices.
• It provides support for the expansion bus and power management features of the system.
• It controls the memory cache, external buses and peripherals.

The 5 Most Important Things to Consider When Choosing a Motherboard

#1 CPU Socket
#2 Chipsets
#3 Form Factor/Size
#4 Integrated Add-ons
#5 Brand

CHOOSING A MOTHERBOARD

1. Form Factor. The form factor is a set of standards that include the size and shape of the
board, the arrangement of the mounting holes, the power interface, and the type and
placement of ports and connectors. Generally, you should choose the case to fit the mobo, not
vice-versa. But if there is a case that you simply must use (either because it's the one you
happen to have or because you really, really like that case), then make sure the motherboard
you choose is of a compatible form factor.
2. Processor support. You must select a mobo that supports the type and speed of processor
you want to use and has the correct type of socket for that processor.
3. RAM support. Make sure that the motherboard you select supports enough RAM of the type
(DDR-SDRAM, DDR2-SDRAM, RDRAM, etc.) that you want to use. Most motherboards
manufactured as of this writing can support at least 4 Gig of RAM, with DDR2 being the
most popular type because of its speed and relatively low cost. Most DDR motherboards also
support dual channel DDR, which can further improve performance. But to take advantage
of dual-channel, the RAM sticks must be installed in matched pairs, and the mobo must
support it.


4. Chipset. The chipset pretty much runs the show on the motherboard, and some chipsets are
better than others. The chipset cannot be replaced, so the only way to solve problems caused
by a bad chipset is to replace the mobo. Read the reviews of other motherboards using the
same chipset as the one you are considering to see if a lot of people have reported problems
with it.
5. SATA support. There's really very little reason not to use SATA drives these days. They're
priced comparably to EIDE drives, but deliver much higher data transfer rates. But to use SATA,
your motherboard must have SATA support. (Well, you can actually install aftermarket
SATA expansion cards, but why do that on a new computer?)
6. Expansion Slots and Ports. How many of each type of expansion slot are included? Will
they be enough to meet your current and future needs? How about Firewire support? And
does it have enough USB slots for all the peripherals you want to dangle off of it?
7. Reputation. Search the newsgroups to see if others have found the board you are considering
to be a lemon. One excellent Web resource for motherboard research is Motherboards.org.
When choosing a motherboard, reliability is the most important factor. Replacing a failed
motherboard requires essentially disassembling the entire computer, and may also require
reinstalling the operating system and applications from scratch.
8. Compatibility. Most motherboards include drivers for all recent Windows versions, but
check the documentation just to be sure. If you plan to use the board for a computer running
another operating system (Linux, UNIX, BSD, etc.), first check with the motherboard
manufacturer to see if it is compatible, and then search the hardware newsgroups for the OS
you will be using to see how that particular board has worked out for others.
9. On-Board Features. Do you want integrated audio or video? If you don't plan on using the
computer for graphics, multimedia, or gaming, then you may be able to save money by
buying a motherboard with less-than-spectacular integrated audio and/or video.
10. RAID Support. RAID (Redundant Array of Independent Disks) is a set of protocols for
arranging multiple hard drives into "arrays" to provide fault tolerance and/or increase the
speed of data access from the hard drives. Many motherboards have RAID controllers built-
in, saving you the cost of installing an add-on RAID controller.
11. Cost. Even if you are on a budget, the motherboard is not the place to cut corners. Try a less
fancy case, instead. A good motherboard is more important than neon lights. But at the same
time, the fact that one mobo costs twice as much as another doesn't mean it is twice as good.
By searching newsgroups and reading hardware reviews, you're likely to find some
inexpensive boards that perform as well as (or even better than) boards costing a great deal
more.
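
As a rough sketch of how these compatibility checks fit together (the part names and values below
are hypothetical, for illustration only, and not recommendations), a short script can compare a
candidate board against the parts you plan to use:

# Hypothetical motherboard compatibility check (all names and values are illustrative).
def check_compatibility(board, cpu, ram, case):
    # Returns a list of problems; an empty list means the combination looks OK.
    problems = []
    if board["socket"] != cpu["socket"]:
        problems.append(f"CPU socket mismatch: {board['socket']} vs {cpu['socket']}")
    if ram["type"] not in board["ram_types"]:
        problems.append(f"RAM type {ram['type']} not supported by this board")
    if ram["size_gb"] > board["max_ram_gb"]:
        problems.append(f"RAM exceeds the board's maximum of {board['max_ram_gb']} GB")
    if board["form_factor"] not in case["supported_form_factors"]:
        problems.append(f"Form factor {board['form_factor']} does not fit the case")
    return problems

board = {"socket": "LGA775", "ram_types": ["DDR2"], "max_ram_gb": 4, "form_factor": "ATX"}
cpu = {"socket": "LGA775"}
ram = {"type": "DDR2", "size_gb": 4}
case = {"supported_form_factors": ["ATX", "Micro-ATX"]}

print(check_compatibility(board, cpu, ram, case))   # [] -> no problems found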

PRECAUTION MEASURES YOU WOULD CONSIDER WHEN UPGRADING THE MOTHERBOARD

To remove the motherboard, do the following

1. If you have access to your personal data, back it up before continuing this procedure.
2. Disconnect every single cable in the back of the computer. This includes the power cord. Do not
leave anything connected.

3. Push the power button (with the power cord disconnected). This will drain any power built up in
the system. VERY IMPORTANT!!!
4. Open the computer case (you will need to consult the manual on this, since there are numerous
types of cases and each has its own way of opening)
5. DO NOT BLOW ON ANY DUST YOU SEE (Your breath has moisture and electronics do not
like it)
6. Touch a metal part of the computer. Also, if you have a static wrist band, wear it.
7. Remove Memory
8. Remove PCI cards (you may or may not have any PCI cards; if you don't, skip to the next step)
9. Remove video cards (you may not have a video card in the system; it may be built into the
motherboard. If that is the case, just skip to the next step)
10. Disconnect all IDE cables plugged into the motherboard.
11. Disconnect any SATA cables that are connected. (Your system may not have SATA cables)
12. Disconnect the power cable that is plugged into the motherboard.
13. Remove the processor/CPU
14. Disconnect the control panel connector (cable that goes from the power button to the
motherboard)
15. Disconnect any other cables that are connected to the motherboard.
16. You will not have to remove the hard drive or CD-ROM from the computer unless they are in the
way. On most computer cases they are out of the way.
17. On some systems, you have to remove the power supply. If that is the case for your system, then
remove the power supply.

To install the motherboard do the following

1. Disconnect every single cable in the back of the computer. This includes the power cord. Do not
leave anything connected.
2. Push the power button (with the power cord disconnected). This will drain any power built up in
the system.
3. Open the computer case (you will need to consult the manual on this, since there are numerous
types of cases and each has its own way of opening)
4. DO NOT BLOW ON ANY DUST YOU SEE (Your breath has moisture and electronics do not
like it)
5. Touch a metal part of the computer. Also, if you have a static wrist band, wear it.
6. Install the motherboard
6. Install the motherboard
7. Install the processor/CPU
8. Install the power supply if you had to remove it to remove the motherboard
9. Install the power connections to the motherboard from the power supply
10. Connect the control panel connector (cable that goes from the power button to the motherboard)
11. Plug the power cord in the system
12. Turn the computer on.
13. You should see signs of life. Most systems have beep codes. If your system has beep codes, check
the beep code to see what error you are getting. You should be getting one of the following beep
codes... beeps indicating no memory installed, beeps indicating incorrect memory installed, beeps
indicating no video card detected or other memory beeps. (The exact beeps or indications vary
depending on system). Also some systems have small LED lights in the back to indicate what
step in the P.O.S.T. failed.
14. If you are getting the proper beep codes and/or LED lights, then install the memory.
15. The beeps should change or the system may stop beeping


16. If you have an integrated video card, connect the monitor and see if you get video. If you don't,
then you may have a bad monitor, a bad processor/CPU, a cable that is not connected properly, or
another hardware issue.
17. If everything is working OK, then install one piece of hardware at a time and see if the system
works. If the system fails, then you know the last item crashed the system and you will know what
that device is. If you install more than one piece, then you won't know which item caused the issue.
Sometimes more than one device goes bad, or a bad device will take out another device with it.
18. If all devices are connected and everything is working, then the computer is set up correctly.

EXPANSION SLOT

Alternatively referred to as a bus slot or expansion port, an expansion slot is a connection or
port located inside a computer, on the motherboard or riser board, that allows a computer
hardware expansion card to be connected. For example, if you wanted to install a new video card
in the computer, you'd purchase a video expansion card and install that card into the compatible
expansion slot.

Below is a listing of some of the expansion slots commonly found in IBM compatible computers,
as well as other brands of computers, and the devices commonly associated with those slots.

Motherboard expansion slots

Research on these in detail including pictures

 AGP - Video card


 AMR - Modem, Sound card
 CNR - Modem, Network card, Sound card
 EISA - SCSI, Network card, Video card
 ISA - Network card, Sound card, Video card
 PCI - Network card, SCSI, Sound card, Video card
 PCI Express - Video card, Modem, Sound Card, Network Card
 VESA - Video card

Many of the above expansion card slots are obsolete. You're most likely only going to encounter
AGP, PCI, and PCI Express when working with computers today. In the picture below is an
example of what expansion slots may look like on a motherboard. In this picture, there are three
different types of expansion slots: PCI Express, PCI, and AGP.

 Today, AGP has been replaced by PCI Express.


 Today, AMR is no longer found or used with any modern motherboard.
 Today, this slot is no longer found on motherboards and has been replaced with PCI only
motherboards and motherboards with PCIe.


 Although the EISA bus is backward compatible and not a proprietary bus, it never
became widely used and is no longer found in computers today.

(CPU) PROCESSOR
 The processor is the main “brain” of a computer system.
 It performs all of the instructions and calculations that are needed and manages the flow
of information through a computer.
 It comes in different form factors, with each style requiring a particular slot on the
motherboard.
 Major CPU manufacturers are: Intel, AMD, and Cyrix
 Most CPU or processor sockets today are Zero Insertion Force (ZIF) sockets. The ZIF
socket was designed by Intel and includes a small lever for inserting and removing the computer
processor. Using the lever allows you to add and remove a processor without any tools
and requires no force (zero force).
 All processor sockets from the Socket 2 and higher have been a ZIF socket design.
 Processors are continually evolving and becoming faster and more powerful. The speed
of a processor is measured in megahertz (MHz) or gigahertz (GHz).

 There are two fundamental types of CPU architecture: complex instruction set computers (CISC)
and reduced instruction set computers (RISC). CISC is the most prevalent and established
microprocessor architecture, while RISC is a relative newcomer.

CISC and RISC

RISC and CISC Processors


RISC Processor

It is known as Reduced Instruction Set Computer. It is a type of microprocessor that has a limited
number of instructions. They can execute their instructions very fast because instructions are
very small and simple.

RISC chips require fewer transistors, which makes them cheaper to design and produce. In RISC,
the instruction set contains simple and basic instructions from which more complex instructions
can be produced. Most instructions complete in one cycle, which allows the processor to handle
many instructions at the same time.

In this architecture, instructions are register based and data transfer takes place from register to register.

CISC Processor

 It is known as Complex Instruction Set Computer.

 It was first developed by Intel.
 It contains a large number of complex instructions.
 Its instructions are not register based.
 Instructions cannot be completed in one machine cycle.
 Data transfer is from memory to memory.
 A microprogrammed control unit is found in CISC.
 CISC processors also have variable instruction formats.

Difference Between CISC and RISC


Architectural characteristic | Complex Instruction Set Computer (CISC) | Reduced Instruction Set Computer (RISC)
Instruction size and format  | Large set of instructions with variable formats (16-64 bits per instruction) | Small set of instructions with fixed format (32 bit)
Data transfer                | Memory to memory | Register to register
CPU control                  | Mostly micro-coded using control memory (ROM), but modern CISC also use hardwired control | Mostly hardwired, without control memory
Instruction type             | Not register based instructions | Register based instructions
Memory access                | More memory access | Less memory access
Clocks                       | Includes multi-clocks | Includes single clock
Instruction nature           | Instructions are complex | Instructions are reduced and simple

A central processing unit operates according to the instruction set architecture for which it was
designed. The two architectural designs of CPU are RISC (Reduced Instruction Set Computing) and
CISC (Complex Instruction Set Computing). CISC has the ability to execute addressing modes or
multi-step operations within one instruction: it is a CPU design in which one instruction performs
many low-level operations, for example loading from memory, an arithmetic operation and memory
storage. RISC is a CPU design strategy based on the insight that a simplified instruction set gives
higher performance when combined with a microprocessor architecture that executes each
instruction in only a few cycles.

The following sections discuss the RISC and CISC architectures.

 Intel hardware is termed Complex Instruction Set Computer (CISC).

 Apple hardware is termed Reduced Instruction Set Computer (RISC).

What is RISC and CISC Architectures

Hardware designers invent numerous technologies and tools to implement the desired architecture.
A hardware architecture may be designed to favour either the hardware itself or the software it runs,
and according to the application both approaches are used in the required measure. As far as
processor hardware is concerned, there are two concepts for implementing the processor
architecture: the first is RISC and the other is CISC.

CISC Architecture

The CISC approach attempts to minimize the number of instructions per program, sacrificing the
number of cycles per instruction. Computers based on the CISC architecture are designed to
decrease memory cost: large programs need more storage, which increases the memory cost, and
large memory is more expensive. To solve this problem, the number of instructions per program
can be reduced by embedding several operations in a single instruction, thereby making the
instructions more complex.


CISC Architecture

 In CISC, a single MUL instruction loads two values from memory into separate registers, multiplies
them and stores the result back.
 CISC uses the minimum possible number of instructions by implementing complex operations in hardware.
 The Instruction Set Architecture is a medium that permits communication between the programmer and
the hardware. User commands such as executing, copying, deleting or editing data are carried out by
the microprocessor through its instruction set architecture.
 The main keywords used in the above Instruction Set Architecture are as below

Instruction Set: The group of instructions given to execute the program; they direct the computer
by manipulating data. Instructions take the form of an opcode (operational code) and an operand,
where the opcode specifies the operation to perform (load, store, etc.) and the operand specifies the
memory location or register on which the instruction operates.

Addressing Modes: Addressing modes describe the manner in which data is accessed. Depending upon
the type of instruction applied, addressing modes are of various types, such as direct mode, where
the data itself is accessed, or indirect mode, where the location of the data is accessed. Processors
having identical ISAs may be very different in organization, and processors with identical ISAs and
nearly identical organizations can still differ.

CPU performance is given by the fundamental law:

CPU Time = Instruction Count × CPI × Clock Cycle Time


Thus, CPU performance is dependent upon Instruction Count, CPI (Cycles per instruction) and
Clock cycle time. And all three are affected by the instruction set architecture.
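
A quick worked example of this law (the numbers are illustrative only): a program of 2 billion
instructions, run on a processor averaging 2 cycles per instruction with a 1 GHz clock (a 1 ns cycle
time), takes 2,000,000,000 × 2 × 1 ns = 4 seconds. Halving the CPI, or doubling the clock rate,
halves the run time. The same arithmetic in Python:

# Illustrative only: CPU time = instruction count * CPI * clock cycle time
instruction_count = 2_000_000_000   # instructions executed by the program
cpi = 2.0                           # average clock cycles per instruction
clock_hz = 1_000_000_000            # 1 GHz clock, so cycle time = 1 / clock_hz seconds

cpu_time_seconds = instruction_count * cpi * (1.0 / clock_hz)
print(cpu_time_seconds)             # 4.0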

This underlines the importance of the instruction set architecture. There are two prevalent
instruction set architectures: CISC and RISC.

Examples of CISC PROCESSORS

IBM 370/168 – Introduced in 1970, this CISC design is a 32-bit processor with four 64-bit
floating point registers.
VAX 11/780 – A 32-bit CISC processor from Digital Equipment Corporation that supports a large
number of addressing modes and machine instructions.
Intel 80486 – Launched in 1989, this CISC processor has instructions varying in length from 1 to
11 and provides 235 instructions.

CHARACTERISTICS OF CISC ARCHITECTURE

 Instruction-decoding logic is complex.

 One instruction is required to support multiple addressing modes.
 Less chip space is needed for general purpose registers, because instructions can operate
directly on memory.
 Various CISC designs set up special registers for the stack pointer, interrupt handling, etc.
 MUL is referred to as a "complex instruction" because it carries out the load, multiply and store
steps without the programmer coding them separately.

RISC Architecture

RISC (Reduced Instruction Set Computer) is used in portable devices due to its power efficiency,
for example the Apple iPod and the Nintendo DS. RISC is a type of microprocessor architecture that
uses a highly optimized set of instructions. RISC does the opposite of CISC, reducing the cycles per
instruction at the cost of the number of instructions per program. Pipelining is one of the unique
features of RISC: it is performed by overlapping the execution of several instructions in a pipeline
fashion, and it gives RISC a high performance advantage over CISC.
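
A rough sketch of why pipelining helps (purely illustrative, not tied to any particular processor):
with a 3-stage pipeline (fetch, decode, execute) and one new instruction entering every clock,
n instructions finish in n + 2 clocks instead of the 3n clocks they would need without overlap.

# Illustrative pipeline timing: 3 stages, one new instruction starts each clock.
def pipelined_cycles(num_instructions, stages=3):
    # the first instruction needs `stages` cycles; each later one adds a single cycle
    return stages + (num_instructions - 1)

def unpipelined_cycles(num_instructions, stages=3):
    # without overlap, every instruction pays the full `stages` cycles
    return stages * num_instructions

print(pipelined_cycles(10))    # 12 cycles
print(unpipelined_cycles(10))  # 30 cycles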

RISC Architecture

RISC processors use simple instructions that are executed within one clock cycle.

RISC ARCHITECTURE CHARACTERISTICS

 Simple instructions are used in RISC architecture.

 RISC supports a few simple data types, from which complex data types are synthesized.
 RISC utilizes simple addressing modes and fixed length instructions for pipelining.
 RISC permits any register to be used in any context.
 One cycle execution time.
 The amount of work done per instruction is reduced by separating the "LOAD" and "STORE"
instructions.
 RISC contains a large number of registers in order to reduce the number of interactions with
memory.
 In RISC, pipelining is easy as the execution of all instructions is done in a uniform interval of
time, i.e. one clock.
 In RISC, more RAM is required to store assembly level instructions.
 Reduced instructions need fewer transistors in RISC.
 RISC uses the Harvard memory model, i.e. it is a Harvard architecture.
 A compiler is used to convert high-level language statements into machine code.

RISC & CISC Comparison

Comparison between CISC & RISC

MUL instruction is divided into three instructions


“LOAD” – moves data from the memory bank to a register
“PROD” – finds product of two operands located within the registers
“STORE” – moves data from a register to the memory banks
The main difference between RISC and CISC is the number of instructions and their complexity.

RISC Vs CISC

SEMANTIC GAP

Both RISC and CISC architectures have been developed as an attempt to cover the semantic gap.


Semantic Gap

With the objective of improving the efficiency of software development, several powerful
programming languages have come up, viz. Ada, C, C++, Java, etc. They provide a high level of
abstraction, conciseness and power. With this evolution the semantic gap grows. To enable efficient
compilation of high level language programs, CISC and RISC designs are the two options.

CISC designs involve very complex architectures, including a large number of instructions and
addressing modes, whereas RISC designs involve simplified instruction set and adapt it to the real
requirements of user programs.

CISC and RISC Design

Multiplication of two Numbers in Memory

Suppose the main memory is divided into areas that are numbered from row 1:column 1 to
row 5:column 4, and that data is loaded into one of four registers (A, B, C, or D). The task is to
multiply two numbers, one stored in location 1:3 and the other stored in location 4:2, and to store
the result back in 1:3.
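
To make the contrast concrete, the sketch below is illustrative only (the pseudo-operations MUL,
LOAD, PROD and STORE follow the description above, not any real instruction set). It shows the
same multiplication done the CISC way, with a single complex MUL, and the RISC way, with
separate register-based steps:

# Illustrative only: memory cells addressed as "row:column" strings,
# with registers A-D holding working values, as in the example above.
memory = {"1:3": 6, "4:2": 7}
registers = {}

# --- CISC style: one complex instruction loads, multiplies and stores ---
def MUL(dest, src):
    memory[dest] = memory[dest] * memory[src]

# --- RISC style: the same work split into simple register-based steps ---
def LOAD(reg, addr):
    registers[reg] = memory[addr]

def PROD(dest, a, b):
    registers[dest] = registers[a] * registers[b]

def STORE(addr, reg):
    memory[addr] = registers[reg]

# RISC sequence for "multiply 1:3 by 4:2 and store the result back in 1:3"
# (the CISC equivalent is the single call MUL("1:3", "4:2"))
LOAD("A", "1:3")
LOAD("B", "4:2")
PROD("A", "A", "B")
STORE("1:3", "A")
print(memory["1:3"])   # 42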


Multiplication of Two Numbers

The Advantages and Disadvantages of RISC and CISC

The Advantages of RISC architecture

 RISC (Reduced Instruction Set Computing) architecture has a small, simple set of instructions, so
high-level language compilers can produce more efficient code.
 It allows freedom of using the space on microprocessors because of its simplicity.
 Many RISC processors use the registers for passing arguments and holding the local variables.
 RISC functions use only a few parameters, and RISC processors cannot use call instructions;
they therefore use fixed length instructions which are easy to pipeline.
 The speed of the operation can be maximized and the execution time can be minimized.
 Very few instruction formats, a small number of instructions and few addressing modes are
needed.

The Disadvantages of RISC architecture

 Mostly, the performance of RISC processors depends on the programmer or compiler, as the
knowledge of the compiler plays a vital role when converting CISC code to RISC code.
 Rearranging CISC code into RISC code, termed a code expansion, will increase the size.
The quality of this code expansion again depends on the compiler, and also on the
machine's instruction set.
 The first level cache of RISC processors is also a disadvantage: these processors require
large memory caches on the chip itself, and for feeding the instructions they require
very fast memory systems.

Advantages of CISC architecture

 Microprogramming is as easy to implement as assembly language, and much less expensive than
hard wiring a control unit.

 The ease of microcoding new instructions allowed designers to make CISC machines upwardly
compatible:
 As each instruction became more accomplished, fewer instructions could be used to implement a
given task.

Disadvantages of CISC architecture

 The performance of the machine slows down because the amount of clock time taken by different
instructions is dissimilar.
 Only 20% of the existing instructions are used in a typical program, even though there
are various specialized instructions in reality which are not even used frequently.
 The conditional codes are set by CISC instructions as a side effect of each instruction, which
takes time, and, as the subsequent instruction changes the condition code bits, the
compiler has to examine the condition code bits before this happens.

Thus, this section has discussed the RISC and CISC architectures: the features of the RISC and CISC
processor architectures, the advantages and drawbacks of each, and a comparison between the two.
Here is a question for you: what are the latest RISC and CISC processors?

CISC | RISC
Large instruction set | Compact instruction set
Complex, powerful instructions | Simple hard-wired machine code and control unit
Instruction sub-commands micro-coded in on-board ROM | Pipelining of instructions
Compact and versatile register set | Numerous registers
Numerous memory addressing options for operands | Compiler and IC developed simultaneously

CISC | RISC
Emphasis on hardware | Emphasis on software
Includes multi-clock | Single clock
Complex instructions | Reduced instructions only
Memory-to-memory: "LOAD" and "STORE" incorporated in instructions | Register-to-register: "LOAD" and "STORE" are independent instructions
High cycles per second, small code sizes | Low cycles per second, large code sizes
Transistors used for storing complex instructions | Spends more transistors on memory registers

Factors used to rate processors:


– System bus speeds supported; e.g., 1066 MHz

– Processor core frequency in gigahertz; e.g., 3.2 GHz
– Type of RAM, motherboard, and chipset supported

There are many factors which affect how fast your computer can process data and
instructions:

 The amount of RAM memory


 The speed and generation of your CPU (the system clock)
 The size of the Register on your CPU
 The Bus type and speed
 The amount of Cache memory

• The CPU has three basic components:


– Input/output (I/O) unit
– Control unit
– One or more arithmetic logic units (ALUs)
• Registers: high-speed memory used by ALU
• Internal cache: holds data to be processed by ALU


RAM & ROM

Data retention

 This is the most noteworthy difference between these two forms of memory. ROM is a
form of non-volatile memory, which means that it retains information even when the
computer is shut down. RAM, on the other hand, is considered volatile memory: it holds
data only as long as your computer is up and running, and after that the data is lost.

Working type

 You can retrieve and alter data that is stored in RAM, but you cannot do so in the case of
ROM. Data in ROM can only be read, but not altered or modified, hence the name ‘read-
only memory’.

Accessibility

 Information stored in ROM is not as easily accessible as information in RAM. Information


stored in ROM is not easily altered or reprogrammed either, whereas in the case of RAM,
data can be accessed randomly, in any order, from any location.

Speed

 RAM trumps ROM in terms of speed; it accesses data much faster than ROM, and boosts
the processing speed of the computer.

Physical appearance

 RAM is a thin rectangular module that you can find inserted in a slot on the motherboard,
whereas ROM is typically a chip soldered directly onto the motherboard. Furthermore, a RAM
module is usually physically bigger than a ROM chip.

Storage Capacity

 A ROM chip usually stores no more than a few megabytes of data (4 MB ROM is quite
common these days). In contrast, a RAM chip can store as much as 16 Gigabytes’ (or more)
worth of information.

Ease of writing data

 It’s easier to write data in RAM than in ROM, since the latter is a place for storing very
limited, albeit immensely important and permanent information.
 The next time you find yourself in a circle of computer geeks, make sure that you bring
this information to the table. They might already know it, but they’ll certainly be
impressed!


RAM | ROM
Stands for Random Access Memory | Stands for Read Only Memory
RAM is a read and write memory | Normally ROM is read-only memory and cannot be overwritten; however, EPROMs can be reprogrammed
RAM is faster | ROM is relatively slower than RAM
RAM is volatile memory: the data in RAM is lost if the power supply is cut off | ROM is permanent memory: data in ROM stays as it is even if we remove the power supply
There are mainly two types of RAM: static RAM and dynamic RAM | There are several types of ROM: Erasable ROM, Programmable ROM, EPROM, etc.
RAM stores all the applications and data when the computer is up and running | ROM usually stores instructions that are required for starting (booting) the computer
Price of RAM is comparatively high | ROM chips are comparatively cheaper
RAM chips are bigger in size | ROM chips are smaller in size
The processor can directly access the content of RAM | Content of ROM is usually first transferred to RAM and then accessed by the processor; this is done in order to access ROM content at a faster speed
RAM is often installed with large storage capacity | Storage capacity of the ROM installed in a computer is much less than that of RAM

ROM CHIP

 Located on the motherboard


 Contains instructions that can directly be accessed by the CPU.
 It contains basic instructions for booting the computer and loading the operating system.
 The contents cannot be changed or erased.
 It is non-volatile, which means, it retains information even if the computer is powered
off.
 It is sometimes called the firmware, but firmware is actually the software stored in a
ROM Chip.
 ROM is memory from which we can only read, but to which we cannot write.
 This type of memory is non-volatile.
 The information is stored permanently in such memories during manufacture.
 A ROM stores such instructions as are required to start a computer.
 This operation is referred to as bootstrapping.
 ROM chips are not only used in the computer but also in other electronic items like
washing machines and microwave ovens.


Following are the various types of ROM

1. MROM (Masked ROM)


The very first ROMs were hard-wired devices that contained a pre-programmed set of
data or instructions. These kinds of ROMs are known as masked ROMs, which are
inexpensive.

2. PROM (Programmable Read only Memory)


PROM is read-only memory that can be modified only once by a user. The user buys a
blank PROM and enters the desired contents using a PROM programmer. Inside the PROM
chip there are small fuses which are burnt open during programming. It can be
programmed only once and is not erasable.

3. EPROM(Erasable and Programmable Read Only Memory)


The EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40
minutes. Usually, an EPROM eraser achieves this function. During programming, an
electrical charge is trapped in an insulated gate region. The charge is retained for more
than ten years because the charge has no leakage path. For erasing this charge, ultra-
violet light is passed through a quartz crystal window(lid). This exposure to ultra-violet
light dissipates the charge. During normal use the quartz lid is sealed with a sticker.

4. EEPROM(Electrically Erasable and Programmable Read Only Memory)


The EEPROM is programmed and erased electrically. It can be erased and reprogrammed
about ten thousand times. Both erasing and programming take about 4 to 10 ms
(milliseconds). In EEPROM, any location can be selectively erased and programmed.
EEPROMs can be erased one byte at a time, rather than erasing the entire chip. Hence,
the process of re-programming is flexible but slow.

Advantages of ROM

 Non-volatile in nature
 These cannot be accidentally changed
 Cheaper than RAMs
 Easy to test
 More reliable than RAMs
 These are static and do not require refreshing
 Its contents are always known and can be verified


ROM on the Mother Board


• PROM (Programmable ROM) ,
• EPROM (Erasable PROM),
• EPROM can further be classified into the following categories
1) EEPROM(Electrically Erasable PROM),
2) UVEPROM(Ultra Violet Erasable PROM),
• ROM stores information that must be constantly available and cannot be changed.
• Non-volatile: Data is not lost when the computer is switched off.
• PROM requires special hardware called a PROM burner to update the BIOS.
• EPROM also requires special hardware called an EPROM burner to update the BIOS.
• EEPROM erases the data by applying a higher than normal voltage to update the BIOS.
• UVEPROM erases the data by applying UV rays to update the BIOS.

RAM (RANDOM ACCESS MEMORY)


• Temporary storage for data and programs that are being accessed by the CPU
• It is the working memory or primary memory
• RAM (Random Access Memory) is the internal memory of the CPU for storing data,
programs and program results.
• It is read/write memory which stores data while the machine is working.
• Access time in RAM is independent of the address; that is, each storage location inside the
memory is as easy to reach as any other location and takes the same amount of time.
• Data in RAM can be accessed randomly, but it is very expensive.
• RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is
a power failure.
• Hence a backup uninterruptible power supply (UPS) is often used with computers.
• RAM is small, both in terms of its physical size and in the amount of data it can hold.

RAM is of two types


 Static RAM (SRAM)
 Dynamic RAM (DRAM)

Static RAM (SRAM)
• The word static indicates that the memory retains its contents as long as power is being
supplied; however, data is still lost when the power goes down, because SRAM is volatile. SRAM
chips use a matrix of 6 transistors and no capacitors. Transistors do not require power to
prevent leakage, so SRAM does not need to be refreshed on a regular basis.
• Because of the extra space in the matrix, SRAM uses more chips than DRAM for the
same amount of storage space, thus making the manufacturing costs higher. So SRAM is
used as cache memory and has very fast access.
• When cache memory is located on the motherboard, it either is located on individual
chips or on a memory module called cache on a stick.
• In static RAMs, the data storage elements are flip-flops.
• Static RAMs are more expensive than dynamic RAMs, so static RAM appears less often
in PCs.
• Static RAMs maintain data until the power is switched off.

Static RAM Technologies

• Memory caching is a method used to store data or programs in SRAM for quick retrieval.
• Memory caching relies on SRAM chips to store data, and on a cache controller to manage
the storage and retrieval of data from cache memory.
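
A minimal sketch of the caching idea (an illustration of the concept only, not how a real cache
controller is built): recently used data is kept in a small, fast store and looked up there first,
falling back to the larger, slower main memory on a miss.

# Illustrative cache lookup: a small fast dictionary in front of slower main memory.
main_memory = {addr: addr * 2 for addr in range(1024)}   # stands in for slow DRAM
cache = {}                                                # stands in for fast SRAM
CACHE_SIZE = 8

def read(addr):
    if addr in cache:                  # cache hit: fast path
        return cache[addr]
    value = main_memory[addr]          # cache miss: slow path
    if len(cache) >= CACHE_SIZE:       # very crude eviction: drop an arbitrary entry
        cache.pop(next(iter(cache)))
    cache[addr] = value
    return value

read(5)    # miss: fetched from main memory and kept in the cache
read(5)    # hit: served from the cache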

Characteristic of the Static RAM


 It has long life
 There is no need to refresh
 Faster
 Used as cache memory
 Large size
 Expensive
 High power consumption


Dynamic RAM (DRAM)


• DRAM, unlike SRAM, must be continually refreshed in order to maintain the data. This
is done by placing the memory on a refresh circuit that rewrites the data several hundred
times per second. DRAM is used for most system memory because it is cheap and small.
All DRAMs are made up of memory cells which are composed of one capacitor and one
transistor.
Characteristics of the Dynamic RAM
 It has short data lifetime
 Need to be refreshed continuously
 Slower as compared to SRAM
 Used as RAM
 Lesser in size
 Less expensive
 Less power consumption

Dynamic RAM Technologies


• A memory bank is a location on the motherboard that contains slots for memory modules.
• Dynamic RAM (DRAM) needs to be refreshed every few milliseconds.
• DRAM is refreshed by the memory controller.
• In current PCs, DRAM is packaged in DIMM and RIMM modules, which plug
directly into a bank on the motherboard.
• The cost of DRAM is less compared to SRAM.
• DRAM has the following types:
1) SDRAM (Synchronous Dynamic RAM),
2) DDR (Double Data Rate RAM), and
3) RDRAM (Rambus Dynamic RAM).
SDRAM is further classified into the following categories:
1) Regular SDRAM,
2) SDRAM-II,
3) SLDRAM or SyncLink RAM (Synchronous Link Dynamic RAM).

 FPM DRAM:

• Fast page mode dynamic random access memory was the original form of DRAM. It
waits through the entire process of locating a bit of data by column and row and then
reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is
approximately 176 MBps.

 EDO DRAM:

• Extended data-out dynamic random access memory does not wait for all of the
processing of the first bit before continuing to the next one. As soon as the address of the

first bit is located, EDO DRAM begins looking for the next bit. It is about five percent
faster than FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.

 SDRAM:

• Synchronous dynamic random access memory takes advantage of the burst mode
concept to greatly improve performance. It does this by staying on the row containing the
requested bit and moving rapidly through the columns, reading each bit as it goes. The
idea is that most of the time the data needed by the CPU will be in sequence. SDRAM is
about five percent faster than EDO RAM and is the most common form in desktops
today. Maximum transfer rate to L2 cache is approximately 528 MBps.

 DDR SDRAM:

• Double data rate synchronous dynamic RAM is just like SDRAM except that it has
higher bandwidth, meaning greater speed. Maximum transfer rate to L2 cache is
approximately 1,064 MBps (for DDR SDRAM 133 MHZ).

 RDRAM:

• Rambus dynamic random access memory is a radical departure from the previous
DRAM architecture. Designed by Rambus, RDRAM uses a Rambus in-line memory
module (RIMM), which is similar in size and pin configuration to a standard DIMM.
What makes RDRAM so different is its use of a special high-speed data bus called the
Rambus channel. RDRAM memory chips work in parallel to achieve a data rate of 800
MHz, or 1,600 MBps. Since they operate at such high speeds, they generate much more
heat than other types of chips. To help dissipate the excess heat Rambus chips are fitted
with a heat spreader, which looks like a long thin wafer. Just like there are smaller
versions of DIMMs, there are also SO-RIMMs, designed for notebook computers.
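
A quick way to sanity-check the transfer-rate figures quoted above is to treat peak bandwidth as the bus width (in bytes) multiplied by the effective transfer rate. A minimal Python sketch, assuming a 64-bit memory bus; the clock values below are chosen only to reproduce the SDRAM and DDR figures quoted above:

    # Peak bandwidth (MB/s) = bus width in bytes x effective transfers per second (MHz)
    def peak_bandwidth_mbps(bus_width_bits, effective_mhz):
        return (bus_width_bits / 8) * effective_mhz

    print(peak_bandwidth_mbps(64, 66))    # ~528 MBps, the SDRAM figure above
    print(peak_bandwidth_mbps(64, 133))   # ~1,064 MBps, the DDR SDRAM 133 MHz figure above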


MEMORY MODULE FORMATS

MAIN MEMORY-- SIMMS, DIMMS AND OTHER RAM TECHNOLOGIES

1. Single In-line Memory Module, SIMM:


 This type of DRAM or memory package holds up to eight or nine RAM chips (8 in
Macs and 9 in PCs, where the 9th chip is used for parity checking; a short parity example follows this list).
 Another important factor is the bus width, which for SIMMs is 32 bits.

2. Dual In-line Memory Module, DIMM:

 With the increase in data bus width, DIMMs began to replace SIMMs as the
predominant type of memory module.
 The main difference between a SIMM and a DIMM is that a DIMM has separate
electrical contacts on each side of the module, while the contacts on the two sides of a
SIMM are redundant.
 Standard SIMMs also have a 32-bit data bus, while standard DIMMs have a 64-
bit data bus.

3. Rambus In-line Memory Module, RIMM:


 This type of DRAM memory package is essentially the same as a DIMM, but it is
referred to as a RIMM because of its manufacturer (Rambus) and the proprietary slot it requires.

4. Small outline DIMM, SO-DIMM:


 This type of DRAM package is about half the size of the standard DIMM. Being
smaller, they are used in small-footprint PCs including laptops, netbooks, etc.

5. Small outline RIMM, SO-RIMM:


 This type of DRAM package is a small version of the RIMM.
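
The ninth chip on a PC SIMM mentioned in item 1 above stores one parity bit per byte, which lets the memory controller detect single-bit errors. A minimal sketch of even-parity checking in Python (an illustration of the principle only, not the actual hardware logic):

    def parity_bit(byte):
        # Even parity: the bit is 1 when the byte contains an odd number of 1-bits
        return bin(byte).count("1") % 2

    stored = 0b10110010
    stored_parity = parity_bit(stored)      # computed when the byte is written

    corrupted = stored ^ 0b00000100         # flip one bit to simulate a memory fault
    if parity_bit(corrupted) != stored_parity:
        print("Parity error detected")      # any single-bit error is caught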


SIMM Technologies

• SIMM stands for Single In-line Memory Module.


• The SIMM was invented by James Clayton of Wang Laboratories in 1983.
• SIMMs are rated by speed, measured in nanoseconds.
• This speed is a measure of access time.
• Common SIMM speeds are 60, 70, or 80 ns.
• The lower the speed rating, the faster the chip.
• Available in 30-pin and 72-pin sizes.

30 PIN SIMM

• 20 pins for Row or Column addressing.


• 8 pins for additional bus width.
• Measures 3.5 inches wide and 1 inch high.
• A notch on the edge of the package prevents you from inserting a SIMM with the wrong orientation.
• Two holes on either side of the SIMM allow the socket to latch the module securely in place.
• 30-pin SIMMs can transfer 8 bits (9 bits in parity versions) of data at a time.
• 30-pin SIMMs were mostly used in Intel 286, 386 and 486 systems.

72 PIN SIMMs

• Packs a 4-byte-wide bank on a single module.

• A notch in the centre of the SIMM prevents you from accidentally sliding a 30-pin SIMM into a 72-pin socket.
• A 72-pin SIMM will not fit into a 30-pin socket.
• A notch on the left side prevents improper orientation of the SIMM.

• Measures 4.25 inches wide and 0.38 inch thick, and is often double-sided to achieve higher capacities.
• 72-pin SIMMs can transfer 32 bits (36 bits in parity versions) of data at a time.
• 72-pin SIMMs were widely used in Intel 486, Pentium, Pentium Pro and Pentium II systems.
SIMMs use FPM and EDO technologies to access data.
1) FPM (Fast Page Mode): Improved on earlier memory types by sending the row address just once for many accesses to memory near that row (earlier memory types required a complete row and column address for each memory access).
2) EDO (Extended Data Out): EDO is an improvement over FPM memory. It is faster because it eliminates the 10 ns delay the controller previously waited before issuing the next memory address.
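
Because a memory bank must match the width of the CPU's data bus, the figures above (8 bits per 30-pin SIMM, 32 bits per 72-pin SIMM) fix how many modules fill one bank. A small illustrative calculation in Python:

    def simms_per_bank(cpu_bus_bits, simm_width_bits):
        # Number of identical SIMMs needed to fill one memory bank
        return cpu_bus_bits // simm_width_bits

    print(simms_per_bank(32, 8))     # 4 x 30-pin SIMMs per bank on a 32-bit 486
    print(simms_per_bank(64, 32))    # 2 x 72-pin SIMMs per bank on a 64-bit Pentium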

DIMM Technologies

• DIMM stands for Dual In-line Memory Module.


• A DIMM is a memory module that has pins on opposite sides of the circuit board that do
not connect and thus form two sets of contacts.
• Contain 168 or 184 pins.
• Hold between 8 MB and 2 GB of RAM.
• Newer DIMMs hold chips that use synchronous DRAM (SDRAM), which is DRAM that
runs in sync with the system clock and thus runs faster than other types of DRAM.
• The 168 pins are divided into 3 groups.
• The first group runs from pin 1 to pin 10.
• The second group runs from pin 11 to pin 40.
• The third group runs from pin 41 to pin 84, and
• pin 85 onwards continues on the opposite side of the module.
• A symmetrical arrangement of notches prevents improper insertion into its socket.
• Measures 5.25 inches wide and one inch high.
• A DIMM closely resembles a SIMM but is physically larger.

DIMMs use EDO and BEDO technologies to access data.

1) EDO (Extended Data Out): EDO is an improvement over FPM memory. It is faster because it eliminates the 10 ns delay the controller previously waited before issuing the next memory address.
2) BEDO (Burst Extended Data Out): BEDO is a refined version of EDO with a shorter memory access time. BEDO never became popular because Intel did not support it.

RIMM Technologies

• RIMM stands for Rambus In-line Memory Module.

• A RIMM is a memory module that houses Rambus DRAM (RDRAM) chips, which are much faster than SDRAM.
• With RIMMs, each socket must be filled to maintain continuity throughout all sockets.
• Looks similar to a DIMM but is slightly bigger.
• Also called a Direct Rambus memory module.
• They transfer data in 16-bit chunks along a dedicated memory channel.
• Available in 168-pin or 184-pin versions and include a long heat spreader.

5. COOLING SYSTEM

Heat Sinks/Fans:

 As processors, graphics cards, RAM and other components in computers have increased
in speed and power consumption, the amount of heat produced by these components as a
side-effect of normal operation has also increased.

 These components need to be kept within a specified temperature range to prevent


overheating, instability, malfunction and damage leading to a shortened component
lifespan. Other devices which need to be cooled include the power supply
unit, optoelectronic devices such as higher-power lasers and light-emitting diodes (LEDs)
and hard disks.

 A heat sink is a heat exchanger component attached to a device used for passive cooling.
It is designed to increase the surface area in contact with the cooling fluid surrounding it,
such as the air thus allowing it to remove more heat per unit time.

 Other factors which improve the thermal performance of a heat sink are the approach air
velocity, choice of material – usually an aluminum alloy due to its high thermal
conductivity (about 229 W/(m·K)), fin (or other protrusion) design and surface treatment.

 The approach air velocity depends on the attached or nearby fan. With little or no air
flow around the heat sink, heat is removed far less effectively.

 A computer fan is any fan inside, or attached to, a computer case used for active
cooling, and may refer to fans that draw cooler air into the case from the outside, expel
warm air from inside, or move air across a heat sink to cool a particular component.

 A fan is needed to disperse the significant amount of heat that is generated by the
electrically powered parts in a computer. It is important for preventing overheating of the
various electronic components. Some computers will also have a heat sink (a piece of
fluted metal) located near the processor to absorb heat from the processor.
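
As a rough illustration of why a larger surface area and better air flow remove more heat, convective cooling can be approximated by Q = h x A x dT, where h rises with air velocity. The sketch below uses assumed values purely for illustration; it is not a real thermal design calculation:

    def heat_removed_watts(h_w_per_m2k, area_m2, delta_t_k):
        # Simplified convection model: Q = h * A * dT
        return h_w_per_m2k * area_m2 * delta_t_k

    # Assumptions: 30 degC rise above ambient, h ~ 25 W/m^2.K with forced air
    bare_chip = heat_removed_watts(25, 0.002, 30)     # small flat surface, ~1.5 W
    finned_sink = heat_removed_watts(25, 0.05, 30)    # fins multiply the area, ~37.5 W
    print(bare_chip, finned_sink)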

6. COMPUTER CASE

 A computer case (also known as a computer chassis, cabinet, box, tower, enclosure,
housing, system unit or simply case) is the enclosure that contains most of the
components of a computer (usually excluding the display, keyboard and mouse). If you
are building your own computer selecting the case will be one of your first choices to
make: the type of case, its size, orientation, the number of bays you will need etc.
• A computer case is an enclosure that contains the main components of a computer.
• They are usually constructed from steel or aluminum combined with plastic, although
other materials such as wood have been used for specialized units.
• It has attachment points, slots and screws that allow these parts to be fitted onto the case.
• The basic form factors of computer cases include desktop and tower.
• Cases are available in different sizes and shapes; the size and shape of a computer case is
usually determined by the configuration of the motherboard that it is designed to
accommodate, since this is the largest and most central component of most computers.
• The most popular style for desktop computers is ATX, although microATX and similar
layouts became very popular for a variety of uses.

Functions

 It provides a standardized format for the installation of non-vendor-specific hardware.


 Housing for delicate internal components
 Attach internal and external components (chassis)
 It protects the motherboard circuitry and hardware from dust, spills etc
 Helps to keep the hardware cool.

As an Industry Standard
 Cases help standardize the safe installation of different vendors’ hardware. Samsung and Western
Digital each make hard drives with different capabilities, for example, but because they are a
standard size they will fit equally well into the same 3.5-inch hard drive location (or bay) within a
case. Such standardization allows for customization and lower production costs.
As Protection
 The exposed circuitry of a motherboard can malfunction if it gets bumped, has something spilled on
it, or collects too much dust. Cases protect delicate internal components such as the motherboard from
such dangers. Additionally, case fans help air flow and prevent components from overheating.

Factors To Consider When Choosing A Computer Case

o The size of the motherboard


o The space available inside
o The number of external or internal components

Three varieties of case types are listed:

 Desktop - typically have four drive bays, about six expansion slots, and were meant to sit
horizontally on a desk. The text puts compact cases (low profile cases) in this category.
They are typically smaller, and meant for low cost, less powerful computers.
 Tower - typically sits vertically on a desk or on the floor (bad idea: static from the
carpet). These come in a variety of sizes, the larger ones generally for more powerful
computers and servers.
 Notebook/Laptop - used for portable computers. These vary in thickness and weight,
number of slots and ports, and processor power. The size of the case may require that the
power supply be external, and in some cases that peripheral devices are external as well.

Case Sizes
o Cases can come in many different sizes (known as form factors).
o The size and shape of a computer case is usually determined by the form factor of
the motherboard, since it is the largest component of most computers.
o Consequently, personal computer form factors typically specify only the internal
dimensions and layout of the case.
o For example, a case designed for an ATX motherboard and power supply may
take on several external forms, such as a vertical tower (designed to sit on the
floor, height > width) or a flat desktop (height < width) or pizza box (height ≤ 2
inches, designed to sit on the desk under the computer's monitor). Full-size tower
cases are typically larger in volume than desktop cases, with more room for drive
bays and expansion slots.
 Desktop cases—and mini-tower cases designed for the reduced microATX form factor—
are popular in business environments where space is at a premium.

Major Component Locations:

1. The motherboard is usually screwed to the case along its largest face, which could be the
bottom or the side of the case depending on the form factor and orientation.
2. Form factors such as ATX provide a back panel with cut-out holes to expose I/O ports
provided by integrated peripherals, as well as expansion slots which may optionally
expose additional ports provided by expansion cards.
3. The power supply unit is often housed at the top rear of the case; it is usually attached
with four screws to support its weight.
4. Most cases include drive bays on the front of the case; a typical ATX case includes a
5.25" bay (used mainly for optical drives) and 3.5" bays used for hard drives, floppy
drives, and card readers.
5. Buttons and LEDs are typically located on the front of the case; some cases include
additional I/O ports, temperature and/or processor speed monitors in the same area
6. Vents are often found on the front, back, and sometimes on the side of the case to allow
cooling fans to be mounted via surrounding threaded screw holes.

7. POWER SUPPLY UNIT

• The power supply is used to connect all of the parts of the computer to electrical power.
• Computer power supply is the electric source of all components of a computer.
• The rated output capacity of a PSU should usually be about 40% greater than the calculated
system power consumption needs obtained by adding up all the system components. This protects
against overloading the supply, and guards against performance degradation.

Function of computer power supply

 The function of the power supply unit is to convert the electrical power (AC) that comes from
the wall socket to a suitable type and voltage (DC) so that each component of a computer
works properly.
 The power supply converts the alternating current (AC) from your mains (110V
input or 220V input) to the direct current (DC) needed by the computer.
 The power supply is used to connect all of the parts of the computer to electrical
power.
 Main source of power for the computer
 Supply power to the motherboard
 Converts mains AC to low-voltage regulated DC power for the internal components
of a computer.
 Regulates the voltage to eliminate spikes and surges common in most electrical systems
 Regulates power to the required voltages needed by various computer components.

 Lack of a proper power supply will damage a computer system. The power supply
receives 120 or 230 V AC and converts it into 3.3 V, 5 V and 12 V DC.

Why different converted power?

 That is because the components of a computer system do not all need the same voltage.
 Regulates power to the required voltages needed by various computer components.
 For example, the motherboard and cards use 3.3 V, while the most power-demanding parts such as
fans and drives need 12 V to operate.
 Power supplies - often referred to as switching power supplies - use switcher
technology to convert the AC input to lower DC voltages. The typical voltages
produced are 3.3 volts, 5 volts and 12 volts.

Power Supply Color Codes

Voltage Color Use


+12V Yellow Disk drive, fans, cooling devices
-12V Blue Serial ports
+3.3V Orange Newer CPUs, video cards
+5V Red Motherboard, Motherboard components
0V Black Ground, used to complete circuits with other voltages

Colour codes for the ATX power supply cables
PIN SIGNAL COLOUR
1 3.3V Orange
2 3.3V Orange
3 GROUND Black
4 5V Red
5 GROUND Black
6 5V Red
7 GROUND Black
8 POWER GOOD Grey
9 +5VSB (Standby) Purple
10 +12V Yellow

ATX Main Power Supply Connector Pinout (Wire Side View)


Color Signal Pin Pin Signal Color
Orange* +3.3V 11 1 +3.3V Orange
Blue –12V 12 2 +3.3V Orange
Black GND 13 3 GND Black
Green PS_On 14 4 +5V Red
Black GND 15 5 GND Black
Black GND 16 6 +5V Red
Black GND 17 7 GND Black
White –5V 18 8 Power_Good Gray
Red +5V 19 9 +5VSB (Standby) Purple
Red +5V 20 10 +12V Yellow


 The 3.3-volt and 5-volt outputs are typically used by digital circuits, while the 12-volt output is used to
power fans and motors in disk drives. The main specification of a power supply is in
watts. A watt is the product of the voltage in volts and the current in amperes or amps.

The form factor of the power supply refers to its general shape and dimensions. The form factor
of the power supply must match that of the case that it is supposed to go into, and the
motherboard it is to power.

Power Supply Wattage:


A 400-watt switching power supply will not necessarily use more power than a 250-watt supply.
A larger supply may be needed if you use every available slot on the motherboard or every
available drive bay in the personal computer case. It is not a good idea to have a 250-watt supply
if you have 250 watts total in devices, since the supply should not be loaded to 100 percent of its
capacity.

PC Power & Cooling, Inc. publishes typical power consumption values (in watts) for the common
items in a personal computer.

For overall power supply wattage, add the requirement for each device in your system, then
multiply by 1.5. The multiplier takes into account that today’s systems draw disproportionately on
the +12V output. Furthermore, power supplies are more efficient and reliable when loaded to
30% - 70% of maximum capacity.
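
A quick sizing calculation following the rule of thumb above: sum the requirement of each device, then multiply by 1.5 so the supply runs in its efficient 30% - 70% load range. The component wattages below are assumed example values, not manufacturer figures:

    components = {            # assumed example loads in watts
        "motherboard and CPU": 120,
        "graphics card": 75,
        "hard drive": 10,
        "optical drive": 15,
        "fans and peripherals": 20,
    }

    total_draw = sum(components.values())     # 240 W in this example
    recommended = total_draw * 1.5            # apply the 1.5 multiplier
    print(f"Choose a supply of at least {recommended:.0f} W")   # 360 W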

Power Connectors
1. Molex connector
2. Berg connector
3. 20-pin or 24-pin main motherboard power connector
4. 4-pin or 8-pin CPU power connector

NB: do not force connectors when connecting them

Comparison of Power Supply Form Factors

The table below is a summary comparison of the different power supply form factors. It shows
their dimensions, the usual style of system in which they are used, and what sort of motherboard
connectors they provide. It also shows for each power supply form factor, the typical cases and
motherboards that are used with it. These lists should not be considered exhaustive. Also bear in
mind that some combinations are much more common than others. "AT/ATX Combo" refers to

cases designed to fit either AT or ATX power supplies, and motherboards designed with both AT
and ATX style connectors.

Note: SFX and ATX power supplies can generally be interchanged in systems sized to hold them
because their 20-pin main motherboard connectors are almost identical. They are not however
exactly identical: the SFX power supply does not provide the -5 V signal that may be required
for some systems that use certain ISA bus expansion cards.

Form Factor | Dimensions (W x D x H, mm) | Usual Style(s) | Typical Motherboard Connectors | Match to Case Form Factor | Match to Motherboard Form Factor

PC/XT | 222 x 142 x 120 | Desktop | AT Style | PC/XT | PC/XT

AT | 213 x 150 x 150 | Desktop or Tower | AT Style | AT | AT, Baby AT

Baby AT | 165 x 150 x 150 | Desktop or Tower | AT Style | Baby AT, AT, AT/ATX Combo | AT, Baby AT, AT/ATX Combo

LPX | 150 x 140 x 86 | Desktop | AT Style | LPX, some Baby AT, AT/ATX Combo | LPX, AT, Baby AT, AT/ATX Combo

ATX/NLX | 150 x 140 x 86 | Desktop or Tower | ATX Style | ATX, Mini-ATX, Extended ATX, NLX, microATX, AT/ATX Combo | ATX, Mini-ATX, Extended ATX, NLX, microATX, FlexATX

SFX | 100 x 125 x 63.5 * | Desktop or Tower | ATX Style | microATX, FlexATX, ATX, Mini-ATX, NLX | microATX, FlexATX, ATX, Mini-ATX, NLX

WTX | 150 x 230 x 86 (single fan), 224 x 230 x 86 (double fan) | Tower | WTX Style | WTX | WTX

5. EXPANSION SLOTS
 An expansion slot is a socket on the motherboard that is used to insert an expansion card
(or circuit board), which provides additional features to a computer such as video, sound,
advanced graphics, Ethernet or memory.
 Alternatively referred to as a bus slot or expansion port, an expansion slot is a
connection or port located inside a computer on the motherboard or riser board that
allows a computer hardware expansion card to be connected. For example, if you wanted

to install a new video card in the computer, you'd purchase a video expansion card and
install that card into the compatible expansion slot.
 The expansion card has an edge connector that fits precisely into the expansion slot as
well as a row of contacts that is designed to establish an electrical connection between the
motherboard and the electronics on the card, which are mostly integrated circuits.
Depending on the form factor of the case and motherboard, a computer system generally
can have anywhere from one to seven expansion slots. With a backplane system, up to 19
expansion cards can be installed.

Types of expansion slots

 AGP - Video card


 AMR - Modem, Sound card
 CNR - Modem, Network card, Sound card
 EISA - SCSI, Network card, Video card
 ISA - Network card, Sound card, Video card
 PCI - Network card, SCSI, Sound card, Video card
 PCI Express - Video card, Modem, Sound Card, Network Card
 VESA - Video card

Many of the above expansion card slots are obsolete. You're most likely only going to encounter
AGP, PCI, and PCI Express when working with computers today. In the picture below is an
example of what expansion slots may look like on a motherboard.

PCI Express
 PCI Express (or PCIe) is the newest standard for expansion cards on personal computers.
 PCI Express is meant to replace older standards like PCI and AGP, mentioned below.
 PCIe provides significantly more bandwidth, allowing for higher performance video
cards and network cards.
 Video cards in particular are the most common consumer use of these slots, since they
need high bandwidth for maximum 3D gaming and graphics performance.
 While PCI Express is meant to replace the AGP and PCI standards, many PCI cards are
still being manufactured, particularly for expansion cards which do not need the increase
in bandwidth provided.
 PCI Express is now dominant, however, and motherboards are being manufactured with
fewer PCI slots and more PCIe slots.
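
To put the bandwidth difference into rough numbers: classic 32-bit/33 MHz PCI is a shared bus with a theoretical peak of about 133 MB/s, while PCI Express gives each slot dedicated lanes (roughly 250 MB/s per lane for PCIe 1.x, doubling in later generations). A simple comparison sketch; the per-lane figure is an assumption for first-generation PCIe:

    def pci_bandwidth_mbs():
        # Classic PCI: 32-bit bus at 33 MHz, shared by every card on the bus
        return (32 / 8) * 33

    def pcie_bandwidth_mbs(lanes, mbs_per_lane=250):
        # PCI Express: dedicated lanes per slot (PCIe 1.x ~250 MB/s per lane)
        return lanes * mbs_per_lane

    print(pci_bandwidth_mbs())        # ~132 MB/s shared between all PCI cards
    print(pcie_bandwidth_mbs(16))     # ~4000 MB/s for a x16 graphics slot
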
PCI
 PCI (Peripheral Component Interconnect) is not to be confused with PCI Express, which
is meant to replace it.
 Unlike PCI Express, PCI is an older standard which provides less bandwidth for
expansion cards.
 In spite of the fact that the standard was created in 1993, new motherboards still ship with
PCI slots for compatibility purposes.
 PCI cards are still very common for expansion cards that do not need high bandwidth,
such as most sound cards, network cards, USB expansion cards for additional
connections, and more.
 Since newer motherboards still tend to come with PCI slots for compatibility, PCI cards
will function on most computers.
 In contrast, PCI Express cards will only function on newer computers.
 Manufacturers can release expansion cards which function with most computers if the
cards are PCI.
AGP
 The AGP (Accelerated Graphics Port) expansion slot standard was introduced when
video cards needed more bandwidth for performance than was provided by PCI.
 As suggested by the title, AGP slots are used for video cards.
 However, AGP has been largely phased out in favour of the PCI Express expansion slot
standard.
 Unlike AGP, PCI Express provides higher bandwidth for other types of expansion cards
that could use it, such as some newer, high-performance sound and network cards.
ExpressCard & PC Card (or PCMCIA)
 These standards are designed to be used with portable computers such as laptops.
 ExpressCard is the successor to PC Card (also known as PCMCIA), and, like PCIe over
PCI, has more bandwidth.
 Unlike PCI, these types of cards are hot-pluggable (which means you can plug them in
while your laptop is running, without shutting it down first).
ISA
 ISA (Industry Standard Architecture) is another type of expansion slot you may have
heard of.
 It was the predecessor to PCI and you'll only find it on much older computers.

6. PERIPHERAL CONTROL CARDS

• A controller card can be an electronic integrated chip, an expansion card or a stand-alone
device that acts as an interface with peripheral devices. The basic purpose of
the controller card is to provide additional features to the motherboard in managing all
the connected internal and external devices.

Typically, all devices require a controller card to work efficiently. If no internal controller
slot is available, an external controller card is installed instead. A controller
card may include a memory controller, storage controller, input device controller and
many others, each of which serves its distinct device type.

• This term is used to describe important tools that allow your computer to connect and
communicate with various input and output devices. The term “card” is used because
these items are relatively flat in order to fit into the slots provided in the computer case.
A computer will probably have a sound card, a video card, a network card and a
modem.

Expansion cards

• Most computers have expansion slots on the motherboard that allow you to add various
types of expansion cards. These are sometimes called PCI (peripheral component
interconnect) cards. You may never need to add any PCI cards because most
motherboards have built-in video, sound, network, and other capabilities.
• However, if you want to boost the performance of your computer or update the
capabilities of an older computer, you can always add one or more cards. Below are some
of the most common types of expansion cards.

Video card

• The video card is responsible for what you see on the monitor. Most computers have a
GPU (graphics processing unit) built into the motherboard instead of having a separate
video card. If you like playing graphics-intensive games, you can add a faster video card
to one of the expansion slots to get better performance.

Sound card

• The sound card—also called an audio card—is responsible for what you hear in the
speakers or headphones. Most motherboards have integrated sound, but you can upgrade
to a dedicated sound card for higher-quality sound.


Network card

• The network card allows your computer to communicate over a network and access the
Internet. It can either connect with an Ethernet cable or through a wireless connection
(often called Wi-Fi). Many motherboards have built-in network connections, and a
network card can also be added to an expansion slot.

Expansion cards can provide various functions including:

 Sound
 Modems
 Network
 Interface adapters
 TV and radio tuning
 Video processing
 Host adapting such as redundant array of independent disks or small computer system interface
 Solid-state drive
 Power-on self-test
 Advanced multirate codec
 Basic input/output system (BIOS)
 Expansion read-only memory (ROM)
 Security devices
 RAM memory

Older expansion cards also included memory expansion cards, clock/calendar cards, hard disk
cards, compatibility cards for hardware emulation, and disk controller cards. The Altair 8800 was
the first slot-type expansion card bus added to a microcomputer. It was developed in 1974-1975
by MITS.

The expansion slot opening is generally located on the back of a PC and provides an electrical
connection to the motherboard for an expansion card. Screws are then used to attach the card to
the slot for added security.

7. MONITOR

• A visual display unit, computer monitor or just display, is a piece of electrical equipment, usually
separate from the computer case, which displays visual images without producing a permanent
computer record. In the 1980s the display device was usually a CRT, but by the 2000s flat
panel displays such as TFT LCDs had largely replaced the bulkier, heavier CRT screens. Multi-
monitor setups are quite common in the 2010s, as they enable a user to display multiple programs
at the same time (e.g., an email inbox and a word processing program). The display unit houses
an electronic circuitry that generates its picture from signals received from the computer. Within
the computer, either integral to the motherboard or plugged into it as an expansion card, there is
pre-processing circuitry to convert the microprocessor's output data to a format compatible with
the display unit's circuitry. The images from computer monitors originally contained only text,
but as graphical user interfaces emerged and became common, they began to display more images
and multimedia content. The term "monitor" is also used, particularly by technicians in
television broadcasting, where the broadcast picture is displayed on a highly standardized
reference monitor for confidence-checking purposes.

8. KEYBOARD

• A keyboard is an arrangement of buttons that each correspond to a function, letter, or number.


They are the primary devices used for inputting text. In most cases, they contain an array of keys
specifically organized with the corresponding letters, numbers, and functions printed or engraved
on the button. They are generally designed around an operator's language, and many different
versions for different languages exist. In English, the most common layout is the QWERTY
layout, which was originally used in typewriters. They have evolved over time, and have been
modified for use in computers with the addition of function keys, number keys, arrow keys, and
keys specific to an operating system. Often, specific functions can be achieved by pressing
multiple keys at once or in succession, such as inputting characters with accents or opening a task
manager. Programs use keyboard shortcuts very differently and all use different keyboard
shortcuts for different program specific operations, such as refreshing a web page in a web
browser or selecting all text in a word processor. In addition to the alphabetic keys found on a
typewriter, computer keyboards typically have a numeric keypad and a row of function keys
and special keys, such as CTRL, ALT, DEL and Esc.

9. MOUSE

• A computer mouse is a small handheld device that users hold and slide across a flat surface,
pointing at various elements of a graphical user interface with an on-screen cursor, and selecting
and moving objects using the mouse buttons. Almost all modern personal computers include a
mouse; it may be plugged into a computer's rear mouse socket or a USB port, or, more
recently, may be connected wirelessly via a USB dongle or Bluetooth link. In the past, mice had
a single button that users could press down on the device to "click" on whatever the pointer on the
screen was hovering over. Modern mice have two, three or more buttons, providing a "right click"
function button on the mouse, which performs a secondary action on a selected object, and a
scroll wheel, which users can rotate using their fingers to "scroll" up or down. The scroll wheel
can also be pressed down, and therefore be used as a third button. Some mouse wheels may be
tilted from side to side to allow sideways scrolling. Different programs make use of these
functions differently, and may scroll horizontally by default with the scroll wheel, open different
menus with different buttons, etc. These functions may be also user-defined through software
utilities. Mice traditionally detected movement and communicated with the computer with an
internal "mouse ball", and used optical encoders to detect rotation of the ball and tell the
computer where the mouse has moved. However, these systems suffered from poor durability
and accuracy, and required internal cleaning. Modern mice use optical technology to directly trace
movement of the surface under the mouse and are much more accurate, durable and almost
maintenance free. They work on a wider variety of surfaces and can even operate on walls,
ceilings or other non-horizontal surfaces.

10. DOCKING STATION

 Alternatively referred to as a universal port replicator, a docking station is a hardware


device that allows portable computers to connect with other devices with little or no
effort. Docking stations enable users with a laptop computer to convert it into a desktop
computer when at the office or at home.
 For example, a user could use their laptop on the road, and then back at the office they
could connect the laptop to the docking station to use their monitor, speakers, and office
printer.

 Dock - Term used to describe the process of connecting a portable computer to a docking station.
 Undock (cold dock) - Term used to describe the process of disconnecting a portable computer
from a docking station after it has been shut down.

 A docking station is a hardware frame and set of electrical connection interfaces that
enable a notebook computer to effectively serve as a desktop computer . The interfaces
typically allow the notebook to communicate with a local printer, larger storage or
backup drives, and possibly other devices that are not usually taken along with a
notebook computer. A docking station can also include a network interface card ( NIC )
that attaches the notebook to a local area network ( LAN ).
 Variations include the port replicator, an attachment on a notebook computer that
expands the number of ports it can use, and the expansion base, which might hold a CD-ROM
drive, a floppy disk drive, and additional storage.
 There is a difference between docking station and port replicator.
 I use the term docking station for a box which contains slots for interface cards and space
for a hard disk, etc. This box can be permanently connected to a PC. A port replicator is
just a copy of the laptop's ports, which may be connected permanently to a PC.


11. SERIAL, PARALLEL AND GAME PORTS

Computer Ports
 The peripheral hardware mentioned above must attach to the computer so that it can
transmit information from the user to the computer (or vice versa). There are a variety of
ports present on a computer for these attachments. These ports have gradually changed
over time as computers have changed to become faster and easier to work with. Ports
also vary with the type of equipment that connects to the ports. A computer lab manager
should become familiar with the most common ports (and their uses), as described below.

Serial Port.

 This port for use with 9 pin connectors is no longer commonly used, but is found on
many older computers. It was used for printers, mice, modems and a variety of other
digital devices.

Parallel Port. This long and slender port is also no longer commonly used, but was the most
common way of attaching a printer to a computer until the introduction of USB ports (see
below). The most common parallel port has holes for 25 pins, but other models were also
manufactured.

VGA. The Video Graphics Array port is found on most computers today and is used to connect
video display devices such as monitors and projectors. It has three rows of holes, for a 15 pin
connector.

PS/2. Until recently, this type of port was commonly used to connect keyboards and mice to
computers. Most desktop computers have two of these round ports for six pin connectors, one
for the mouse and one for the keyboard.

USB. The Universal Serial Bus is now the most common type of port on a computer. It was
developed in the late 1990s as a way to replace the variety of ports described above. It can be
used to connect mice, keyboards, printers, and external storage devices such as DVD-RW drives
and flash drives. It has gone through three different models (USB 1.0, USB 2.0 and USB 3.0),
with USB 3.0 being the fastest at sending and receiving information. Older USB devices can be
used in newer model USB ports.

[Figure 1 - Back of Desktop Computer Showing Ports: serial port and parallel port, PS/2 ports, VGA port, USB ports, TRS (mini-jack) ports, phone/modem jack and Ethernet port.]
TRS. TRS (tip, ring and sleeve) ports are also known as ports for mini-jacks or audio jacks.
They are commonly used to connect audio devices such as headphones and microphones to
computers.
Ethernet. This port, which looks like a slightly wider version of a port for a phone jack, is used
to network computers via category 5 (CAT5) network cable. Although many computers now
connect wirelessly, this port is still the standard for wired networked computers. Some
computers also have the narrower port for an actual phone jack. These are used for modem
connections over telephone lines.

HDMI - Allows you to connect your computer to a High Definition display or TV.

eSATA - These ports allow you to connect an external SATA hard drive to your computer.
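
The practical difference between the USB generations mentioned above is raw signalling speed: roughly 12 Mbit/s for USB 1.x, 480 Mbit/s for USB 2.0 and 5 Gbit/s for USB 3.0. A small sketch estimating the theoretical best-case time to move a 1 GB file at each rate (real-world throughput is lower because of protocol overhead):

    USB_SPEEDS_MBITS = {"USB 1.1": 12, "USB 2.0": 480, "USB 3.0": 5000}   # megabits per second

    def transfer_time_s(file_size_mb, link_speed_mbits):
        # Convert megabytes to megabits, then divide by the link speed
        return (file_size_mb * 8) / link_speed_mbits

    for name, speed in USB_SPEEDS_MBITS.items():
        print(f"{name}: {transfer_time_s(1000, speed):.1f} s for a 1 GB file")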


Back side connectors of PC

 SMPS. The Switch Mode Power Supply uses electronic circuitry that converts the AC input
voltage to different values of regulated DC supply, which are fed into various color-coded
wires fixed to connectors.
 SMPS FAN. The fan is fixed inside the SMPS and is used to radiate the internal heat of
SMPS to outside.
 Power In Socket. This socket is used to input 220V AC to the PC from mains supply
when the computer switch on the front side is pressed.
 PS-2 Port. You can see two different colored 6-pin round connectors. These
connectors are used to connect the input devices, keyboard and mouse. Color coding defines
the connector type: the purple connector is dedicated to the keyboard and green is used for
the mouse.
 USB Port. The full form is Universal Serial Bus, and it is used to connect various input and
output devices like mice, keyboards, printers, webcams, etc. USB 3.0 is the latest
version and offers the highest data transfer speed.
 DVI Port. Digital Video Interface is a high-speed serial link for connecting output
display Devices.

 HDMI Port. HDMI stands for High Definition Multimedia Interface. This is a newer
interface that carries high definition video and multi-channel sound. You can
connect HDMI-enabled Blu-ray devices, LED displays, etc.
 15-pin Female VGA Port. This is used to connect display devices like Monitor / LCD /
LED Display.
 LAN Port. The LAN or network port is used to connect to other devices and computers
in a network.
 Audio Ports. Generally there are three audio ports on the back side of a PC, aligned either
vertically or horizontally. The green port is dedicated to headphones or speakers, the blue
port is marked Line-in, and a microphone can be inserted into the pink port.
 Expansion Slots: These expansion slots are used to connect add-on cards to increase the
capabilities of the motherboard.

Front Side buttons on PC

 DVD-Writer. The top slot of the cabinet is reserved for a CD-ROM or DVD-writer.


 Power-LED: The LED glows and indicates that the Input Power is ON
 HDD LED: This LED glows while the hard disk drive is being accessed, indicating that the
hard disk drive is in use.

 Reset Switch: This switch is quite handy when the computer hangs and you are not able to
work on it. Just press this switch and the computer will re-boot.
 Front USB. The cabinet provides front-side USB ports, since it is quite awkward to get to
the back side of the computer again and again.
 Front Audio Ports: The microphone and headphone ports at the front are easy for the user
to reach.
 Power Switch. It is used to switch on the computer.

HARD DISK INTERFACE(S)

 There are variants of each interface, and this article will not do justice to the different
types of ATA, SATA and SCSI interfaces. Thus, it will only highlight the more common
interfaces as used by the home user.
 SAS Serial Attached SCSI.
 SCSI Small Computer Systems Interface.
 SATA Serial ATA or Serial Advanced Technology Attachment.
 IDE Integrated/Intelligent Drive/Device Electronics.

ATA (IDE, ATAPI, PATA)

ATA is a common interface used in many personal computers before the emergence of SATA. It
is the least expensive of the interfaces.
Disadvantages
 Older ATA adapters will limit transfer rates according to the slower attached device
(debatable)
 Only ONE device on the ATA cable is able to read/write at one time
 Limited standard cable length (up to 18 inches / 46 cm)
Advantages
 Low costs
 Large capacity

SATA

SATA is basically an advancement of ATA.


Disadvantages
 Slower transfer rates compared to SCSI
 Not supported in older systems without the use of additional components
Advantages
 Low costs
 Large capacity
 Faster transfer rates compared to ATA (difference is marginal at times though)
 Smaller cables for better heat dissipation


SCSI

SCSI is commonly used in servers, and more in industrial applications than home uses.
Disadvantages
 Costs
 Not widely supported
 Many, many different kinds of SCSI interfaces
 SCSI drives have a higher RPM, creating more noise and heat
Advantages
 Faster
 Wide range of applications
 Better scalability and flexibility in Arrays (RAID)
 Backward compatible with older SCSI devices
 Better for storing and moving large amounts of data
 Tailor made for 24/7 operations
 Reliability

Identify the components of the following portable systems and their unique ports

- laptop
- palmtop
- notebook
- docking station

TROUBLE SHOOTING EQUIPMENT


 Measuring and testing equipment are used as tools in troubleshooting or diagnosing computer faults.

1. DIGITAL MULTIMETERS

 A digital multimeter (DMM) is a test tool used to measure two or more electrical
values—principally voltage (volts), current (amps) and resistance (ohms). It is a standard
diagnostic tool for technicians in the electrical/electronic industries.
 Digital multimeters combine the testing capabilities of single-task meters—the voltmeter
(for measuring volts), ammeter (amps) and ohmmeter (ohms). Often they include a
number of additional specialized features or advanced options. Technicians with specific
needs, therefore, can seek out a model targeted for particular tasks.

 It can be used to measure AC and DC voltage, AC and DC current, and resistance by
changing the settings and range of the multimeter.
 It combines the functions of a voltmeter, ammeter, and ohmmeter.
 Some also have additional features like diode testing, continuity testing, transistor testing
and capacitance testing. The measuring range can be set by changing the range setting of
the meter.
 By having this tool, one will be able to measure the resistance of a component, the
capacitance of a capacitor, open or short circuit of a component, current that flows
through the circuit, the differential voltage between two points, type of transistors
whether it is PNP or NPN, and a host of other applications.
 There are two types of meter, i.e. digital and analog. The digital type has an LCD
display, whereas the analog type has a needle moving over a linear/non-linear scale printed
behind it, and is read according to the range setting.

DMM facilities

While the facilities that a digital multimeter can offer are much greater than their analogue
predecessors, the cost of DMMs is relatively low. DMMs are able to offer as standard the basic
measurements that would typically include:

 Current (DC)
 Current (AC)
 Voltage (DC)
 Voltage (AC)
 Resistance

However, using integrated circuit technology, most DMMs are able to offer additional test
capabilities. These may include some of the following:
 Capacitance
 Temperature
 Frequency
 Transistor test - hfe, etc
 Continuity (buzzer)
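
Because a DMM reads volts, amps and ohms, a common sanity check is Ohm's law, V = I x R: measure any two quantities and calculate the third. A trivial sketch of the arithmetic (the readings are assumed values for illustration):

    def expected_resistance(volts, amps):
        # Ohm's law rearranged: R = V / I
        return volts / amps

    measured_v = 5.0       # volts read across a component
    measured_i = 0.025     # amps read through it (25 mA)
    print(f"Expected resistance: {expected_resistance(measured_v, measured_i):.0f} ohms")   # 200 ohms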

Pre-caution

1) Disconnect it before adjusting the range switch.


2) Check the setting of the range switch before connecting to the circuit.
3) When not in use, do not leave the meter set to a current range, because on that setting the
meter has a low resistance and can easily be damaged.
4) Do not use the meter if the meter or the test leads look damaged.
5) Do not measure resistance in a circuit when power is applied to the circuit.
6) Do not apply more than the rated voltage between any input jack and the common point.
7) When making measurements, keep your fingers behind the finger guards on the test
probes.

2. LOGIC PROBES

 Logic probes test for the presence or absence of low-voltage signals that represent digital data.
These data, symbolized by the binary numbers 0 and 1, are electrically defined in many circuits as
0 and 5 volts, respectively—though in practice, the actual voltages of the 0 and 1 values depend
entirely on the circuit.
 A logic probe tester is used for probing and analyzing logic circuits.
 Logic probe: This is a very simple instrument used to detect logic states, or give a very
basic indication of a changing logic state. It typically has a single LED to indicate the
logic state. However it gives no real indication whether any logic pattern is correct.

Logic probe basics

A logic probe is able to give an indication of the logic state of a line carrying a digital signal. The
logic probe indicates whether there is a logic state "1" or "0", normally using an LED as the
indicator. Often the LED on the logic probe will use different colours to indicate different states.

A logic probe normally may be capable of indicating up to four different states:

 Logic high: If the logic circuit is at a logic or digital high voltage, the logic probe will
indicate this on its interface - typically this will be red.
 Logic low: Again the logic probe will indicate a logic or digital low. The most common
colour for this is green.
 Pulses: The logic probe is likely to incorporate a pulse detection circuit. When the line
is active a third colour, possibly amber, will be indicated. The logic probe may well
incorporate circuitry to detect very short pulses and in this way indicate when the line is
active. Sometimes the length of the pulses may be indicated by the brightness of the LED.
 Line tri-stated: Often it is possible for lines to be tri-stated, i.e. the output device has its
output turned off and no real state is defined. Many logic probes are able to indicate this
state by having all indicators turned off.

Some logic probes may have a control to select the logic family being tested - different logic
families have slightly different high and low voltage levels.
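
Conceptually, the probe compares the measured voltage against the thresholds of the selected logic family. A minimal sketch using typical TTL thresholds (below about 0.8 V is low, above about 2.0 V is high, in between is undefined); the threshold figures are common textbook values, not taken from this document:

    def classify_logic_level(volts, low_max=0.8, high_min=2.0):
        # Classify a measured voltage against logic thresholds (TTL-style defaults)
        if volts <= low_max:
            return "logic low"
        if volts >= high_min:
            return "logic high"
        return "undefined / floating"

    for v in (0.2, 1.4, 3.3):
        print(v, "->", classify_logic_level(v))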

Another facility that some logic probes may include is an audible indication of the logic state.
This feature is particularly useful when using a probe as eyes may need to be trained on the
circuit and not on the logic probe itself.

Logic probe advantages -

 Low cost: A logic probe does not contain much circuitry, and the display is very
rudimentary. Therefore the cost of manufacture is very low.
 Ease of use : To use a logic probe typically requires the connection of power leads and
then connecting the probe to the required point on the circuit.

Logic probe disadvantages -

 Very rough measurement: The nature of the logic probe means that only an indication
of the presence of a logic signal can be detected.
 Poor display: A logic probe only uses a few LEDs to indicate the nature of the logic
signal. As a result, little information can be displayed about the nature of the logic signal
that is detected.

A logic probe tester is a cheap and relatively simple item of test equipment. It is versatile and
very transportable, and it also is able to provide a quick test for many circuits. However it is not
nearly as flexible as an oscilloscope or a logic analyzer. A logic probe can be used for quick
testing, whereas for more in-depth testing more sophisticated test equipment is needed.


 Logic probes, as shown, are extremely simple and useful devices that are designed to help
you detect the logic state of an IC.
 Logic probes can show you immediately whether a specific point in the circuit is low,
high, open, or pulsing.
 A high is indicated when the light at the end of the probe is lit and a low is indicated
when the light is extinguished.
 Some probes have a feature that detects and displays high-speed transient pulses as small
as 5 nanoseconds wide.
 These probes are usually connected directly to the power supply of the device being
tested, although a few also have internal batteries.
 Since most IC failures show up as a point in the circuit stuck either at a high or low level,
these probes provide a quick, inexpensive way for you to locate the fault.
 They can also display that single, short-duration pulse that is so hard to catch on an
oscilloscope.

The ideal logic probe will have the following characteristics:

1. Be able to detect a steady logic level


2. Be able to detect a train of logic levels
3. Be able to detect an open circuit
4. Be able to detect a high-speed transient pulse
5. Have overvoltage protection
6. Be small, light, and easy to handle
7. Have a high input impedance to protect against circuit loading

3. LOGIC PULSER

 Another extremely useful device for troubleshooting logic circuits is the logic pulser.
 It is similar in shape to the logic probe and is designed to inject a logic pulse into the circuit
under test.
 Logic pulsers are generally used in conjunction with a logic clip or a logic probe to help you trace
the pulse through the circuit under test or verify the proper operation of an IC.
 Some logic pulsers have a feature that allows a single pulse injection or a train of pulses.
 Logic pulsers are usually powered by an external DC power supply but may, in some cases, be
connected directly to the power supply of the device under test.


 A logic pulser is a handheld logic generator used for injecting controlled pulses into digital logic
circuits such as microprocessor systems.
 A probe utilized to pulse, or change the logic state of, a logic circuit. Such an instrument may be
used in conjunction with a logic probe, to trace a pulse throughout a circuit being tested. Usually
used for inspection and troubleshooting.
 It is used to stimulate a node in a system of logic integrated circuits.
 It is also used to measure the continuity of base lines.
 It may be used in conjunction with a logic probe.
 Gates may be tested by pulsing their inputs with the logic pulser while monitoring the output
with a logic probe.

 This LOGIC PULSER is capable of delivering pulses of various compositions to any type
of circuit you wish to test.
 Basically it is designed to complement the LOGIC PROBE and can be used in situations
where the LOGIC PROBE is not so effective.
 It is an improvement over a multimeter in that it has an audible output and is NOT
triggered when measuring across a diode.

The pulser is a simple instrument. It has a probe body equipped with a needle-point electrode. When
touched to a conductor, circuit node or device terminal, it injects a single pulse or pulse train which can
be seen at the output (if it is present) by the logic probe, multimeter or oscilloscope. So we see that while
the pulser is used somewhat less frequently than the logic probe, it is nevertheless quite essential when an
input is needed.


Logic pulser

4. LOGIC ANALYZER

A logic analyser is used for verifying and debugging the operation of digital designs
looking at logic states and timings.

A logic analyzer is the ultimate tool for analyzing logic patterns.

 In the process of debugging and validating a digital system, one of the common
tasks a designer needs to do is the acquisition of digital waveforms.
 The basic problem that a logic analyzer solves is that a digital circuit is too fast to be
observed by a human being, and has too many channels to be examined with an
oscilloscope.
 It has an oscilloscope-style display that shows the digital states of the system under test. It is
a tool that allows numerous digital waveforms to be acquired simultaneously. The
acquisition can be clocked internally, or the test system can provide the sample clock.
 It would trigger on a complicated sequence of digital events, and then copy a large
amount of digital data from the system under test. The captured data will enable the user
to locate failure of the digital system.

Typical digital oscilloscopes have up to four signal inputs. Logic analyzers have
channel counts ranging from 34 to 136. Each channel inputs one digital signal. It measures and
analyzes signals differently than an oscilloscope.
 It doesn’t measure analog details. Instead, it detects logic threshold levels. When you
connect an analyzer to a digital circuit, you’re only concerned with the logic state of the
signal.
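
The essence of logic analyzer triggering is scanning many sampled channels for a particular pattern and then capturing the samples around it. A toy sketch of the idea; the sample data and trigger word are invented purely for illustration:

    # Each tuple is the state of four channels captured on one sample clock edge
    samples = [
        (1, 0, 0, 1),
        (1, 1, 0, 1),
        (0, 1, 1, 0),    # <- the pattern we want to trigger on
        (0, 0, 1, 0),
        (1, 0, 1, 1),
    ]
    trigger = (0, 1, 1, 0)

    def find_trigger(samples, trigger):
        # Return the index of the first sample matching the trigger word, or None
        for i, word in enumerate(samples):
            if word == trigger:
                return i
        return None

    hit = find_trigger(samples, trigger)
    if hit is not None:
        print("Triggered at sample", hit, "- captured:", samples[hit:])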

Functions

 Debug and verify digital system operation


 Trace and correlate many digital signals simultaneously
 Detect and analyze timing violations and transients on buses
 Trace embedded software execution

Oscilloscope vs Logic Analyzer-Difference between Oscilloscope and Logic Analyzer

 Both the oscilloscope and the logic analyzer are used to troubleshoot electronic
circuits. Each has its own specific niche applications.
Oscilloscope

 Oscilloscope taps the analog signals from the board and displays the voltage variation of
the signal over time on the screen. There are two types of oscilloscopes based on their
measurement and display methodology. Analog Oscilloscope directly plots the signal.
Digital Oscilloscope first converts the analog signal to digital signal and then plots it.
 Analog Oscilloscope does not store past samples. Digital Oscilloscope stores long
duration waveform data for analysis.

Logic Analyzer

 Unlike Oscilloscopes, logic analyzer is designed to analyze the data flow over multiple
buses of the microprocessor or microcontroller systems. This will help in analyzing many
aspects of embedded hardware and software.
 In order to analyze multiple channels, logic analyzers are used along with multichannel
probes. It analyzes digital signals.

The following steps are performed by a logic analyzer for test and measurement:

• Make connection using probe with SUT (System Under Test)


• Setup clock mode and triggering as required.
• Acquire the digital logic states (in 1's and 0's) from the multiple channels.
• Analyze the logic signals and displays the same for further study.

The following table summarizes the differences between an oscilloscope and a logic analyzer.

Oscilloscope | Logic Analyzer
It will have fewer input channels, a maximum of up to 4. | It will have more input channels compared to an oscilloscope; usually 34 to 136 channels are available.
Oscilloscope measures analog signals. | It does not measure analog details. It detects logic threshold levels, looking for just two logic levels when interfaced with a digital circuit.
It does not troubleshoot software execution. | It traces embedded software execution with specific hardware activities on the circuit.
It uses a single-channel probe at a time. | It uses multi-channel probes to acquire a large number of signals simultaneously.
Oscilloscope analyzes only hardware. | Logic Analyzer analyzes both software and hardware.
Oscilloscope analyzes analog signals only. | Logic Analyzer analyzes digital signals only.


Oscilloscope | Logic Analyzer
Oscilloscope is used for measuring analogue waveforms: amplitude, phase values, or edge measurements such as rise times, etc. | A logic analyser is used for verifying and debugging the operation of digital designs, looking at logic states and timings.
Typical oscilloscope applications: | Typical logic analyser applications:
 Investigating waveform shapes, ringing, rise time, etc. |  Investigate the system operation.
 Characterise aspects like waveform jitter and stability. |  Correlate a large number of digital signals.
 Measure signal amplitudes. |  Detect timing violations.
 Detect transients, and unwanted pulses. |  Trace embedded software operation.

5. MULTI-CHANNEL OSCILLOSCOPE

Oscilloscopes

 Oscilloscopes (or scopes) test and display voltage signals as waveforms, visual representations of
the variation of voltage over time. The signals are plotted on a graph, which shows how the signal
changes. The vertical (Y) axis represents the voltage measurement and the horizontal (X) axis
represents time.
 It is a graph-displaying device that displays the electrical signal present at its
probes. It shows in real time how signals change over time. Usually the Y axis represents
the voltage and the X axis time.

Most oscilloscopes have an intensity or brightness control that can be adjusted. The display is produced
by a spot that periodically sweeps across the screen from left to right. In electronics project design,
the oscilloscope is one of the handiest pieces of equipment and is worth investing in.

Its functions are:

1. Shows and calculates the frequency and amplitude of an oscillating signal.
2. Shows the voltage and time of a particular signal. This is the main use of all the
functions described here.
3. Helps to troubleshoot any malfunctioning component of a project by looking at the
expected output after a particular component.
4. Shows the AC and DC content of a signal.

When there is a change in the height of the waveform, it means that the voltage has changed. If
the line is horizontal, it means that there is no change in voltage for that period of time. Some of
the common waveforms measured using an oscilloscope include sine, square, triangular and sawtooth waves.

There are basically two types of oscilloscope, namely analog and digital. Analog uses continuously
variable voltages. Digital uses discrete binary numbers that represent voltage samples.

 Analog osc. works by directly applying a voltage being measured to an electron beam
moving across the osc. screen. The voltage deflects the beam up and down
proportionally, tracing the waveform on the screen.
 Digital osc. samples the waveform and uses an analog to digital converter to convert the
voltage measured into digital format. It then uses this digital format to display the
waveform. It enables one to capture and view events that may happen only once.

They can process the digital waveform data or send the data to a computer for processing. Also,
they can store the digital waveform data for later viewing and printing.
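
To make the sampling idea concrete, the short sketch below digitises a sine wave the way a digital
oscilloscope's front end would: sample at fixed intervals, then quantise each sample with an ADC. The
1 kHz signal, 100 kHz sample rate and 8-bit resolution are illustrative assumptions, not the figures of
any particular instrument.

import math

# Minimal sketch of digital-oscilloscope acquisition: sample an analog
# waveform at fixed intervals and quantise each sample with an ADC.
# The 1 kHz signal, 100 kHz sample rate and 8-bit resolution are assumptions.
SIGNAL_FREQ = 1_000        # Hz
SAMPLE_RATE = 100_000      # samples per second
ADC_BITS = 8               # 8-bit converter gives codes 0..255

def adc(voltage, full_scale=1.0):
    # Map -full_scale..+full_scale onto the integer code range 0..255.
    code = round((voltage + full_scale) / (2 * full_scale) * (2**ADC_BITS - 1))
    return max(0, min(2**ADC_BITS - 1, code))

samples = []
for n in range(20):                                   # capture 20 samples
    t = n / SAMPLE_RATE
    samples.append(adc(math.sin(2 * math.pi * SIGNAL_FREQ * t)))

print(samples)   # stored waveform data, ready to display or analyse later

Because the samples are just numbers in memory, they can be kept, replayed or sent to a computer,
which is exactly the advantage the text attributes to digital oscilloscopes.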


6. CURRENT TRACER

 It detects current flow.


 Detects the magnetic field that surrounds a conductor (or trace) through which current
is flowing.
 It does not matter even if the conductor is insulated
 A device that can be attached to any accessible point in the circuit to physically trace circuit
wiring current flow
 To work properly, the current tracer must be held vertically above the conductor.
 Current tracing is a way of troubleshooting electronics.
 Some boards provide jumpers for this purpose: the jumper is disconnected and an ammeter is
connected in its place to measure the current.
 If the measured current is lower or higher than the rated value, there is a fault in that part of the circuit.
 A current tracer is a sensitive instrument; it can trace current in underground or hidden in-wall wiring.

The current tracer is responsive to current over a wide range, typically from 1 mA to 1 A. A
prominent indicator lamp resides at the probe tip and its brightness varies with the amount of
current. A control can be used to adjust sensitivity. Then, the procedure is to move along a
conductor or trace, taking note of the current. An abrupt change signals that a current sink or
source has been located.

7. SIGNATURE ANALYZER

 The signature analyzer was an item of test equipment used for fault finding in
digital / logic electronic circuits.
 In view of the significant increase in complexity of logic circuits, the signature analyzer is
little used these days.
 Signature analyzer: Signature analyzers are able to detect digital waveform patterns.
These are typically indicated by their hex value, making it easy to define the pattern at a
given point in the circuit under given conditions. These analyzers are ideal for field repair
and other similar applications where simple analysis of complex waveforms is required.

Options for digital / logic analysis

 When testing and analysing digital or logic circuits there are several options that are
available. Each of these has its own advantages and limitations and a choice can be made
dependent upon these.
 The signature analyser forms an ideal form of test instrument for analysing digital or
logic patterns in a circuit in some circumstances. It is often ideal for field repair and
applications where it can detect logic patterns in a circuit under given conditions, thereby
enabling detection of correct or incorrect operation of a circuit or board.

Signature analyzer basics

 A signature analyzer is normally used for checking data on given nodes within a logic
system such as a microprocessor board. A known operational scenario is set up, e.g. a test
mode, and the data on various nodes may be monitored. The signature analyzer converts
the serial data into a hexadecimal equivalent of the data pattern - this is the signature.
Typically this signature is four digits long, although different signature analyzers may
have different lengths.
 The basic signature analyser takes in the input from the node under test, using a clock
from the system. Start and stop pulses are captured to begin and end the sample. The
pulses from the node under test are then passed into a shift register to produce the
hexadecimal equivalent of the waveform.

Basic block diagram of a typical signature analyzer

 If the signature captured by the signature analyzer aligns with that of the same node on a
known good board, then this indicates this area is operating correctly.

 By probing different areas of a board with the signature analyzer, and comparing the
results with expected values, often detailed in a repair and service manual, it is possible
to locate the problem area.
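
As a rough illustration of the idea, the sketch below compresses a sampled bit stream into a four-digit
hex signature using a 16-bit linear-feedback shift register. The tap positions and the captured bits are
hypothetical; a real signature analyser uses its own documented feedback polynomial, so the values
produced here are for illustration only.

# Minimal sketch: compressing a serial bit stream into a 4-digit hex
# "signature" with a 16-bit linear-feedback shift register (LFSR).
# The tap positions and the captured bits below are hypothetical.

def signature(bits, taps=(15, 11, 8, 6)):
    reg = 0
    for bit in bits:
        feedback = bit
        for t in taps:
            feedback ^= (reg >> t) & 1          # XOR in the selected taps
        reg = ((reg << 1) | feedback) & 0xFFFF  # shift the feedback bit in
    return format(reg, "04X")                   # report as 4 hex digits

# Bits captured between the start and stop edges on a known-good node:
good_node = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
print(signature(good_node))   # compare against the value in the service manual

If the same node on a faulty board produces a different signature under the same start, stop and clock
conditions, that area of the circuit is suspect.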

Signature analyser instrument

A typical signature analyser will consist of the basic instrument with a number of switches,
inputs, outputs and a display like the image shown below.

Typical Signature Analyzer Front Panel

On the front panel are several different items:

 Gate: This indicates the input gate of the signature analyser is open and samples are
being taken.
 Display: The display indicates the signature taken, and in this case it shows the value
'832A' corresponding to the hex value of the signature of the data being sampled.
 Unstable signature: As the name implies, this indicates that the signature is unstable,
i.e. varying and the data is not satisfactory.
 Probe test: This is a connection and it is used for testing the probe to ensure the correct
operation of the probe and the system.
 Line: This is the power On / Off switch
 Start: This is used to select which edge is required to initiate the sampling period -
positive or negative going edge.
 Stop: Like the start, this is used to select which edge is required to terminate the
sampling period - positive or negative going edge.
 Clock: This is used for selecting which edge of the clock pulse is used for sampling the
line state.
 Hold: This is used to hold a single signature regardless of any changes in the incoming
data, for example if the input conditions change.
 Self-Test: This is used to put the test instrument into a self-test mode for checking the
operation of the instrument as well as the probes.

In addition to the test instrument itself, a special test pod may also be used. This enables the active
circuitry of the analyser to be placed closer to the circuit under test to remove or reduce the
effects of loading and long leads on rise times, etc.

The test pod will include the test probe for the point to be monitored as well as the clock input.

STATIC CHARGE / STATIC ELECTRICITY

 It is the accumulation of electrical charges in an object


 Static electricity is often generated through tribocharging.
 Tribocharging is a contact electrification process that enables buildup of static electricity due to
touching or rubbing of surfaces in specific combinations of two dissimilar materials.
 Two dissimilar materials' surfaces just have to touch and then separate for an electric charge to
develop.
 When there is physical contact, a chemical bond is produced between the surfaces to some extent,
and charges are transferred from one surface to another.
 Examples of tribocharging include walking on a rug, rubbing a plastic comb against dry hair,
rubbing a balloon against a sweater, or removing some types of plastic packaging.
 In all these cases, the breaking of contact between two materials results in tribocharging
 This causes a charge imbalance between those dissimilar objects, thus creating a difference of
electrical potential that can lead to an ESD event.

 Electrostatic Discharge (ESD) is the term used to describe the transfer of static electricity from
one object to another.

 Electrostatic discharge (ESD) is the sudden flow of electricity between two electrically charged
objects caused by contact, an electrical short, or dielectric breakdown.
 A buildup of static electricity can be caused by tribocharging or by electrostatic induction.
 The ESD occurs when differently-charged objects are brought close together or when the
dielectric between them breaks down, often creating a visible spark.

CHARGE ACCUMULATION

 Whenever two dissimilar materials come in contact, electrons move from one surface to
the other. As the materials are separated, more electrons remain on one surface
than the other, so one material takes on a positive charge and the other a negative charge.
 Mechanisms for Charge Accumulation:
 Contact and Frictional
 Double layer
 Induction
 Transport

CAUSES OF STATIC CHARGE/ ELECTRICITY

 Triboelectrification
o When a person’s fingertips touch a computer keyboard, they exchange electrons,
with one object becoming electrically positive and the other negative.
o When that person’s fingertips touch another object that has an opposite charge,
this causes electrons to flow back and forth.
 Low humidity and cold, dry areas.
o These conditions make air much less likely to conduct electrical charges.
o In moist air, a charge dissipates without notice because water conducts electricity.
o Dry air lacks this conductivity, so a charge continues to build up on objects until it
is intense enough to overcome the natural resistance of air over short distances.
o Static is not just a small charge. The most visually striking display of static
electricity is lightning, which reaches for miles from clouds to the ground. A
charge can be either positive or negative, but electrons, the negatively charged
particles, are what flow from one place to another.

SOURCES/ CAUSES OF ESD

 Static electricity.

Static electricity is often generated through tribocharging, the separation of electric


charges that occurs when two materials are brought into contact and then separated.
Examples of tribocharging include walking on a rug, rubbing a plastic comb against dry
hair, rubbing a balloon against a sweater, ascending from a fabric car seat, or removing
some types of plastic packaging. In all these cases, the breaking of contact between two
materials results in tribocharging, thus creating a difference of electrical potential that can
lead to an ESD event.

 Electrostatic induction.

This occurs when an electrically charged object is placed near a conductive object
isolated from the ground. The presence of the charged object creates an electrostatic field
that causes electrical charges on the surface of the other object to redistribute. Even
though the net electrostatic charge of the object has not changed, it now has regions of
excess positive and negative charges. An ESD event may occur when the object comes
into contact with a conductive path. For example, charged regions on the surfaces of
styrofoam cups or bags can induce potential on nearby ESD sensitive components via
electrostatic induction and an ESD event may occur if the component is touched with a
metallic tool.

 Energetic charged particles impinging on an object.

This causes increasing surface and deep charging. This is a known hazard for most
spacecraft.

 All materials (insulators and conductors alike)

They are lumped together in what is known as the triboelectric series, which defines the
materials associated with positive or negative charges. Positive charges accumulate
predominantly on human skin or animal fur. Negative charges are more common to
synthetic materials such as Styrofoam or plastic cups. The amount of electrostatic charge
that can accumulate on any item is dependent on its capacitance, its capacity to store charge. For
example, the human body has a capacitance of roughly 250 picofarads and can charge to a
potential as high as 25,000 V.
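
Treating the body as a simple capacitor, the quoted figures can be turned into a charge and a stored
energy with Q = C × V and E = ½ × C × V². The sketch below only performs that arithmetic; the
capacitor model itself is a simplification.

# Worked example: charge and energy on a body modelled as a capacitor,
# using the figures quoted above (250 pF, 25,000 V).
C = 250e-12           # body capacitance in farads (250 pF)
V = 25_000            # potential in volts

Q = C * V             # charge in coulombs:  Q = C * V    -> 6.25e-6 C
E = 0.5 * C * V**2    # energy in joules:    E = C*V^2/2  -> about 0.078 J

print(f"charge = {Q * 1e6:.2f} microcoulombs")
print(f"energy = {E * 1e3:.1f} millijoules")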

HOW DOES ESD OCCUR?

 ESD can occur in a variety of forms. One of the most common is through human contact
with sensitive devices. Humans can only feel an ESD event when it exceeds about 4,000 V.
 A recent investigation found the human body and its clothing capable of storing between
500 V and 2,500 V of electrostatic potential during the normal workday. This is far above the level that
damages circuits, yet below the human perception threshold. Other sources of ESD
damage to equipment include:
 Troubleshooting electronic equipment or handling of printed circuit boards without using
an electrostatic wrist strap;
 Placement of synthetic materials (i.e. plastic, Styrofoam, etc.) on or near electronic
equipment; and
 Rapid movement of air near electronic equipment (including using compressed air to
blow dirt off printed circuit boards, circulating fans blowing on electronic equipment, or
using an electronic device close to an air handling system).
 In all of these scenarios, the accumulation of static charges may occur, but you may never
know. Furthermore, a charged object does not necessarily have to contact the item for an
ESD event to occur.

WAYS IN WHICH STATIC ELECTRICITY CAN DAMAGE COMPONENTS

 Static electricity can be dangerous to delicate circuits such as a computer motherboard.


This is why people working with computers ground themselves before touching any of
the internal parts. By touching a metal surface, the person dissipates any built up, excess
charge.

 Your sensitive computer components such as the Processor (CPU), hard drive, memory, main
board chips and expansion cards could be severely damaged.

 Whenever you open your computer and expose your components, you run the risk of
damaging your computer system with static electricity which has been built up by your
body.
 Your computer components inside your case (especially your hard drive) are prone to
being affected by ElectroStatic Discharge.

 It is very possible for you to be damaging the sensitive electronic components inside
your case with ElectroStatic Discharge without knowing it.
 If you felt a discharge, it possibly was more than 2,000 volts. A discharge as low as 200
volts can destroy your computer chip.
 It is possible that this damage might not be noticeable immediately. Your components
might just start a degradation process that slowly kills your computer parts. You could
(for example) start getting intermittent breakdowns until your computer stops functioning
properly.

CMOS Chips

 According to PCComputerNotes.com, newer integrated computer circuits known as


complementary metal-oxide semiconductor (CMOS) chips are more susceptible to ESD
than older chips. Most central processing units and system memory cards are CMOS
chips.

Immediate Failure

 A common result of ESD damage is that it causes an immediate failure of a chip. This
may occur when the computer owner installs a new RAM card into the computer without
using an anti-static strap or some other grounding method. The static discharge destroys
the new RAM card and when the computer is turned on, it will not boot up properly,
according to PCComputerNotes.com. This type of problem usually can be resolved only
by replacing the damaged memory card.

Delayed Failure

 Another common occurrence with ESD is that a chip is damaged by static discharge, but
it may take weeks or sometimes months for the chip to completely fail, according to
PCComputerNotes.com. In this case, the computer may experience occasional failures
that can be hard to trace back to a damaged chip.

How does ESD damage electronic circuitry?

 ESD is a tiny version of lightning. As the current dissipates through an object, it's seeking
a low impedance path to ground to equalize potentials. In most cases, ESD currents will
travel to ground via the metal chassis frame of a device. However, it's well known that
current will travel on every available path. In some cases, one path may be between the
PN junctions on integrated circuits to reach ground. This current flow will burn holes
visible to the naked eye in an integrated circuit, with evidence of heat damage to the
surrounding area. One ESD event will not disrupt equipment operation. However,
repeated events will degrade equipment's internal components over time.


STATIC CHARGE PRECAUTIONS

To prevent damage to your sensitive electronic components in your system from ElectroStatic
Discharge, you should:

 Shut down your computer and turn off the switch on your surge protector leaving the
surge protector plugged in so that it will be grounded.
 Before touching any of the components inside your case, you should ground yourself to
discharge any static buildup.
 Make sure to discharge the static electricity by touching the metal chassis whilst wearing
an antistatic wrist strap.
 Antistatic Wrist Strap - Worn to protect your delicate computer components from static
electricity damage.
 Any static electrical charge that builds up on your body is then immediately transferred to
ground.
 Handle your expansion cards by their edges.
 Do not do any work inside your computer while standing on carpet.
 You should leave your components in the antistatic bags they were purchased in
until you are ready to use them, since placing them outside their bags makes them
susceptible to ESD.
 Do not work on your computer in cold, dry conditions since this encourages static
electricity. You should try to raise the humidity to between 50 to 60%.
 Do not wear woolen or nylon clothing while working on or repairing your computer.
 Keep your clothing away from drives, boards, memory, etc. Clothing could possibly be
electrically charged, especially when it is dry and cold.
 Leave your PC plugged into an AC outlet with the power switch turned off. This places
ground on the metal case.

You would then have to work with one hand always touching a metal part of the case.
ElectroStatic Discharge buildup would then be immediately grounded just like it would
with an antistatic wrist strap.

I would not recommend this method since it is very impractical; instead, protect your
computer components from static electricity by using an antistatic wrist strap.

Antistatic Wrist Strap:


When working inside a computer case, one of the most important tools that you should have
available is an antistatic wrist strap.

Damage to your computer can be prevented if you use an antistatic wrist strap. This strap fits on
your wrist (See Diagram A) while working on your computer components and has a wire
attachment with an alligator clip which is connected to your case.


A CPU Heat Sink being installed

Antistatic Bag:
If you are removing a component for an extended period of time while you are building your
computer, you should store the component in an antistatic bag.

i. Antistatic Bag

 Antistatic bags are good for storing spare adapters and motherboards when the parts are not in use.
 However, antistatic bags lose their effectiveness after a few years.
 An antistatic bag is a bag used for storing or shipping electronic components which are prone to damage caused by
electrostatic discharge (ESD)

ii. Antistatic Wrist Strap

 An antistatic wrist strap allows the technician and the computer to be at the same voltage potential.
 As long as the technician and the computer or electronic part are at the same potential, static electricity does not occur.
 Technicians should use an ESD wrist strap whenever possible.
 A resistor inside an antistatic wrist strap protects the technician in case something accidentally touches the ground to
which the strap attaches while he or she is working inside a computer.
 An antistatic wrist strap, ESD wrist strap, or ground bracelet is an antistatic device used to safely ground a person
working on very sensitive electronic equipment, to prevent the buildup of static electricity on their body, which can
result in electrostatic discharge (ESD).
 It is used in the electronics industry by workers working on electronic devices which can be damaged by ESD, and also
sometimes by people working around explosives, to prevent electric sparks which could set off an explosion.
 It consists of an elastic band of fabric with fine conductive fibers woven into it, attached to a wire with a clip on the end
to connect it to a ground conductor.
 The fibers are usually made of carbon or carbon-filled rubber, and the strap is bound with a stainless steel clasp or plate.
 They are usually used in conjunction with an antistatic mat on the workbench, or a special static-dissipating plastic laminate on
the workbench surface.

iii. Antistatic Mat

 Antistatic mats are available to place underneath a computer being repaired; such a mat may have a snap for connecting
the antistatic wrist strap.
 An antistatic floor mat or ground mat is one of a number of antistatic devices designed to help eliminate static
electricity.
 It does this by having a controlled low resistance: a metal mat would keep parts grounded but would short out exposed
parts; an insulating mat would provide no ground reference and so would not provide grounding.
 Typical resistance is on the order of 10^5 to 10^8 ohms between points on the mat and to ground.
 The mat needs to be grounded (earthed). This is usually accomplished by plugging into the grounded line in an
electrical outlet. It is important to discharge at a slow rate, therefore a resistor should be used in earthing the mat (see the worked example after this list).
 The resistor, as well as allowing high-voltage charges to leak through to earth, also prevents a shock hazard when
working with low-voltage parts. Some ground mats allow you to connect an antistatic wrist strap to them. Versions are
designed for placement on both the floor and desk.
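
To see why the series resistor still discharges a person quickly enough, the sketch below computes the
RC time constant for a 1 megohm bleed resistor (a commonly used value, assumed here) and the rough
250 pF body capacitance mentioned earlier in this document.

# Illustrative RC discharge calculation for a grounded mat or wrist strap.
# R = 1 megohm is an assumed bleed-resistor value; C is the rough body
# capacitance quoted earlier.
R = 1e6         # ohms
C = 250e-12     # farads

tau = R * C     # time constant in seconds (about 0.25 ms)
print(f"time constant      : {tau * 1e6:.0f} microseconds")
print(f"~99% discharged in : {5 * tau * 1e3:.2f} ms (5 time constants)")

A quarter of a millisecond is effectively instantaneous for a person, yet the resistor keeps the peak
discharge current low and removes the shock hazard described above.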


MEMORY AND FACTORS AFFECTING ACCESSING OF MEMORY

MAXIMIZING PERFORMANCE

Computer processor speeds have been increasing rapidly over the past several years. Increasing the speed
of the processor increases the overall performance of the computer. However, the processor is only one
part of the computer, and it still relies on other components in a system to complete functions. Because all
the information the CPU will process must be written to or read from memory, the overall performance of
a system is dramatically affected by how fast information can travel between the CPU and main memory.

So, faster memory technologies contribute a great deal to overall system performance. But increasing the
speed of the memory itself is only part of the solution. The time it takes for information to travel between
memory and the processor is typically longer than the time it takes for the processor to perform its
functions. The technologies and innovations described in this section are designed to speed up the
communication process between memory and the processor.

Describe factors affecting memory speed within a computer system:


 Wait states
 Memory interleave
 Page mode
 Cache
 Pipe lining
 Shadow RAM

Wait states

 A wait state is a situation in which the computer processor experiences a delay, mainly when
accessing external memory or a device that is slow in its response.
 Therefore, wait states are considered wasteful in processor performance.
 However, modern-day designs try to either eliminate or minimize wait states.
 These include caches, instruction pre-fetch and pipelines, simultaneous multithreading and branch
prediction.
 While all of these techniques cannot eliminate wait states entirely, they can significantly reduce
the problem when working together.
 Wait states are also used to reduce energy consumption, allowing the processor to slow down and
pause if there is no work for the CPU.
 When the processor requires access to the main memory, it starts by placing the address of the
information requested into the address bus.
 Following this, the processor needs to wait for the response, which may come back several cycles
later. Every one of these cycles is spent in a wait state.
 Microprocessors that power modern computers run extremely fast.
 However, the same cannot be said of the memory technology, which has not yet caught up to
similar speeds.
 A typical AMD Athlon 64 X2 and the Intel Core run at speeds of several GHz, meaning a clock
cycle is typically less than a nanosecond (0.3–0.5 ns).
 On the other hand, main memory has latency in the range of 15-30 ns. This mismatch results in a
wait state for the microprocessor, as a result slowing the overall speed of operation.
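
Using the figures above, the number of wait cycles is roughly the memory latency divided by the CPU
cycle time. The short sketch below does that arithmetic for an assumed 3 GHz clock and a 20 ns
main-memory latency; both numbers are illustrative.

# Rough wait-state estimate: CPU cycles that pass while waiting on main memory.
# The clock speed and latency are illustrative values, not measurements.
cpu_clock_hz = 3.0e9                  # assumed 3 GHz processor
cycle_time_ns = 1e9 / cpu_clock_hz    # about 0.33 ns per cycle
memory_latency_ns = 20                # in the 15-30 ns range quoted above

wait_cycles = memory_latency_ns / cycle_time_ns
print(f"cycle time  : {cycle_time_ns:.2f} ns")
print(f"wait cycles : about {wait_cycles:.0f} cycles per main-memory access")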

Memory interleave

 The term interleaving refers to a process in which the CPU alternates communication between
two or more memory banks.
 Interleaving technology is typically used in larger systems such as servers and workstations.
Here’s how it works: every time the CPU addresses a memory bank, the bank needs about one
clock cycle to “reset” itself.
 The CPU can save processing time by addressing a second bank while the first bank is resetting.
 Interleaving can also function within the memory chips themselves to improve performance.
 For example, the memory cells inside SDRAM chip are divided into two independent cell banks,
which can be activated simultaneously.
 Interleaving between the two cell banks produces a continuous flow of data. This cuts down the
length of the memory cycle and results in faster transfer rates.
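
A minimal way to picture interleaving is the mapping of consecutive block addresses onto alternating
banks, so one bank can be accessed while the other is still resetting. The two-bank geometry in the
sketch below is an assumption made for illustration.

# Minimal sketch of two-way memory interleaving: consecutive block addresses
# alternate between bank 0 and bank 1 (the two-bank layout is assumed).
NUM_BANKS = 2

def bank_and_row(block_address):
    bank = block_address % NUM_BANKS   # low bit selects the bank
    row = block_address // NUM_BANKS   # remaining bits select the row
    return bank, row

for address in range(8):
    bank, row = bank_and_row(address)
    print(f"block {address} -> bank {bank}, row {row}")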


Page mode

 Paging is a memory management scheme by which a computer stores and retrieves data from
secondary storage for use in main memory.
 In this scheme, the operating system retrieves data from secondary storage in same-size blocks
called pages.
 Paging is an important part of virtual memory implementations in modern operating systems,
using secondary storage to let programs exceed the size of available physical memory.
 Paging is a DRAM memory management method used by computers that allows data to be saved
and obtained from a specified storage space to be used in the main memory.
 Under this method, data is gathered using blocks of the same size (called pages), which allows
for noncontiguous use of the physical address space.
 When accessing memory within the same row the computer keeps the row address and only
changes the column. Paging helps avoid fragmentation, reduces power consumption, allows for
faster access, and resolves other problems that can be caused when the physical address space is
used contiguously.
 Computer memory that uses this method is called Page Mode or Fast Page Mode, which is
sometimes abbreviated as FP or FPM.
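
As a rough illustration of the row-and-column idea behind fast page mode, the sketch below splits
addresses into a row and a column for a hypothetical DRAM array with 1024 columns per row; accesses
that land in the already-open row only need a new column address.

# Rough illustration of DRAM fast page mode: addresses within the same row
# only need a new column address. The 1024-column geometry is hypothetical.
COLUMNS_PER_ROW = 1024

def row_and_column(address):
    return address // COLUMNS_PER_ROW, address % COLUMNS_PER_ROW

open_row = None
for address in [5000, 5001, 5002, 9000, 9001]:
    row, col = row_and_column(address)
    if row == open_row:
        print(f"addr {address}: row {row} already open, send column {col} only (fast)")
    else:
        print(f"addr {address}: open new row {row}, then column {col} (slower)")
        open_row = row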

Bursting
 Bursting is another time-saving technology.
 The purpose of bursting is to provide the CPU with additional data from memory based on the
likelihood that it will be needed.

 So, instead of the CPU retrieving information from memory one piece at a time, it grabs a
block of information from several consecutive addresses in memory (see the sketch at the end of this section).
 This saves time because there’s a statistical likelihood that the next data address the processor
will request will be sequential to the previous one.
 This way, the CPU gets the instructions it needs without having to send an individual request for
each one. Bursting can work with many different types of memory and can function when reading
or writing data.
 Both bursting and pipelining became popular at about the same time that EDO technology
became available. EDO chips that featured these functions were called “Burst EDO” or “Pipeline
Burst EDO” chips.
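
The sketch below illustrates the burst read described in this section: a single request returns a small
block of consecutive addresses rather than one word. The burst length of 4 and the toy memory contents
are assumptions.

# Sketch of a burst read: one request to 'start' returns a block of
# consecutive words (burst length of 4 assumed for illustration).
BURST_LENGTH = 4

def burst_read(memory, start):
    # Return the word at 'start' plus the next consecutive words.
    return [memory[start + i] for i in range(BURST_LENGTH)]

memory = {address: address * 10 for address in range(32)}   # toy memory
print(burst_read(memory, 8))    # [80, 90, 100, 110]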

Cache

 Cache memory is a relatively small amount (normally less than 1MB) of high speed memory that
resides very close to the CPU. Cache memory is designed to supply the CPU with the most
frequently requested data and instructions. Because retrieving data from cache takes a fraction of
the time that it takes to access it from main memory, having cache memory can save a lot of time.
If the information is not in cache, it still has to be retrieved from main memory, but checking
cache memory takes so little time, it’s worth it. This is analogous to checking your refrigerator for
the food you need before running to the store to get it: it’s likely that what you need is there; if
not, it only took a moment to check.
 The concept behind caching is the “80⁄20” rule, which states that of all the programs, information,
and data on your computer, about 20% of it is used about 80% of the time. (This 20% data might
include the code required for sending or deleting email, saving a file onto your hard drive, or
simply recognizing which keys you’ve touched on your keyboard.) Conversely, the remaining
80% of the data in your system gets used about 20% of the time. Cache memory makes sense
because there’s a good chance that the data and instructions the CPU is using now will be needed
again.
How cache memory works
 Cache memory is like a “hot list” of instructions needed by the CPU. The memory controller
saves in cache each instruction the CPU requests; each time the CPU gets an instruction it needs
from cache - called a “cache hit” - that instruction moves to the top of the “hot list.” When cache
is full and the CPU calls for a new instruction, the system overwrites the data in cache that hasn’t
been used for the longest period of time. This way, the high priority information that’s used
continuously stays in cache, while the less frequently used information drops out.
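
The replacement behaviour described above is essentially a least-recently-used (LRU) policy. The sketch
below models that "hot list" with a tiny fully associative cache; the capacity of two lines and the toy
access pattern are assumptions made purely for illustration.

# Tiny model of the "hot list" behaviour described above: a fully
# associative cache with least-recently-used (LRU) replacement.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = OrderedDict()            # address -> cached data

    def access(self, address, load_from_memory):
        if address in self.lines:             # cache hit
            self.lines.move_to_end(address)   # move to the top of the hot list
            return self.lines[address], True
        data = load_from_memory(address)      # cache miss: go to main memory
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)    # evict the least recently used line
        self.lines[address] = data
        return data, False

cache = LRUCache(capacity=2)
main_memory = lambda address: address * 100   # stand-in for slow main memory
for address in [1, 2, 1, 3, 2]:
    _, hit = cache.access(address, main_memory)
    print(address, "hit" if hit else "miss")
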
Levels of cache
 Today, most cache memory is incorporated into the processor chip itself; however, other
configurations are possible. In some cases, a system may have cache located inside the processor,
just outside the processor on the motherboard, and/or it may have a memory cache socket near the
CPU, which can contain a cache memory module. Whatever the configuration, any cache
memory component is assigned a “level” according to its proximity to the processor. For
example, the cache that is closest to the processor is called Level 1 (L1) Cache, the next level of
cache is numbered L2, then L3, and so on. Computers often have other types of caching in
addition to cache memory. For example, sometimes the system uses main memory as a cache for

the hard drive. While we won’t discuss these scenarios here, it’s important to note that the term
cache can refer specifically to memory and to other storage technologies as well.
 You might wonder: if having cache memory near the processor is so beneficial, why isn’t cache
memory used for all of main memory? For one thing, cache memory typically uses a type of
memory chip called SRAM (Static RAM), which is more expensive and requires more space per
megabyte than the DRAM typically used for main memory. Also, while cache memory does
improve overall system performance, it does so up to a point. The real benefit of cache memory is
in storing the most frequently-used instructions. A larger cache would hold more data, but if that
data isn’t needed frequently, there’s little benefit to having it near the processor.
 It can take as long as 195ns for main memory to satisfy a memory request from the CPU.
External cache can satisfy a memory request from the CPU in as little as 45ns.
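
Those two figures can be combined into an average memory access time with the usual formula
AMAT = hit time + miss rate × miss penalty. The sketch below works through it with an assumed 90%
cache hit rate; only the 45 ns and 195 ns values come from the text.

# Average memory access time (AMAT) using the access times quoted above.
# The 90% hit rate is an assumed value for illustration.
cache_time_ns = 45       # external cache access time (from the text)
memory_time_ns = 195     # main memory access time (from the text)
hit_rate = 0.90          # assumed

# Every access pays the cache lookup; misses additionally pay main memory.
amat = cache_time_ns + (1 - hit_rate) * memory_time_ns
print(f"AMAT = {amat:.1f} ns")    # 45 + 0.1 * 195 = 64.5 ns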

Pipe lining

 Pipelining is a computer processing technique where a task is divided into a series of stages with
some of the work completed at each stage. Through the division of a larger task into smaller,
overlapping tasks, pipelining is used to improve performance beyond what is possible with non-
pipelined processing. Once the flow through a pipeline is started, execution rate of the
instructions is high, in spite of the number of stages through which they progress.

 Pipelining is the process of accumulating and executing computer instructions and tasks from the
processor via a logical pipeline.
 It allows storing, prioritizing, managing and executing tasks and instructions in an orderly
process.
 Pipelining does not decrease the time for individual instruction execution.
 Instead, it increases instruction throughput.
 The throughput of the instruction pipeline is determined by how often an instruction exits the
pipeline
 Pipelining is also known as pipeline processing.

 Pipelining is primarily used to create and organize a pipeline of instructions for a computer
processor to processes in parallel.
 Typically, pipelining is an ongoing process where new tasks are added frequently and completed
tasks are removed. Each of these tasks has different stages or segments and leaves the pipeline
after the completion of the processing at a specified time.
 All of these tasks are executed in parallel and are provided a fair share of processor time based on
their size, complexity and priority.
 Pipelining can include any tasks or instructions that need processing time or power.

Pipelining

 Pipelining is the process of accumulating instruction from the processor through a pipeline. It
allows storing and executing instructions in an orderly process. It is also known as pipeline
processing.
 Pipelining is a technique where multiple instructions are overlapped during execution. Pipeline is
divided into stages and these stages are connected with one another to form a pipe like structure.
Instructions enter from one end and exit from another end.
 Pipelining increases the overall instruction throughput.
 In pipeline system, each segment consists of an input register followed by a combinational circuit.
The register is used to hold data and combinational circuit performs operations on it. The output
of combinational circuit is applied to the input register of the next segment.

 Pipeline system is like the modern day assembly line setup in factories. For example in a car
manufacturing industry, huge assembly lines are setup and at each point, there are robotic arms to
perform a certain task, and then the car moves on ahead to the next arm.

Instruction Pipeline

 In this a stream of instructions can be executed by overlapping fetch, decode and execute phases
of an instruction cycle. This type of technique is used to increase the throughput of the computer
system.
 An instruction pipeline reads instruction from the memory while previous instructions are being
executed in other segments of the pipeline. Thus we can execute multiple instructions

simultaneously. The pipeline will be more efficient if the instruction cycle is divided into
segments of equal duration.
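
A common way to quantify this is the classic timing estimate: with k stages of equal duration and n
instructions, a pipelined execution takes roughly k + (n - 1) cycles instead of n × k. The sketch below
evaluates the resulting speedup for assumed values of k and n, ignoring hazards and stalls.

# Classic pipeline timing estimate: k equal stages, n instructions,
# one cycle per stage, no hazards or stalls. k and n are assumptions.
k = 5        # pipeline stages (e.g. fetch, decode, execute, memory, write-back)
n = 100      # number of instructions

non_pipelined_cycles = n * k
pipelined_cycles = k + (n - 1)
speedup = non_pipelined_cycles / pipelined_cycles

print(f"non-pipelined : {non_pipelined_cycles} cycles")
print(f"pipelined     : {pipelined_cycles} cycles")
print(f"speedup       : {speedup:.2f}x")   # approaches k for large n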

Advantages of Pipelining
1. The cycle time of the processor is reduced.
2. It increases the throughput of the system
3. It makes the system reliable.

Disadvantages of Pipelining
1. The design of a pipelined processor is complex and costly to manufacture.
2. Instruction latency is increased.

Shadow RAM

 A RAM copy of a PC's ROM BIOS.


 In order to improve performance, the BIOS, which is stored in a ROM chip, is copied to and
executed from RAM.
 RAM chips are accessed faster than ROMs.
 Shadow RAM is a copy of the Basic Input/Output System (BIOS) routines from read-only
memory (ROM) into a special area of random access memory (RAM) so that they can be
accessed more quickly.
 Access in shadow RAM is typically in the 60-100 nanosecond range whereas ROM access is in
the 125-250 ns range.
 In some operating systems such as DOS, certain BIOS routines are not only used during the boot
or startup of the system, but also during normal operation, especially to drive the video display
terminal.
 In Windows and OS/2, however, these routines are not used and the use of shadow RAM is not
necessary. In some systems, the user can turn the use of shadow RAM off or on.

DEFINE CACHE MEMORY

 Cache memory, or CPU cache, is a type of memory that services the CPU.
 It is faster than main memory, physically located closer to the processor, and allows the CPU
to execute instructions and read and write data at a higher speed.
 Instructions and data are transferred from main memory to the cache in blocks to enhance
performance.
 Cache memory is typically static RAM (SRAM) and is identified by level.
 Level 1 (L1) cache is built directly into the CPU chip.
 Level 2 cache (L2) feeds the L1 cache.
 L2 can be built into the CPU chip, reside on a separate chip, or be a separate bank of chips on the
system board.
 If L2 is built into the CPU, then a level 3 cache (L3) may also be present on the system board.

 Cache memory is a high speed memory in the CPU that is used for faster access to data. It provides
the processor with the most frequently requested data. Cache memory increases performance and allows
faster retrieval of data.
 Cache memory is a small-sized type of volatile computer memory that provides high-speed data
access to a processor and stores frequently used computer programs, applications and data. It is the
fastest memory in a computer, and is typically integrated onto the motherboard and directly
embedded in the processor or main random access memory (RAM).
 Cache memory provides faster data storage and access by storing instances of programs and data
routinely accessed by the processor. Thus, when a processor requests data that already has an instance
in the cache memory, it does not need to go to the main memory or the hard disk to fetch the data.
 Cache memory can be primary or secondary cache memory, with primary cache memory directly
integrated into (or closest to) the processor. In addition to hardware-based cache, cache memory also
can be a disk cache, where a reserved portion on a disk stores and provides access to frequently
accessed data/applications from the disk.

What is the purpose/ functions of cache memory?

 Used for faster access to data.


 allows the CPU to execute instructions and read and write data at a higher speed
 It provides the processor with the most frequently requested data.
 Cache memory increases performance and allows faster retrieval of data.
 Provides high-speed data access to a processor and stores frequently used computer programs,
applications and data.
 Instructions and data are transferred from main memory to the cache in blocks to enhance performance.
 Stores frequently used computer programs, applications and data.
 Provides faster data storage and access by storing instances of programs and data routinely accessed
by the processor.

What type of RAM is used for cache memory?

 SRAM is often used as cache memory for the CPU.


 In SRAM, a bit of data is stored using the state of a six transistor memory cell. This form of RAM
is more expensive to produce, but is generally faster and requires less dynamic power than
DRAM

Explain External cache memory

 An external cache is any cache memory or type of central processing unit (CPU) cache that is
housed, placed or installed external to a computer processor.
 It provides high speed data storage and processing services to the computer processor, its
primary/native cache and the main memory.
 External cache is also known as secondary cache.
 External cache is part of the main random access memory (RAM) or an independent component
installed on the motherboard. Some external caches are built and integrated on the processor dye
itself, but not directly the part of it.
 It serves as an intermediate memory between the processor and RAM. External cache is
implemented in several layers - all providing different speeds and capacity levels. For example,
L2 and L3 caches are common examples or layers of external cache. A L2 cache is faster but has
less storage capacity than a L3 cache.


Describe the following levels or types of cache memory

1. Level 1 (L1) cache or Primary Cache

L1 is the primary cache memory. The size of the L1 cache is very small compared with the others, typically
between 2 KB and 64 KB, depending on the computer processor. It is embedded in the computer's
microprocessor (CPU). Instructions required by the CPU are searched for first in the L1 cache.
Examples of CPU registers are the accumulator, address register, program counter, etc.

 A level 1 cache (L1 cache) is a memory cache that is directly built into the microprocessor, which
is used for storing the microprocessor’s recently accessed information, thus it is also called the
primary cache. It is also referred to as the internal cache or system cache.
 L1 cache is the fastest cache memory, since it is already built within the chip with a zero wait-
state interface, making it the most expensive cache among the CPU caches. However, it has
limited size. It is used to store data that was accessed by the processor recently, critical files that
need to be executed immediately and it is the first cache to be accessed and processed when the
processor itself performs a computer instruction.
 In more recent microprocessors, the L1 cache is divided equally into two: a cache that is used to
keep program data and another cache that is used to keep instructions for the microprocessor.
Some older microprocessors, on the other hand, make use of the undivided L1 cache and uses it
to store both program data and microprocessor instructions.
 It is implemented with the use of static random access memory (SRAM), which comes in
different sizes depending on the grade of the processor. This SRAM makes use of two transistors
per bit. The two transistors form a circuit known as a 'flip-flop’ since it has two states it can flip
between; the second transistor manages the output of the first transistor. For as long as power is
supplied to the circuit, it can hold data without external assistance.
 All L1 cache designs follow the same process; the control logic of the L1 cache stores frequently
used data in the cache and only updates external memory when the CPU hands over control to
other bus masters when peripheral devices are doing direct memory access.


2. Level 2 (L2) cache or Secondary Cache

L2 is the secondary cache memory. The L2 cache is more capacious than L1, typically between
256 KB and 512 KB. The L2 cache is located on the computer microprocessor. If the instructions are not found in the L1
cache, the microprocessor then searches the L2 cache. A high-speed system
bus interconnects the cache to the microprocessor.

 A level 2 cache (L2 cache) is a CPU cache memory that is located outside and separate from the
microprocessor chip core, although, it is found on the same processor chip package. Earlier L2
cache designs placed them on the motherboard which made them quite slow.
 Including L2 caches in microprocessor designs are very common in modern CPUs even though
they may not be as fast as the L1 cache, but since it is outside of the core, the capacity can be
increased and it is still faster than the main memory.
 A level 2 cache is also called the secondary cache or an external cache.
 The level 2 cache serves as the bridge for the process and memory performance gap. Its main goal
is to provide the necessary stored information to the processor without any interruptions or any
delays or wait-states. It also helps in reducing the access time of data, especially in certain events
wherein that specific data was already accessed before, so it doesn’t have to be loaded again.
 Modern microprocessors sometimes include a feature called data pre-fetching, and the L2 cache
boosts this feature by buffering the program instructions and data that is requested by the
processor from the memory, serving as a closer waiting area compared to the RAM.
 The L2 cache was first introduced with the Intel Pentium and Pentium Pro powered computers.
Since then, it has always been included with the process, except in the case the early versions of
Celeron processors. Although it is not as fast as the L1 cache due to its location, it is still faster
than both L3 cache and the main memory. It is also the second priority of the computer when
looking at its performance in implementing instructions.

3. Level 3 (L3) cache or Main Memory

The L3 cache is larger in size but slower than L1 and L2; its size is between 1 MB and 8 MB. In
multicore processors, each core may have separate L1 and L2 caches, but all cores share a common L3 cache. The L3
cache is roughly double the speed of the RAM.

 A Level 3 (L3) cache is a specialized cache that is used by the CPU and is usually built onto
the motherboard and, in certain special processors, within the CPU module itself. It works
together with the L1 and L2 cache to improve computer performance by preventing bottlenecks
due to the fetch and execute cycle taking too long. The L3 cache feeds information to the L2
cache, which then forwards information to the L1 cache. Typically, its memory performance is
slower compared to L2 cache, but is still faster than the main memory (RAM).
 The L3 cache is usually built onto the motherboard between the main memory (RAM) and the L1
and L2 caches of the processor module. This serves as another bridge to park information like
processor commands and frequently used data in order to prevent bottlenecks resulting from the
fetching of these data from the main memory. In short, the L3 cache of today is what the L2
cache was before it got built-in within the processor module itself.
 The CPU checks for information it needs from L1 to the L3 cache. If it does not find this info in
L1 it looks to L2 then to L3, the biggest yet slowest in the group. The purpose of the L3 differs
depending on the design of the CPU. In some cases the L3 holds copies of instructions frequently
used by multiple cores that share it. Most modern CPUs have built-in L1 and L2 caches per core
and share a single L3 cache on the motherboard, while other designs have the L3 on the CPU die
itself.

READ ONLY MEMORY
ROM stands for Read Only Memory. It is memory from which we can only read; we cannot write to it.
This type of memory is non-volatile. The information is stored permanently in such memories during
manufacture. A ROM stores the instructions that are required to start a computer; this operation is
referred to as bootstrapping. ROM chips are used not only in computers but also in other electronic items
such as washing machines and microwave ovens.

ROM CHIP

 Located on the motherboard


 Contains instructions that can directly be accessed by the CPU.
 It contains basic instructions for booting the computer and loading the operating system.
 The contents cannot be changed or erased.
 It is non-volatile, which means, it retains information even if the computer is powered off.
 It is sometimes called the firmware, but firmware is actually the software stored in a ROM Chip.
 The memory from which we can only read but cannot write on it.
 This type of memory is non-volatile.
 The information is stored permanently in such memories during manufacture.
 A ROM, stores such instructions that are required to start a computer.
 This operation is referred to as bootstrap.
 ROM chips are not only used in the computer but also in other electronic items like washing
machine and microwave oven.

Following are the various types of ROM

1. MROM (Masked ROM)


The very first ROMs were hard-wired devices that contained a pre-programmed set of data or
instructions. These kinds of ROMs are known as masked ROMs, which are inexpensive.

2. PROM (Programmable Read Only Memory)


PROM is read-only memory that can be modified only once by a user. The user buys a blank
PROM and enters the desired contents using a PROM programmer. Inside the PROM chip there are
small fuses which are burnt open during programming. It can be programmed only once and is
not erasable.

3. EPROM (Erasable and Programmable Read Only Memory)


The EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40 minutes.
Usually, an EPROM eraser achieves this function. During programming, an electrical charge is
trapped in an insulated gate region. The charge is retained for more than ten years because the
charge has no leakage path. For erasing this charge, ultra-violet light is passed through a quartz
crystal window(lid). This exposure to ultra-violet light dissipates the charge. During normal use
the quartz lid is sealed with a sticker.

4. EEPROM (Electrically Erasable and Programmable Read Only Memory)


The EEPROM is programmed and erased electrically. It can be erased and reprogrammed about
ten thousand times. Both erasing and programming take about 4 to 10 ms (milli second). In
EEPROM, any location can be selectively erased and programmed. EEPROMs can be erased one
byte at a time, rather than erasing the entire chip. Hence, the process of re-programming is
flexible but slow.

Advantages of ROM

 Non-volatile in nature
 These cannot be accidentally changed
 Cheaper than RAMs
 Easy to test
 More reliable than RAMs
 These are static and do not require refreshing
 Its contents are always known and can be verified


ROM on the Mother Board


• PROM (Programmable ROM)
• EPROM (Erasable PROM)
• EPROM can further be classified into the following categories:
  1) EEPROM (Electrically Erasable PROM)
  2) UVEPROM (Ultra Violet Erasable PROM)
• ROM stores information that must be constantly available and cannot be changed.
• Non Volatile: Data does not lost, when computer is switched off.
• PROM requires special hardware called a PROM burner to update the BIOS.
• EPROM also requires special hardware called a PROM burner to update the BIOS.
• EEPROM erases the data by applying a higher-than-normal voltage to update the BIOS.
• UVEPROM erases the data by applying UV rays to update the BIOS.

Differentiate between Random Access Memory (RAM) and Read Only Memory (ROM)
 RAM stands for Random-Access Memory; ROM stands for Read-Only Memory.
 RAM is a read and write memory; ROM is normally read-only and cannot be overwritten (however, EPROMs can be reprogrammed).
 RAM is faster; ROM is relatively slower than RAM.
 RAM is a volatile memory: the data in RAM will be lost if the power supply is cut off; ROM is permanent memory: data in ROM stays as it is even if the power supply is removed.
 There are mainly two types of RAM: static RAM and dynamic RAM; there are several types of ROM: Erasable ROM, Programmable ROM, EPROM, etc.
 RAM stores all the applications and data while the computer is up and running; ROM usually stores the instructions that are required for starting (booting) the computer.
 The price of RAM is comparatively high; ROM chips are comparatively cheaper.
 RAM chips are bigger in size; ROM chips are smaller in size.
 The processor can directly access the contents of RAM; the contents of ROM are usually first transferred to RAM and then accessed by the processor, so that they can be accessed at a faster speed.
 RAM is often installed with a large capacity; the storage capacity of the ROM installed in a computer is much less than that of the RAM.

ROM or Read Only Memory, Computers almost always contain a small amount of read-only memory that
holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to. It is non-volatile
which means once you turn off the computer the information is still there.


PROM, short for programmable read-only memory, is a memory chip on which data
can be written only once. Once a program has been written onto a PROM, it remains there
forever. Unlike RAM, PROMs retain their contents when the computer is turned off. The
difference between a PROM and a ROM (read-only memory) is that a PROM is manufactured as
blank memory, whereas a ROM is programmed during the manufacturing process. To write data
onto a PROM chip, you need a special device called a PROM programmer or PROM burner. The
process of programming a PROM is sometimes called burning the PROM.

EPROM (erasable programmable read-only memory) is a special type of PROM that can be
erased by exposing it to ultraviolet light. Once it is erased, it can be reprogrammed. An EEPROM
is similar to a PROM, but requires only electricity to be erased.

EEPROM- Acronym for electrically erasable programmable read-only memory. Pronounced


double-ee-prom or e-e-prom, an EEPROM is a special type of PROM that can be erased by
exposing it to an electrical charge. Like other types of PROM, EEPROM retains its contents even
when the power is turned off. Also like other types of ROM, EEPROM is not as fast as RAM.
EEPROM is similar to flash memory (sometimes called flash EEPROM). The principal
difference is that EEPROM requires data to be written or erased one byte at a time whereas flash
memory allows data to be written or erased in blocks. This makes flash memory faster.

Describe the various types of RAM technologies, for example EDO RAM

RANDOM ACCESS MEMORY (RAM)

RAM is the best known form of computer memory. The read and write (R/W) memory of a computer is called
RAM: the user can write information to it and read information from it. With RAM, any location can be
reached in a fixed (and short) amount of time after specifying its address.

RAM is a volatile memory, which means information written to it can be accessed only as long as the power is on.
As soon as the power is off, it cannot be accessed, so the RAM is then essentially
empty. RAM holds data and processing instructions temporarily until the CPU needs them.

RAM is considered “random access” because you can access any memory cell directly if you know the
row and column that intersect at that cell. RAM is made from electronic chips of so-called
semiconductor material, just like processors and many other types of chips. In RAM, transistors make up
the individual storage cells, which can each “remember” an amount of data, for example 1 or 4 bits,
as long as the PC is switched on. Physically, RAM consists of small electronic chips which are mounted in
modules (small printed circuit boards). The modules are installed in the PC’s motherboard using sockets;
there are typically 2, 3 or 4 of these.

There are two basic types of RAM


 Static RAM (SRAM)
 Dynamic RAM (DRAM)

Static RAM (SRAM)

The word static indicates that the memory retains its contents as long as power is being supplied.
However, data is lost when the power goes down, because SRAM is still volatile. SRAM chips use a matrix of six
transistors and no capacitors. The transistors do not require power to prevent leakage, so SRAM does not
need to be refreshed on a regular basis.
Because of the extra space in the matrix, SRAM uses more chips than DRAM for the same amount of
storage space, thus making the manufacturing costs higher. So SRAM is used as cache memory and has
very fast access.

Characteristic of the Static RAM
 It has long life
 There is no need to refresh
 Faster
 Used as cache memory
 Large size
 Expensive
 High power consumption

Dynamic RAM (DRAM)

DRAM, unlike SRAM, must be continually refreshed in order to maintain the data. This is done by
placing the memory on a refresh circuit that rewrites the data several hundred times per second. DRAM is
used for most system memory because it is cheap and small. All DRAMs are made up of memory cells
which are composed of one capacitor and one transistor.

Characteristics of the Dynamic RAM

 It has a short data lifetime
 Needs to be refreshed continuously
 Slower compared to SRAM
 Used as main system memory
 Smaller in size
 Less expensive
 Lower power consumption

TYPES OF DRAM PACKAGES AND DRAM MEMORY

Single Inline Memory Module (SIMM)

 Single inline memory module (SIMM) is a type of RAM (random access memory) that was
popular from the early 1980s to the late 1990s. SIMMs have 32-bit data paths and were standardized
under the JEDEC JESD-21C standard. Non-IBM PC computers, UNIX workstations and the Mac
IIfx used non-standard SIMMs.

 Wang Laboratories invented and patented the SIMM in 1983. SIMMs with 30-pin variants were
used in 386, 486, Macintosh Plus, Macintosh II, Quadra and Wang VS systems. The 72-pin
variant was used in IBM PS/2, 486, Pentium, Pentium Pro and some Pentium II systems.
 Dual inline memory module (DIMM) has replaced SIMM beginning with the Intel P-5 Pentium
processors. SIMMs have redundant contacts on both sides of the module, whereas DIMMs have
separate electrical contacts on each side. DIMMs have 64-bit data paths, as opposed to SIMMs,
which had 32-bit data paths. Intel Pentiums required that SIMMs be installed in pairs; DIMMs
eliminated that requirement.

SIMM Technologies

• SIMM stands for Single Inline Memory Module.
• The SIMM was invented by James Clayton of Wang Laboratories in 1983.
• SIMMs are rated by speed, measured in nanoseconds.
• This speed is a measure of access time.
• Common SIMM speeds are 60, 70, or 80 ns.
• The smaller the speed rating, the faster the chip.
• Available in 30-pin and 72-pin sizes.
30 PIN SIMM
• 20 pins for Row or Column addressing.
• 8 pins for additional bus width.
• Measures 3.5 inches wide and 1 inch high.
• A notch on the edge of the package prevents you from inserting a SIMM with the wrong
orientation.
• Two holes on either side of SIMM allow sockets to latch the module securely in the place.
• 30 pin SIMMs can transfer 8 bits (9 bits in parity versions) of data at a time.
• 30-pin SIMMs are mostly used in Intel 286, 386 and 486 systems.

72 PIN SIMMs
• Packs 4 bytes wide banks on a single module.
• A notch in the centre of the SIMM prevents you from accidentally sliding a 30-pin SIMM into a
72-pin socket.
• A 72-pin SIMM won’t fit into a 30-pin socket.
• A notch on the left side prevents improper orientation of the SIMM.

• Measures 4.25 inches wide and 0.38 inch thick; often double-sided to achieve higher capacities.
• 72-pin SIMMs can transfer 32 bits (36 bits in parity versions) of data at a time.
• 72-pin SIMMs are widely used in Intel 486, Pentium, Pentium Pro and Pentium II systems.

SIMMs use FPM and EDO technologies to access data.

1) FPM (Fast Page Mode): Improved on earlier memory types by sending the row address just
once for many accesses to memory near that row (earlier memory types required a complete
row and column address for each memory access).
2) EDO (Extended Data Out): EDO is an improvement over FPM memory. It is faster because it
eliminates the roughly 10 ns delay the controller would otherwise wait before issuing the next
memory address.


Dual In-line Memory Module (DIMM)

 A dual inline memory module (DIMM) is a small-scale circuit board that holds memory chips on
the motherboard. DIMM incorporates a series of memory called dynamic random access memory
(DRAM), which provides primary storage, the main memory that continually reads and executes
stored instructions or data directly to the CPU.
 DIMM is an attempt to improve on the earlier single inline memory module (SIMM), which used
matched pairs on Pentium-class systems. A DIMM provides a full 64-bit data path on a single circuit
board, thus increasing memory speed and capacity per module, and it is also easier to insert than a SIMM.
 DIMM contains a series of DRAM integrated circuits. The modules are attached to a printed
circuit board, with several RAM chips on a single circuit board, which is connected to the
motherboard. Because the memory is random access, the processor can reach any part of the
memory directly without having to proceed in sequential order from a starting place; direct memory
access (DMA) additionally lets peripherals transfer data to and from this memory without CPU involvement.
 RAM chips can be installed individually on a motherboard or in sets of chips on a miniature
circuit board that plugs into the motherboard. The three most common circuit boards are:

1. Single Inline Memory Module (SIMM): A single in-line memory module with a 32-bit data path
2. Rambus Inline Memory Module (RIMM): Similar to SIMM but with a higher memory speed
(RDRAM). Both SIMM and RIMM modules are installed in matched pairs.
3. Dual Inline Memory Module (DIMM): Has separate electrical contacts on each side of the
module. Its DRAM chips store each bit of data in a separate capacitor, and the module connects to
the motherboard through the system bus.

 Some memory modules have two or more independent sets of DRAM chips. These modules are
connected to the same address and data bus. Each set of modules is called a rank. Only one rank
can be accessed at a time because all ranks share the same bus. DIMM circuits are now being
made with up to four ranks per module.

DIMM Technologies
• DIMM stands for Dual Inline Memory Module.
• A DIMM is a memory module that has pins on opposite sides of the circuit board that do not
connect and thus form two sets of contacts.
• Contain 168 or 184 pins.
• Hold between 8 MB and 2 GB of RAM.
• Newer DIMMs hold chips that use synchronous DRAM (SDRAM), which is DRAM that runs in
sync with the system clock and thus runs faster than other types of DRAM.
• 168 pins divided into 3 groups.
• The first group runs from pin 1 to pin 10.
• The second group runs from pin 11 to pin 40.
• The third group runs from pin 41 to pin 84.
• Pins 85 to 168 are on the opposite side of the module.
• A symmetrical arrangement of notches prevents improper insertion into its socket.
• Measures 5.25 inches wide and about 1 inch high.
• A DIMM closely resembles a SIMM but is physically larger.


DIMMs also use EDO and BEDO technologies to access data.

1) EDO (Extended Data Out): EDO is an improvement over FPM memory. It is faster because it
eliminates the roughly 10 ns delay the controller would otherwise wait before issuing the next
memory address.
2) BEDO (Burst Extended Data Out): BEDO is a refined version of EDO with shorter memory
access times than EDO. BEDO never became popular because Intel did not support it.

Rambus Inline Memory Module (RIMM)

 Rambus Inline Memory Module, The memory module used with RDRAM chips.
 It is similar to a DIMM package but uses different pin settings.
 Rambus trademarked the term RIMM as an entire word.
 It is the term used for a module using Rambus technology.
 It is sometimes incorrectly used as an acronym for Rambus Inline Memory Module.
 A RIMM contains 184 or 232 pins. Note: all RIMM sockets must be populated; any empty socket must
hold a continuity module (C-RIMM) to terminate the memory channel.

184 pin RIMM (RDRAM)

Rambus Dynamic Random Access Memory (RDRAM)

 Rambus Dynamic Random Access Memory (RDRAM) is a memory subsystem designed to


transfer data at faster rates. RDRAM is made up of a random access memory (RAM), a RAM
controller and a bus path that connects the RAM to microprocessors and other PC devices.

RDRAM was introduced in 1999 by Rambus, Inc. RDRAM technology was considerably faster
than older memory models, like the Synchronous DRAM (SDRAM). Typical SDRAM has a data
transfer rate of up to 133 MHz, while RDRAM can transfer data at speeds of up to 800 MHz.

RDRAM is also known as Direct RDRAM or Rambus.

 RDRAM uses Rambus Inline Memory Module (RIMM) technology, which is installed in pairs,
transfers data from rising and falling clock signal edges and doubles physical clock rates. RIMM
data travels on a 16-bit bus that is similar to a packet network with transmitted data groups.
Internal RIMM speeds operate from 400 MHz to 800 MHz via a 400-MHz system bus. A
standard 400 MHz Rambus is known as PC-800 Rambus.

The RDRAM 16-bit bus uses a set of data processing features with a steady sequence stream,
known as pipelining, which facilitates the output of one instruction prior to the input of the next
instruction. Pipelining transfers RAM data to cache memory, allowing up to eight simultaneous
data processing series. Pipelining also improves performance by increasing average successful
message delivery rates when processing streams of data.

Design guidelines and a validation program by Intel and Rambus were intended to ensure
RDRAM and RIMM module stability and to enhance earlier memory module requirements.
Although RDRAM's increased bandwidth allowed faster data transfer, RAM cells experienced
significant drops in performance, resulting in latency with additional RIMMs.

Latency improved in later RDRAM models, which were more expensive than Double Data Rate
(DDR) SDRAM and Streaming Data Request (SDR) SDRAM. By 2004, Intel discontinued
RDRAM in favor of DDR SDRAM and DDR-2 SDRAM modules.

Small Outline Dual Inline Memory Module (SO-DIMM)

 Small outline dual inline memory module (SO-DIMM) is a type of computer memory that is
smaller than the regular DIMM used in desktop PCs. SO-DIMM uses the same circuitry and
microchips as other memory modules, but is made in a smaller form factor to fit devices that do
not have much space such as laptops, high-end printers, enterprise-grade networking hardware
and even small-form-factor PCs like those that use mini-ITX motherboards.
 SO-DIMMs are noticeably smaller than regular DIMMs, about only half the length of the latter.
However, they are more or less equal in power and voltage ratings to DIMMs, so their small size
does not necessarily mean that they have lower memory capacities or lower performance. Their
smaller size means that device manufacturers can easily design them into their devices without
problem. Laptops usually have user-accessible SO-DIMM slots on the bottom, but slots are
sometimes located in different places depending on the brand and model.
 The first SO-DIMMs used 72-pin connectors, which meant they could only be used for 32-bit
addressing. Although modern SO-DIMMs are nearly on par with their DIMM counterparts, they
still lag behind them in performance and capacity, pending miniaturization of the newer
technology being applied on DIMMs.

SO-DIMM pin configurations:

 72-Pin SO-DIMM
 100-Pin SO-DIMM — SDRAM (PC-2100/2700)/EDO/firmware
 144-Pin SO-DIMM — SDRAM (PC-66/100/133/)/EDO
 200-Pin SO-DIMM — SDRAM (PC-2100/2700/3200) (PC2-3200/4200/5300/6400)
 204-Pin SO-DIMM — SDRAM (PC3-8500/10666)


LAPTOP MEMORY

SO-DIMM

Short for Small Outline DIMM, a small version of a DIMM used commonly in notebook computers. The
72-pin SO-DIMM supports 32-bit transfers, while the 144-pin and 200-pin SO-DIMMs support a full 64-bit transfer.

(72, 144, 200) SO-DIMM

Micro-DIMM

Short for Micro Dual Inline Memory Module, a competing laptop memory module, commonly available
in 144-pin and 172-pin versions.

(144, 172) Micro-DIMM

SIMM

Acronym for single in-line memory module, a small circuit board that can hold a group of memory chips.
Typically, a SIMM holds up to 8 (on Macintoshes) or 9 (on PCs) RAM chips; on PCs, the ninth chip is often
used for parity error checking. Unlike memory chips, SIMMs are measured in bytes rather than bits, and
they are easier to install than individual memory chips. A SIMM has either 30 or 72 pins.

FPM RAM

Short for Fast Page Mode RAM, a type of Dynamic RAM (DRAM) that allows faster access to data in the
same row or page. Page-mode memory works by eliminating the need for a row address if data is located
in the row previously accessed. It is sometimes called page mode memory.

30 pin SIMM (Usually FPM or EDO RAM)

EDO DRAM

Short for Extended Data Output Dynamic Random Access Memory, a type of DRAM that is faster than
conventional DRAM. Unlike conventional DRAM which can only access one block of data at a time,
EDO RAM can start fetching the next block of memory at the same time that it sends the previous block
to the CPU.

72 pin SIMM (EDO RAM)

DIMM

Short for dual in-line memory module, a small circuit board that holds memory chips. A single in-line
memory module (SIMM) has a 32-bit path to the memory chips whereas a DIMM has a 64-bit path.
Because the Pentium processor requires a 64-bit path to memory, SIMMs must be installed two at a
time, whereas DIMMs can be installed one at a time. A DIMM contains 168 pins.

168 pin DIMM (SDRAM)

SDRAM

Short for Synchronous DRAM, a type of DRAM that can run at much higher clock speeds than
conventional memory. SDRAM synchronizes itself with the CPU's bus and is capable of running
at 133 MHz, about three times faster than conventional FPM RAM and about twice as fast as EDO DRAM.
SDRAM replaced EDO DRAM in many newer computers.
SDRAM delivers data in high-speed bursts.

DDR SDRAM

Short for Double Data Rate-Synchronous DRAM, a type of SDRAM that supports data transfers on both
edges of each clock cycle, effectively doubling the memory chip's data throughput. DDR-SDRAM is also
called SDRAM II.

184 pin DIMM (DDR-SDRAM)
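To make the "double data rate" idea concrete, the following minimal C sketch (not from the original text;
the 100 MHz clock and 64-bit module width are example figures only) computes the peak transfer rate of
an SDR module versus a DDR module at the same clock:

/* Illustrative sketch: peak transfer rate of SDR vs DDR SDRAM.
   Assumes a 64-bit (8-byte) DIMM data path and a 100 MHz clock as examples. */
#include <stdio.h>

int main(void) {
    double clock_mhz = 100.0;   /* example memory clock            */
    double bus_bytes = 8.0;     /* 64-bit DIMM data path, in bytes */

    /* SDR SDRAM: one transfer per clock cycle */
    double sdr_mb_s = clock_mhz * 1e6 * 1 * bus_bytes / 1e6;
    /* DDR SDRAM: transfers on both the rising and falling clock edges */
    double ddr_mb_s = clock_mhz * 1e6 * 2 * bus_bytes / 1e6;

    printf("SDR @ %.0f MHz: %.0f MB/s\n", clock_mhz, sdr_mb_s);
    printf("DDR @ %.0f MHz: %.0f MB/s\n", clock_mhz, ddr_mb_s);
    return 0;
}

With these example numbers the output is 800 MB/s for SDR against 1600 MB/s for DDR, which is the
doubling of throughput described above.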

DDR2-SDRAM

Short for Double Data Rate Synchronous DRAM 2, a type of DDR SDRAM that supports
higher speeds than its predecessor, DDR SDRAM.


240-pin DIMM (DDR2-SDRAM)

DDR3-SDRAM

Short for Double Data Rate Synchronous DRAM 3, a type of DDR that supports
the highest speeds of the SDRAM family.

240-pin DIMM (DDR3-SDRAM)

RIMM

Rambus Inline Memory Module, The memory module used with RDRAM chips. It is similar to a DIMM
package but uses different pin settings. Rambus trademarked the term RIMM as an entire word. It is the
term used for a module using Rambus technology. It is sometimes incorrectly used as an acronym for
Rambus Inline Memory Module. A RIMM contains 184 or 232 pins. Note: all RIMM sockets must be
populated; any empty socket must hold a continuity module (C-RIMM) to terminate the memory channel.

184 pin RIMM (RDRAM)

RDRAM

Short for Rambus DRAM, a type of memory (DRAM) developed by Rambus, Inc.
In 1997, Intel announced that it would license the Rambus technology for use on its future motherboards,
thus making it the likely de facto standard for memory architectures.


232 pin RIMM (RDRAM)

SIMM and DIMM Sockets

MEMORY MODULE FORMATS


RAM DESKTOP INSTALLATION

Note: RAM memory modules commonly come in the following sizes:

8MB, 16MB, 32MB, 64MB, 128MB, 256MB, 512MB, 1GB, 2GB, 4GB and 8GB

MAIN MEMORY-- SIMMS, DIMMS AND OTHER RAM TECHNOLOGIES

SIMM – Single Inline Memory Module Installation (30 or 72 pin)

 Insert the SIMM at a 45-degree angle, then push it upright until it locks into the retaining clips on
the sides.
 Must be installed in matched pairs.
 The first two SIMM sockets must be populated in order for the memory to work.
 Single In-line Memory Module, SIMM:
 This type of DRAM or memory package holds up to eight or nine RAM chips (8 in Macs and 9 in PCs,
where the 9th chip is used for parity checking).
 Another important factor is the bus width, which for SIMMs is 32 bits.


DIMM – Dual Inline Memory Module Installation (168, 184 or 240 pin)

 The first thing you do is open the plastic retaining clips on each side of slots you are going to use.
 Align the cut-out on the module pin connector with the engaging pin on the slot
 Holding the module upright press down both ends.
 When the module is correctly seated, retaining clips should lock automatically.
 DIMMs can be installed one at a time (unless the board uses dual channel, in which case they must be
installed in pairs).
 Dual In-line Memory Module, DIMM:
 With the increase in data bus width, DIMMs began to replace SIMMs as the predominant type of
memory module.
 The main difference between a SIMM and a DIMM is that a DIMM has separate electrical
contacts on each side of the module, while the contacts on opposite sides of a SIMM are
redundant (tied together).
 Standard SIMMs also have a 32-bit data bus, while standard DIMMs have a 64-bit data bus.

RIMM – Rambus Inline Memory Module Installation (184 or 232 pin)

 The first thing you do is open the plastic retaining clips on each side of slots you are going to use.
 Align the cut-out on the module pin connector with the engaging pin on the slot
 Holding the module upright press down both ends.
 When the module is correctly seated, retaining clips should lock automatically.
 All available RIMM slots must be populated.
 Any unpopulated slot must be filled with a C-RIMM (Continuity Rambus Inline Memory Module).
 Rambus In-line Memory Module, RIMM:
 This type of DRAM memory package is essentially the same form factor as a DIMM, but is referred to
as a RIMM because of its manufacturer and the proprietary slot it requires.

Small outline DIMM, SO-DIMM:


 This type of DRAM package is about half the size of the standard DIMM. Being smaller,
they are used in small-footprint PCs including laptops and netbooks.

Small outline RIMM, SO-RIMM:

 This type of DRAM package is a small version of the RIMM.

Describe different types of RAM packages and installation
 DIL
 SIMM
 SIP
 DIMM
 RIMM
 Interpret cycle time and memory capacity markings on memory chips

SIP

 Short for Single In-line Package, SIP is a computer chip packaging that contains one row of
connection pins, unlike dual in-line packages (DIPs) which contain two rows.
 A single inline package (SIP) is a computer chip package that contains only a single row of
connection pins. This is different from dual inline packages (DIPs), which have two rows of connection
pins.
 A single inline package may also be known as a single in-line pin package (SIPP).
 SIP is not as common as the dual in-line package (DIP); however, SIPs have been used to package
multiple resistors and RAM chips with a common pin.
 By either using surface mounting device process or DIP process, SIPs collectively arrange RAM
chips on a small board.
 The board alone includes a single row of pin leads, which connect to a particular socket on a system
or a system-expansion board. SIPs are usually associated with memory modules.
 When compared with DIPs, which have a typical maximum I/O count of 64, SIPs usually consist of a
typical maximum I/O count of 24, but with lower package expenditures.
 Most small-form SIPs are parallel-array devices of common-value components, such as resistor
arrays, diodes, etc.
 The large-form SIPs are often hybrid circuits, such as oscillators, timers, etc.
 The body of SIP is either made of ceramic or plastic, with a lead count usually ranging between four
and 64.
 There are three SIP styles: molded, conformal coated and uncoated.

DIL

 Dual In-line Package, a DIP is a chip encased in hard plastic with pins running along the outside.
The picture is an example of a DIP found on a computer motherboard that has been soldered into
place. Below is an illustration of a comparison between a DIP and a SIP not connected to a circuit
board.
 A dual inline package switch (DIP switch) is a set of manual electrical switches designed to hold
configurations and select the interrupt request (IRQ). DIP switches are used in place of jumper
blocks. Most motherboards have several DIP switches or a single bank of DIP switches. Commonly,
DIP switches are used to hold configuration settings.
 Normally DIP switches are found on motherboards, expansion cards or auxiliary cards. They consist
of tiny rectangular components that contain parallel rows of terminals (terminal pins) and a
connecting mechanism to the circuit board.
 Programmable chips and self-configuring hardware have largely eliminated the need for DIP
switches. The trend is for settings to be accessed through a software control panel, allowing for
easier and more convenient changes.
 DIP switches were originally used to select the IRQ and memory addresses for ISA PC cards; they
were mostly mounted on printed circuit boards but were also used to store settings in many arcade
games and set security codes in garage door openers and wireless telephones.

There are many types of DIP switches. Two of the most common are:

 Slide and Rocker Actuator DIP Switches: These are typical on/off switches with SPST (single-
pole, single-throw) contacts. Each switch position represents a one-bit binary value.
 Rotary DIP Switch: This DIP switch has several electrical contacts which are rotated into alignment.
These switches can be small or large and provide a selection of switching combinations.

Less common DIP switches are SPDT (single-pole, double-throw), DPST (double-pole, single-throw),
DPDT (double-pole, double-throw), MPST (multiple-pole, single-throw) and MTSP (multiple-throw,
single-pole) DIP switches.


SIMM

As previously mentioned, the term SIMM stands for Single In-Line Memory Module. With SIMMs,
memory chips are soldered onto a modular printed circuit board (PCB), which inserts into a socket on the
system board.

The first SIMMs transferred 8 bits of data at a time. Later, as CPUs began to read data in 32-bit chunks, a
wider SIMM was developed, which could supply 32 bits of data at a time. The easiest way to differentiate
between these two different kinds of SIMMs was by the number of pins, or connectors. The earlier
modules had 30 pins and the later modules had 72 pins. Thus, they became commonly referred to as 30-
pin SIMMs and 72-pin SIMMs.

Another important difference between 30-pin and 72-pin SIMMs is that 72-pin SIMMs are 3⁄4 of an inch
(about 1.9 centimeters) longer than the 30-pin SIMMs and have a notch in the lower middle of the PCB.
The graphic below compares the two types of SIMMs and indicates their data widths.

Below is a graphic illustration of a 4 MB SIMM as well as a diagram pointing out the important features
of a SIMM. Today, the SIMM is rarely used and has been replaced by the DIMM.

DIMM

Dual In-line Memory Modules, or DIMMs, closely resemble SIMMs. Like SIMMs, most DIMMs install
vertically into expansion sockets. The principal difference between the two is that on a SIMM, pins on
opposite sides of the board are “tied together” to form one electrical contact; on a DIMM, opposing pins
remain electrically isolated to form two separate contacts.

168-pin DIMMs transfer 64 bits of data at a time and are typically used in computer configurations that
support a 64-bit or wider memory bus. Some of the physical differences between 168-pin DIMMs and 72-
pin SIMMs include: the length of module, the number of notches on the module, and the way the module
installs in the socket. Another difference is that many 72-pin SIMMs install at a slight angle, whereas
168-pin DIMMs install straight into the memory socket and remain completely vertical in relation to the
system motherboard. The illustration below compares a 168-pin DIMM to a 72-pin SIMM.

Comparison of a 72-pin SIMM and a 168-pin DIMM.

SO DIMMS

A type of memory commonly used in notebook computers is called SO DIMM or Small Outline DIMM.
The principal difference between a SO DIMM and a DIMM is that the SO DIMM, because it is intended
for use in notebook computers, is significantly smaller than the standard DIMM. The 72-pin SO DIMM is
32 bits wide and the 144-pin SO DIMM is 64 bits wide.

RIMM
 The full form of RIMM is Rambus Inline Memory Module. It is similar to a DIMM package but
uses different pin settings. A RIMM module consists of RDRAM chips. In a RIMM, the command,
address and control signals are buffered. RIMM memory was expensive and, in practice, slower than
DIMMs, which is why it never became widespread. RIMMs were used in personal computers and
workstations. RIMM variants have up to 184 pins.

Define the term parity bit and state its use in RAM.
 RAM parity checking is the storing of a redundant parity bit representing the parity (odd or
even) of a small amount of computer data (typically one byte) stored in random access memory,
and the subsequent comparison of the stored and the computed parity to detect whether a data
error has occurred.
 The parity bit was originally stored in additional individual memory chips; with the introduction
of plug-in DIMM, SIMM, etc. modules, they became available in non-parity and parity (with an
extra bit per byte, storing 9 bits for every 8 bits of actual data) versions.

 Parity refers to the redundant check bit that represents the even/odd condition of a certain unit
(usually one byte) of computer data stored in the RAM of a device. This is used to check and
double check for errors by comparing the stored and the computed parity. Parity bits are stored in
additional individual memory chips, with 9 bits for every 8 bits of actual data.
 Parity is also known as random access memory (RAM) parity.
 During the early years of computing, faulty memory was common, so a parity bit was required to
check for and detect errors in memory. A parity error causes the system to stop, which loses any
unsaved data; this is generally a better choice than continuing with corrupt data. To save cost,
logic parity RAM is sometimes used, which presents 8-bit RAM in the same style as 9-bit parity
RAM; this is why logic parity RAM is also sometimes known as “fake parity RAM.” Most modern
desktop computers no longer use simple parity memory. The sketch below illustrates how a parity
bit is computed and checked.
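As a rough illustration of the idea (a hedged sketch, not taken from any real memory controller; the
function and variable names are invented for this example), the following C program generates an
even-parity bit for one byte and detects a single-bit error on read-back:

/* Minimal sketch of even-parity generation and checking for one byte,
   as used conceptually by 9-bit parity RAM (8 data bits + 1 parity bit). */
#include <stdio.h>
#include <stdint.h>

/* Returns 1 if the byte has an odd number of 1 bits, else 0. Storing this
   value as the 9th bit makes the total count of 1s across all 9 bits even. */
static uint8_t parity_bit(uint8_t data) {
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (data >> i) & 1u;
    return p;
}

int main(void) {
    uint8_t stored_data   = 0x5A;              /* byte written to RAM  */
    uint8_t stored_parity = parity_bit(stored_data);

    /* Simulate a single-bit error: flip bit 3 before "reading back". */
    uint8_t read_data = stored_data ^ (1u << 3);

    if (parity_bit(read_data) != stored_parity)
        printf("Parity error detected: data corrupted in memory.\n");
    else
        printf("Parity check passed.\n");
    return 0;
}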


Describe the concept behind memory banks.

 A memory bank is the logical storage within computer memory that is used for storing
and retrieving frequently used data.
 It can be a part of standard RAM or the cache memory used for easily accessing and
retrieving program and standard data.
 A memory bank is primarily used for storing cached data, or data that helps a computer
access data much more quickly than standard memory locations.
 Typically, a memory bank is created and organized by the memory access controller and
the actual physical architecture of the memory module.
 In SDRAM and DDR RAM, the memory bank can consist of multiple columns and rows
of storage units spread across several chips.
 Each memory module can have two or more memory banks for program and data storage.

 In very simple terms, banking can be defined as placing chips in a number of rows and
columns at a given level of a memory hierarchy.

 In a banked memory system, the data is divided, or interleaved, across the memories so
that each memory contains only a fraction of the data.

 So, in a particular banked system, if one wants to retrieve a given datum, some of the bits of
the address are used to select the memory bank that contains it.

Following is an example of a banked memory system.


 In the example pictured, a 32-bit word is accessed in parallel across 4 banks, with each bank
supplying an 8-bit portion of the word simultaneously.

There's another example:

Consider a 32 x 16 memory, implemented with 8 x 8 chips. Each bank must appear to be 8 x 16:
8 entries of 16 bits each. There will be 32 / 8 = 4 memory banks, each with 16 / 8 = 2 chips.
In this arrangement, a 16-bit word will be split over two chips.
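A minimal C sketch of the arithmetic in this example (the variable names are purely illustrative)
computes how many banks and how many 8 x 8 chips per bank are needed:

/* Sketch of the 32 x 16 memory example: number of banks and chips per bank,
   assuming each bank must present 8 entries of the full 16-bit word width. */
#include <stdio.h>

int main(void) {
    int total_words = 32;  /* memory holds 32 words            */
    int word_bits   = 16;  /* each word is 16 bits wide        */
    int chip_words  = 8;   /* one chip stores 8 entries        */
    int chip_bits   = 8;   /* ...each 8 bits wide (8 x 8 chip) */

    int banks          = total_words / chip_words;  /* 32 / 8 = 4 banks */
    int chips_per_bank = word_bits / chip_bits;     /* 16 / 8 = 2 chips */

    printf("%d banks, %d chips per bank, %d chips total\n",
           banks, chips_per_bank, banks * chips_per_bank);
    return 0;
}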


Describe the following types of memory


 Conventional memory,
 Expanded memory,
 Upper memory
 Extended memory.

Conventional memory

 Conventional memory is contiguous memory directly used by applications running on any Intel
80x86 microprocessor that is running in real mode under unaugmented MS-DOS.
 Addressed from 0 to 640KB (up to 736KB with special device drivers and hardware). The
original 8088 processor could address up to 1MB (2^20 bytes, 20 being the number of address lines
which come out of the CPU) of memory directly; however, IBM chose to reserve the upper
384KB for ROM and other uses.
 It is the remaining 640 KB of IBM real-mode (base) memory, available to load and run
your applications.

Expanded memory

 A technique used to overcome the traditional 640 KB limit of real-mode addressing.
 Expanded memory blocks are addressed by switching (paging) them into the base memory range,
where the CPU accesses them in real mode.

 Expanded memory (EM) is an overarching or umbrella term for several technology variants that
do not necessarily work with each other or are directly related to each other. However, these
technologies were meant to solve the same problem, the 640 KB limit on usable memory for
programs in the DOS operating system. The most widely used expanded memory variant was the
Expanded Memory Specification (EMS) or the LIM EMS.
 Expanded memory refers to various methods for allowing the use of more than the default 640
KB limit imposed by the DOS operating system. The most widely used expanded memory system
was the specification jointly developed by Lotus Software, Intel and Microsoft, which was simply
called the Expanded Memory Specification. But to differentiate it from the others, it was
sometimes referred to as the LIM EMS to denote the developers. The first widely used version
was the EMS 3.2, which was able to support up to 8 MB of expanded memory.
 Another expanded memory technology was developed by AST Research, Ashton-Tate and
Quadram, the Extended EMS (EEMS) and competed directly with the LIM EMS 3.x. EEMS
allowed any 16 KB region in the lower RAM to be mapped to expanded memory, so long as it
was not directly associated with CPU interrupts or dedicated I/O memory used by video and
network cards. This meant that programs could be switched in and out of the extra RAM.
However, practically all features of EEMS were incorporated into LIM EMS.
 IBM also had their own expanded memory specification, which they called the Expanded
Memory Adapter (XMA). They used expansion boards that could be addressed by either an
expanded memory model or extended memory. These boards did not work with EMS out of the
box and the IBM DOS driver used for it was the XMAEM.SYS, but a later driver called
XMA2EMS.SYS gave the XMA boards EMS emulation.

Extended memory

 Additional memory beyond the 640 KB base, accessed using a protected mode of addressing, which
allows modern processors to handle up to 4GB of protected-mode memory.

 Extended memory is memory located past the first megabyte of the address space. Only 286s and
up support extended memory. With the minor exception of the HMA (see below), it is not
addressable by applications run in real mode. DOS applications make use of this memory to store
data, but not to execute code. XMS (the eXtended Memory Specification, set by Microsoft) permits
applications to allocate extended memory. It also copies data to and from extended memory and
conventional memory so that the application does not need to switch between modes. Like EMS,
XMS usually requires loading a device driver.
 Extended memory is limited to 15MB on 286 and 386SX models (15MB extended memory, plus
1MB conventional and upper memory, equals 16MB, or 2^24 bytes, 24 being the number of address
lines coming out of the CPU); it is limited to 4 gigabytes (2^32 bytes) for 386DX chips and up, as
the short sketch below illustrates.
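The address-line arithmetic used above (2^20 = 1 MB, 2^24 = 16 MB, 2^32 = 4 GB) can be checked with a
short C sketch (illustrative only):

/* An n-bit address bus can address 2^n bytes: 20 lines -> 1 MB,
   24 lines -> 16 MB, 32 lines -> 4 GB. */
#include <stdio.h>

int main(void) {
    int lines[] = {20, 24, 32};
    for (int i = 0; i < 3; i++) {
        unsigned long long bytes = 1ULL << lines[i];
        printf("%2d address lines -> %llu bytes (%llu MB)\n",
               lines[i], bytes, bytes / (1024ULL * 1024ULL));
    }
    return 0;
}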

Upper memory blocks

 Upper memory blocks refers to memory found between A0000-FFFFFh (640KB to 1024KB) in
the CPU address space. Upper memory blocks are not connected with conventional memory, and
cannot be directly used by applications which have not been made aware of its existence. Most
likely this memory is mapped into a region using 386 memory management hardware and
software or special chipsets on the motherboard, rather than physically existing there. Ordinary
applications cannot use this memory easily. Using special utilities (e.g., LOADHIGH, etc.), you
can place device drivers, TSRs, and certain DOS data areas (e.g., BUFFERS, FILES, FCBS) into
upper memory blocks. This helps free up conventional memory.

High memory area

 High memory area is an area slightly smaller than 64Kb starting at the 1024Kb boundary,
available only on 286 or higher computers. Due to the design of the memory addressing of Intel
microprocessors, you can address this space in real mode without switching to protected mode.
With appropriate software, you can load device drivers and TSRs into this region as well.
Quarterdeck's Desqview uses the high memory area, as do Windows 3.x and DOS 5.0 and up.

VCPI (Virtual Control Program Interface) and DPMI (DOS Protected Mode Interface)

 VCPI (Virtual Control Program Interface) and DPMI (DOS Protected Mode Interface) are API's
(Application Programming Interfaces) that make it possible for software to take advantage of the
special capabilities of the 386 chips and up without getting in each other's way. The domain of
these standards goes beyond simple memory management; they often include EMS and XMS
support as well. VCPI services are provided by QEMM/Desqview and most of the DOS extenders
currently on the market; DPMI is somewhat newer (and more capable) and is currently supported
primarily in Microsoft Windows.

Shadow ram

 Shadow ram is memory that is used to "shadow" ROM chips. Because ROM chips are typically
very slow to access, many system operations (i.e., cases where the applications call the system
BIOS or other ROMs) can be sped up by copying their contents to RAM, and then using special
hardware (like the C&T chipset, or the abilities of the 386 chips and up) to make the memory
appear where the ROM's should be. This strategy is being used less often, as the speed-up is
sometimes not worth the extra memory-management hassles for the end-user.

Cache ram

 Cache ram consists of very high-speed RAM chips which sit between the CPU and main memory. It stores
(i.e., caches) memory accesses by the CPU. Cache ram helps to alleviate the gap between the
speed of a CPU's megahertz rating and the ability of RAM to respond and deliver data. It reduces
the frequency that the CPU must wait for data from the main memory.

Explain each of the following memory techniques
 Memory interleave
 Memory paging
 Memory mapping

Interleaved Memory
 It is a technique for compensating the relatively slow speed of DRAM(Dynamic RAM). In this technique,
the main memory is divided into memory banks which can be accessed individually without any
dependency on the other.
 For example: If we have 4 memory banks(4-way Interleaved memory), with each containing 256 bytes,
then, the Block Oriented scheme(no interleaving), will assign virtual address 0 to 255 to the first bank, 256
to 511 to the second bank. But in interleaved memory, virtual address 0 will be in the first bank, 1 in
the second memory bank, 2 in the third bank and 3 in the fourth, and then 4 in the first memory bank
again.
 Hence, CPU can access alternate sections immediately without waiting for memory to be cached. There are
multiple memory banks which take turns for supply of data.
 Memory interleaving is a technique for increasing memory speed. It is a process that makes the system
more efficient, fast and reliable. For example: In the above example of 4 memory banks, data with virtual
address 0, 1, 2 and 3 can be accessed simultaneously as they reside in separate memory banks, hence we do
not have to wait for completion of one data fetch before beginning the next.
 An interleaved memory with n banks is said to be n-way interleaved. In an interleaved memory system,
there are still two banks of DRAM but logically the system seems one bank of memory that is twice as
large.
 In the interleaved bank representation below with 2 memory banks, the first long word of bank 0 is followed
by that of bank 1, which is followed by the second long word of bank 0, which is followed by the second
long word of bank 1 and so on.
 The following figure shows the organization of two physical banks of n long words. All even long words of
logical bank are located in physical bank 0 and all odd long words are located in physical bank 1.


Types:
There are two common methods for interleaving a memory:
2-Way Interleaved – two memory banks are accessed at the same time for reading and writing operations.
4-Way Interleaved – four memory banks are accessed at the same time.
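The following minimal C sketch (illustrative, not from the original text) shows how a 4-way interleaved
memory assigns consecutive addresses to banks, matching the example above where addresses 0, 1, 2 and 3
land in banks 0, 1, 2 and 3 and address 4 wraps back to bank 0:

/* Address-to-bank assignment in an n-way interleaved memory. */
#include <stdio.h>

int main(void) {
    int banks = 4;                       /* 4-way interleaved memory        */
    for (unsigned addr = 0; addr < 8; addr++) {
        unsigned bank   = addr % banks;  /* low-order bits pick the bank    */
        unsigned offset = addr / banks;  /* remaining bits index into bank  */
        printf("address %u -> bank %u, word %u within that bank\n",
               addr, bank, offset);
    }
    return 0;
}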

Memory Mapping
 Memory mapping is the translation between the logical address space and the physical memory.
The objectives of memory mapping are (1) to translate from logical to physical address, (2) to aid
in memory protection (q.v.), and (3) to enable better management of memory resources. Mapping
is important to computer performance, both locally (how long it takes to execute an instruction)
and globally (how long it takes to run a set of programs). In effect, each time a program presents a
logical memory address and requests that the corresponding memory word be accessed, the
mapping mechanism must translate that address into an appropriate physical memory location. The
simpler this translation, the lower the implementation cost and the higher the performance of the
individual memory reference.
 There are two fundamental situations to be handled. When the logical address space is smaller than
the physical address space (common to microcontrollers, microprocessors, and older mini- and
mainframe computers), mapping is needed to gain access to all of physical memory. When the
logical address space is larger than the physical address space, mapping is used to ensure that each
logical address generated corresponds to an existing physical memory cell.

Memory-mapping is a mechanism that maps a portion of a file, or an entire file, on disk to a
range of addresses within an application's address space. The application can then access files on
disk in the same way it accesses dynamic memory. This makes file reads and writes faster in
comparison with using functions such as fread and fwrite.
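As a hedged illustration of file memory-mapping on a POSIX system (the file name example.dat is
hypothetical and simply stands in for any file on disk), the following C sketch maps a file read-only and
then accesses its contents like an ordinary array instead of calling fread:

/* Minimal POSIX sketch: map a file into the process address space. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    int fd = open("example.dat", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only; the mapping behaves like memory. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    if (st.st_size > 0)
        printf("first byte of file: 0x%02x\n", (unsigned char)data[0]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}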

The transformation of data from main memory to cache memory is called mapping. There are 3
main types of mapping:

 Associative Mapping
 Direct Mapping
 Set Associative Mapping

Associative Mapping

The associative memory stores both the address and the data. The 15-bit address value is written as a
5-digit octal number and the 12-bit data word as a 4-digit octal number. A CPU address of 15 bits is placed
in the argument register and the associative memory is searched for a matching address.

Direct Mapping

The CPU address of 15 bits is divided into 2 fields. In this the 9 least significant bits constitute
the index field and the remaining 6 bits constitute the tag field. The number of bits in index field
is equal to the number of address bits required to access cache memory.
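A minimal C sketch of this address split (the sample address is arbitrary) extracts the 9-bit index and
6-bit tag from a 15-bit main-memory address:

/* Direct mapping: 15-bit address = 6-bit tag + 9-bit index. */
#include <stdio.h>

int main(void) {
    unsigned addr  = 0x5ABC & 0x7FFF;     /* any 15-bit address          */
    unsigned index = addr & 0x1FF;        /* low 9 bits -> cache index   */
    unsigned tag   = (addr >> 9) & 0x3F;  /* high 6 bits -> tag          */

    printf("address = 0x%04X, index = %u, tag = %u\n", addr, index, tag);
    /* On a lookup, the index selects the cache word and the stored tag is
       compared with this tag to decide hit or miss. */
    return 0;
}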


Set Associative Mapping

The disadvantage of direct mapping is that two words with same index address can't reside in
cache memory at the same time. This problem can be overcome by set associative mapping.

In this we can store two or more words of memory under the same index address. Each data
word is stored together with its tag and this forms a set.

Memory Paging
 Divides system RAM into small groups or pages which can then be loaded into memory
for processing

 Paging refers to memory allocation. In a paging memory management scheme, data are
stored and managed in identical consistent blocks referred to as 'pages.’
 Paging can be important in memory storage for hardware systems because it allows more
versatility than some traditional processes.
 Older paradigms involved putting programs into contiguous or linear storage, which
caused problems with disk fragmentation and other issues. Users would have to run
defragmentation utilities to optimize hard disk space.
 With the emergence of virtual memory and virtualized systems, paging plays an even
more developed role. Paging can be part of the memory management storage setup that
uses logical or virtual systems over physical random access memory storage designs.
 Experts also often contrast paging with segmentation, where more broad-based protocols
involve a segment for each process. Engineers look at how data goes from a CPU to
memory, and how to make that process more productive and efficient, which is where
paging can factor into a more forward-looking design; a simple address-splitting sketch follows below.
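As a simple illustration (a sketch assuming a 4 KB page size, which is typical but not stated in the text
above), the following C program splits a virtual address into a page number and an offset within the page:

/* Paging: virtual address = page number * page size + offset. */
#include <stdio.h>

int main(void) {
    unsigned long page_size = 4096;         /* 4 KB pages (assumed)       */
    unsigned long vaddr     = 0x12345;      /* example virtual address    */

    unsigned long page_number = vaddr / page_size;   /* which page        */
    unsigned long offset      = vaddr % page_size;   /* where in the page */

    printf("virtual 0x%lX -> page %lu, offset 0x%lX\n",
           vaddr, page_number, offset);
    /* The OS page table maps page_number to a physical frame; the offset
       is carried over unchanged into the physical address. */
    return 0;
}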


List procedures to be taken when additional memory is installed in a PC and state the function of
device drivers used to realize additional memory (memory managers)

Research**************


Define the access time and cycle time of memory devices.

Memory access time

 Access time is the amount of time it takes the processor to read data, instructions, and information
from memory. A computer’s access time directly affects how fast the computer processes data.
Accessing data in memory can be more than 200,000 times faster than accessing data on a hard
disk because of the mechanical motion of the hard disk. Today’s manufacturers use a variety of
terminology to state access times. Some use fractions of a second, which for memory are measured in
nanoseconds. A nanosecond (abbreviated ns) is one billionth of a second. A nanosecond is
extremely fast. Other manufacturers state access times in MHz; for example, 800 MHz RAM.
 While access times of memory greatly affect overall computer performance, manufacturers and
retailers usually list a computer’s memory in terms of its size, not its access time.

Memory Cycle Time

 Cycle time, measured in nanoseconds, is the time from the start of one RAM access to the point at
which the next RAM access can start. Access time is sometimes used as a synonym, but IBM
distinguishes the two: cycle time covers both locating the required place in memory and the transfer
time for that information. It should not be confused with clock cycle or clock speed, which refer to
the number of cycles per second at which a processor is paced.

 The difference between access time and cycle time of memory is that access time is the amount
of time it takes the processor to read data, instructions, and information from memory, while
cycle time is the time from the start of one RAM access to the point at which the next RAM
access can start.
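Since manufacturers quote memory speeds either in nanoseconds or in MHz, the relationship
time (ns) = 1000 / rate (MHz) converts between the two. A small C sketch (the ratings are example
values only):

/* Convert a speed rating in MHz to the corresponding cycle time in ns. */
#include <stdio.h>

int main(void) {
    double rates_mhz[] = {100.0, 133.0, 800.0};   /* example ratings */
    for (int i = 0; i < 3; i++)
        printf("%6.0f MHz  ->  %.2f ns per cycle\n",
               rates_mhz[i], 1000.0 / rates_mhz[i]);
    return 0;
}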


EXTERNAL DATA BUS TYPES (EXPANSION BUS)
 The external bus, or expansion bus, is made up of the electronic pathways that connect the
different external devices, such as printer etc., to the computer.
 An expansion bus is a computer bus which moves information between the internal hardware of a
computer system (including the CPU and RAM) and peripheral devices. It is a collection of wires
and protocols that allow for the expansion of a computer.


Describe and compare common external bus types such as:

ISA
Stands for "Industry Standard Architecture." ISA is a type of bus used in PCs for adding
expansion cards. For example, an ISA slot may be used to add a video card, a network card, or
an extra serial port. The original 8-bit version of ISA uses a 62-pin connection and supports clock
speeds of 8 and 33 MHz. 16-bit ISA uses 98 pins and supports the same clock speeds.

The original 8-bit version of ISA was introduced in 1981 but the technology did not become
widely used until 1984, when the 16-bit version was released. Two competing technologies --
MCA and VLB -- were also used by some manufacturers, but ISA remained the most common
expansion bus for most of the 1980s and 1990s. However, by the end of the twentieth century,
ISA ports were beginning to be replaced by faster PCI and AGP slots. Today, most computers
only support PCI and AGP expansion cards.


What does Industry Standard Architecture Bus (ISA Bus) mean?


An Industry Standard Architecture bus (ISA bus) is a computer bus that allows additional expansion cards
to be connected to a computer's motherboard. It is a standard bus architecture for IBM compatibles.
Introduced in 1981, the ISA bus was designed to support the Intel 8088 microprocessor for IBM’s first-
generation PC.

In the late 1990s the faster Peripheral Component Interconnect (PCI) bus arrived. Soon afterwards, use of
the ISA bus began to diminish, and most IBM motherboards were designed with PCI slots. Although there
are still a few motherboards being made with ISA slots, these are generally referred to as legacy bus
motherboards.
The ISA bus provides direct memory access using multiple expansion cards on a memory channel
allowing separate interrupt request transactions for each card. Depending on the version, the ISA bus can
support a network card, additional serial ports, a video card and other processors and architectures,
including:

 IBM PC with Intel 8088 microprocessor


 IBM AT with Intel 80286 processor (1984)
 Extended Industry Standard Architecture (1988)

The ISA bus was originally synchronous with the CPU clock. It was later upgraded to use high-level
buffering, which interfaced the chipsets with the CPU. The ISA bus also supported bus mastering, which
could directly access only the first 16 MB of main memory.


EISA

Extended Industry Standard Architecture (EISA) is a bus architecture that extends the Industry
Standard Architecture (ISA) from 16 bits to 32 bits. EISA was introduced in 1988 by the Gang of
Nine - a group of PC manufacturers.

EISA was designed to compete with IBM’s Micro Channel Architecture (MCA) - a patented 16
and 32-bit parallel computer bus for IBM’s PS/2 computers. EISA extended the advanced
technology (AT) bus architecture and facilitated bus sharing between multiple central processing
units (CPU).

EISA is also known as Extended ISA.

The EISA bus is compatible with older ISA buses with 8-bit or 16-bit data paths. The 32-bit data
path slots are the same width as 16-bit ISA slots. However, EISA bus slots are deeper than
16-bit slots because 32-bit circuit board edge connectors have long fingers deep inside the EISA
slot that connect to the 32-bit pins. The 16-bit circuit board partly extends to the 16-bit pins with
a shallow connection.

Enhanced 4 GB memory extended EISA’s 32-bit bus market, but the MCA bus was more
popular. Though costly, EISA adapted easily to older ISA circuit boards. Thus, EISA was
primarily used for high-end servers requiring heavy bandwidth. Unlike MCA, EISA accepts
IBM’s older XT system architecture and ISA circuit boards. EISA connectors are 16-bit superset
connectors for ISA system boards, providing more signals and enhanced performance.

The main difference between MCA and EISA is that EISA/ISA buses are backward compatible.
An EISA PC is compatible with older EISA/ISA expansion cards, but only MCA expansion
cards may be used on an MCA bus.

EISA has 32-bit direct memory access (DMA), central processing unit (CPU) and bus master
devices. EISA also has improved data transfer rates (DTR) of up to 33 MB/s, automatic
configuration, synchronous data transfer protocol (SDTP) and a compatible structure for older
ISA buses with 8- or 16-bit data paths.

Most EISA cards were designed for network interface cards (NIC) or small computer system
interfaces (SCSI). EISA is also accessible via several non IBM-compatible PCs, such as the HP
9000, MIPS Magnum, HP Alpha Server and SGI Indigo2.

Eventually, PCs required faster buses for higher performance. Faster expansion cards, like
LocalBus or Video Electronics Standards Association (VESA), were introduced, and there was
no longer an EISA card market.

EISA is the term used to describe the expansion slots and related circuitry (the expansion bus)
found in some higher-priced PCs. If you see an ad for an EISA computer, it's talking about a PC
with EISA slots.


MCA

Micro Channel Architecture (MCA) is an interface between a computer and its expansion cards
and their associated devices. MCA was a distinct break from previous bus architectures such as
Industry Standard Architecture. The pin connections in MCA are smaller than in other bus
interfaces. For this and other reasons, MCA does not support other bus architectures.
Although MCA offers a number of improvements over other bus architectures, its proprietary,
nonstandard aspects did not encourage other manufacturers to adopt it.


It has influenced other bus designs and it is still in use in PS/2s and in some minicomputer
systems. The MCA bus was IBM's attempt to replace the ISA bus with something "bigger and
better". When the 80386DX was introduced in the mid-80s with its 32-bit data bus, IBM decided
to create a bus to match this width. MCA is 32 bits wide, and offers several significant
improvements over ISA. The MCA bus has some pretty impressive features considering that it
was introduced in 1987, a full seven years before the PCI bus made similar features common on
the Pc. In some ways it was ahead of its time, because back then the ISA bus really wasn't a
major performance limiting factor:

32 Bit Bus Width: The MCA bus features a full 32 bit bus width, the same width as the VESA
and PCI local buses. It had far superior throughput to the ISA bus.

Bus Mastering: The MCA bus supported bus mastering adapters for greater efficiency,
including proper bus arbitration.

Plug and Play: MCA automatically configured adapter cards, so there was no need to fiddle
with jumpers. This was eight years before Windows 95 brought PnP into the mainstream!

MCA had a great deal of potential. Unfortunately, IBM made two decisions that would doom
MCA to utter failure in the marketplace. First, they made MCA incompatible with ISA, which
means ISA cards will not work at all in an MCA system, one of the few categories of PCs for
which this is true. The PC market is very sensitive to backwards-compatibility issues, as
evidenced by the number of older standards that persist to this day. Second, IBM decided to
make the MCA bus proprietary. It in fact did this with ISA as well, however in 1981 IBM could
afford to flex its muscles in this manner, while by this time the clone makers were starting to
come into their own and weren't interested in bending to IBM's wishes.

These two factors, combined with the increased cost of MCA systems, led to the demise of the
MCA bus. With the PS/2 now discontinued, MCA is dead on the PC platform, though it is still
used by IBM on some of its RISC 6000 UNIX servers. It is one of the classical examples in the
field of computing of how non-technical issues often dominate over technical ones. But one of
MCA's disadvantages is that it has poor DMA controller circuitry.

Features of Micro Channel Architecture

• I/O data transfers of 8-, 16-, 24-, or 32-bits within a 64KB address space (16-bit
address width).
• Memory data transfers of 8-, 16-, 24-, or 32-bits within a 16MB (24-bit address
width) or 4GB (32-bit address width) address space.
• An arbitration procedure that enables up to 15 devices and the system master to bid
for control of the channel.
• A basic transfer procedure that allows data transfers between masters and slaves.
• A direct memory access (DMA) procedure that supports multiple DMA channels.
Additionally, this procedure allows a device to transfer data in bursts.
• An optional streaming data procedure that provides a faster data-transfer rate than the
basic transfer procedure and allows 64-bit data transfers.
• Address- and data-parity enable and detect procedures.
• Interrupt sharing on all levels.
• A flexible system-configuration procedure that uses programmable registers.
• An adapter interface to the channel using:
• A 16-bit connector with a 24-bit address bus and a 16-bit data bus
• A 32-bit connector with a 32-bit address bus and a 32-bit data bus
• An optional matched-memory extension
• An optional video extension.
• Support for audio signal transfer (audio voltage-sum node).
• Support for both synchronous and asynchronous data transfer.
• An exception condition reporting procedure.
• Improved electromagnetic characteristics.


MACINTOSH (Nu-Bus)


A Nubus is a 32-bit parallel computer bus. It was created by the Massachusetts Institute of Technology
and originated from the NuMachine workstation project, which designed workstations to interface with
LANs using microprocessors. The MIT laboratory team for the NuMachine worked in collaboration with
Western Digital.

The original Nubus and NuMachine were designed for the Western Digital NuMachine and for the Lisp
Machines Incorporated LMI-Lambda. The NuMachine was used in components by Texas Instruments,
Next, Incorporated (NeXT) and Apple Computer. In 1983 the NuMachine was bought by Texas
Instruments. It was replaced by the TI Explorer in 1985.

At the time, Nubus was considered a significant advancement, since most computer interfaces used an 8-
bit bus. Today, Nubus is no longer used and was replaced mostly by the peripheral component
interconnect (PCI) and other parallel buses.

Example of a NuBus graphics card, a Radius PrecisionColor Pro 8/24xj. This is a "half-length" card,
with a maximum length of 7 inches. The maximum length for full-size NuBus cards is 12 inches.

The Nubus card uses pins instead of an edge connector, which is used on a PCI or industry standard
architecture card.

Not only did the Nubus introduce a 32-bit bus, but it had an ID structure permitting cards to be identified
by the host during booting. At the time, the majority of buses used pins on the CPU, which connected to
the backplane. This structure corresponded to data standards and signaling, which included configuring
the memory and the card, interrupts and other time consuming tasks. In fact, Nubus was one of the first
plug-and-play designs.

However, the Nubus architecture required a controller chip between the I/O chips on the card and the bus.
This scheme required additional cost and complexity compared to the simple bus systems supported by
minimal I/O chips.

Nubus cards can be designed as either a master or slave. A master manages bus requests for bus mastery
and can secure the bus from access by other Nubus devices for an allotted time. The slave responds to
requests, transmits non-master requests and does not need support for the entire 32-bit transfer.

A 24-bit Nubus card is utilized on the Macintosh II series. This is called 24-bit aliasing and supports
address lines 0 to 23. Nubus was also chosen for NeXT Computer modules, but it had a different printed
circuit board design.


VESA

 Stands for "VESA Local Bus." (VESA stands for "Video Electronics Standards Association").
 The VLB, or VL-bus is a hardware interface on the computer's motherboard that is attached to an
expansion slot.
 By connecting a video expansion card to the VLB, you can add extra graphics capabilities to your
computer. The interface supports 32-bit data flow at up to 50 MHz.
 Though the VLB architecture was popular in the early 1990s, it has since been replaced by the
newer and faster PCI and AGP slots.

 VESA Local Bus (sometimes called the VESA VL bus) is a standard interface between your
computer and its expansion slot that provides faster data flow between the devices controlled by
the expansion cards and your computer's microprocessor.

 A "local bus" is a physical path on which data flows at almost the speed of the microprocessor,
increasing total system performance. VESA Local Bus is particularly effective in systems with
advanced video cards and supports 32-bit data flow at 50 MHz. A VESA Local Bus is
implemented by adding a supplemental slot and card that aligns with and augments an Industry
Standard Architecture expansion card. (ISA was the most common expansion slot at the time.)


LOCAL BUS
Also called the "system bus," a local bus is the pathway between the CPU, memory and peripheral
controller chips. The term was popular in the early 1990s with the introduction of the VESA local bus.

In computer architecture, a local bus is a computer bus that connects directly, or almost directly,
from the CPU to one or more slots on the expansion bus. The significance of direct connection to
the CPU is avoiding the bottleneck created by the expansion bus, thus providing fast throughput.
There are several local buses built into various types of computers to increase the speed of data
transfer. Local buses for expanded memory and video boards are the most common.

VESA Local Bus is an example of a local bus design.

Although VL-Bus was later succeeded by AGP, it is not correct to categorize AGP as a local bus.
Whereas VL-Bus operated on the CPU's memory bus at the CPU's clock speed, an AGP
peripheral runs at specified clock speeds that run independently of the CPU clock (usually using
a divider of the CPU clock). The concept of Local Bus was pioneered by Dado Banatao.

Peripheral controller cards plug into slots on the local bus.


PCI

A Peripheral Component Interconnect Bus (PCI bus) connects the CPU and expansion boards
such as modem cards, network cards and sound cards. These expansion boards are normally
plugged into expansion slots on the motherboard.

The PCI local bus became the general standard for the PC expansion bus, having replaced the Video
Electronics Standards Association (VESA) local bus and the Industry Standard Architecture
(ISA) bus. PCI itself has since been largely superseded by PCI Express (PCIe) inside the PC, while
many peripherals now attach externally over USB.

This term is also known as conventional PCI or simply PCI.

PCI requirements include:

 Bus timing
 Physical size (determined by the wiring and spacing of the circuit board)
 Electrical features
 Protocols

PCI specifications are standardized by the Peripheral Component Interconnect Special Interest
Group.

Today, many PCs have few expansion cards, with most devices integrated into the motherboard. The
PCI bus is still used for specific cards, but for many practical purposes external USB devices have
taken over roles that PCI expansion cards once filled.

During system startup the operating system scans all PCI buses to obtain information about
the resources needed by each device. The OS communicates with each device and assigns
system resources, including memory, interrupt requests and allotted input/output (I/O) space.
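
As a rough, hedged illustration of what the result of this enumeration looks like from software: on a
Linux system the devices discovered on the PCI bus are exposed under /sys/bus/pci/devices, and a short
script can list each device's vendor and device IDs. This is a Linux-specific sketch for illustration only,
not part of the PCI specification itself.

    # List PCI devices as enumerated by a Linux system (sketch; assumes /sys/bus/pci/devices exists).
    import os

    SYSFS_PCI = "/sys/bus/pci/devices"

    def read_id(device_dir, name):
        # Each device directory exposes small text files such as 'vendor' and 'device'.
        with open(os.path.join(device_dir, name)) as f:
            return f.read().strip()

    if __name__ == "__main__":
        for dev in sorted(os.listdir(SYSFS_PCI)):      # e.g. '0000:00:02.0'
            dev_dir = os.path.join(SYSFS_PCI, dev)
            vendor = read_id(dev_dir, "vendor")        # e.g. '0x8086' (Intel)
            device = read_id(dev_dir, "device")
            print(f"{dev}  vendor={vendor}  device={device}")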


PCIe

Peripheral Component Interconnect Express, better known as PCI Express (and abbreviated PCIe
or PCI-E), is a computer expansion card standard. PCI-E is used in motherboard-level
connections and as an expansion card interface. The newer standard for personal computers is
called PCIe 3.0. One of the improvements of PCI-E over its predecessors is a new topology
allowing for the faster exchange of data.

The new PCI-E 3.0 technology is different from the former PCI, PCI-X and AGP boards in many
ways:

 Communication consists of data and status-message traffic being packetized and depacketized.
 Data is sent via paired point-to-point serial links, called lanes, allowing data movement in both
directions simultaneously and allowing more than one pair of devices to communicate
simultaneously.
 PCI-E slots contain from one to 32 lanes in powers of 2 (1, 2, 4, 8, 16, 32). Each “lane” is a pair of
data transfer lines, one for transmitting and one for receiving, and is composed of 4 wires. The
number of lanes in a slot is denoted by an x before it, e.g. x16 designates a 16-lane PCI-E card.
 Higher bandwidth is provided by channel grouping – using multiple lanes for a single device.
 Serial buses can be clocked faster than parallel buses because a parallel bus requires all of its bits
to arrive at the destination simultaneously (any skew between the lines limits how fast the bus can
be clocked). With a serial bus there is no requirement for multiple signals to arrive simultaneously.
 PCI-E follows a layered protocol composed of 3 layers: a transaction layer, a data link layer, and
a physical layer.

The following are the rates of transmission and bandwidth for the various PCI-E buses; a short
illustrative sketch after the list shows how the per-lane figure scales. These rates are for total
transmission in both directions, 50% being in either direction:

 PCI Express 1x 500 MB/s
 PCI Express 2x 1000 MB/s
 PCI Express 4x 2000 MB/s
 PCI Express 8x 4000 MB/s
 PCI Express 16x 8000 MB/s (x16 cards are the largest size in common use.)
 PCI Express 32x 16000 MB/s
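
The scaling in the list above is simply the per-lane rate multiplied by the number of lanes. The sketch
below is only an illustrative calculation using the figures quoted above (500 MB/s per lane, counted in
both directions); it is not taken from the PCI Express specification.

    # PCI Express bandwidth scales with lane count (figures from the list above:
    # 500 MB/s per lane, total for both directions).
    PER_LANE_MB_S = 500

    for lanes in (1, 2, 4, 8, 16, 32):
        total = PER_LANE_MB_S * lanes
        per_direction = total / 2          # 50% of the traffic in each direction
        print(f"x{lanes:<2}: {total} MB/s total, {per_direction:.0f} MB/s each way")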

By comparison a PCI card has bandwidth of 132 MB/s; AGP 8x: 2,100 MB/s; USB 2.0: 60
MB/s; IDE: 100 to 133 MB/s; SATA: 150 MB/s; SATA II: 300 MB/s; Gigabit Ethernet: 125
MB/s; and Firewire 800: approx. 100 MB/s.


USB

A Universal Serial Bus (USB) is a common interface that enables communication between
devices and a host controller such as a personal computer (PC). It connects peripheral devices
such as digital cameras, mice, keyboards, printers, scanners, media devices, external hard drives
and flash drives. Because of its wide variety of uses, including support for electrical power, the
USB has replaced a wide range of interfaces like the parallel and serial port.

Universal serial bus (USB) is a connector between a computer and a peripheral device such as a
printer, monitor, scanner, mouse or keyboard. It is part of the USB interface, which includes
types of ports, cables and connectors.

The USB interface was developed to simplify the connection between computers and peripheral
devices. Prior to the USB interface, peripheral devices had a multitude of connectors. The USB
interface provides various benefits, including plug-and-play, increased data transfer rate (DTR),
reduced number of connectors, and addressing usability issues with existing interfaces.

The USB interface was developed in the mid-1990s and is standardized by the USB
Implementers Forum (USB-IF). Originally, the standards defined two types of connectors,
known as A-type and B-type. Both types use 4 flat pins with the first pin (the +5V supply
voltage) and fourth pin (the supply ground) slightly longer to first connect the power supply.

This ensures that power and ground are connected before the data pins, which substantially lowers
the possibility of the data lines being exposed to supply voltage. In both types, the connection is
held in place by friction.

A-Type connectors are used on devices that provide power, such as a computer, and have a flat
and rectangular interface. They provide a downstream connection. B-Type connectors are used
on devices receiving power such as a peripheral device. They have slightly beveled exterior
corners on the top ends and are somewhat square in shape. They provide an upstream
connection. Although there have been several revisions of the USB connector since the original
standards were implemented, the majority of USB products still use A and B connector
interfaces.

The USB connector is keyed so that it can only be inserted the correct way round; it is impossible to
connect it upside-down. The USB icon is imprinted on the top side of the plug, making visual
alignment easy. Additionally, USB standards specify that a connector must support a compliant
extension cable or fit within size restrictions.

There are several versions of USB connectors, which vary in their DTRs: USB 1.0 with DTR of
1.5 Mbps and 12 Mbps, USB 2.0 with DTR of 480 Mbps, and USB 3.0, or SuperSpeed, with
DTR up to 5 Gbps.
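
To put these data transfer rates in perspective, the sketch below converts the quoted rates from
megabits per second to megabytes per second and estimates how long a 700 MB transfer would take at
each rate. This is an illustrative calculation only; it ignores protocol overhead, so real-world transfers
are slower.

    # Rough comparison of USB generations using the rates quoted above
    # (protocol overhead ignored).
    RATES_MBPS = {"USB 1.0 (low speed)": 1.5,
                  "USB 1.1 (full speed)": 12,
                  "USB 2.0 (Hi-Speed)": 480,
                  "USB 3.0 (SuperSpeed)": 5000}

    FILE_SIZE_MB = 700          # e.g. one CD's worth of data

    for name, mbps in RATES_MBPS.items():
        mb_per_second = mbps / 8                    # 8 bits per byte
        seconds = FILE_SIZE_MB / mb_per_second
        print(f"{name:22s} {mb_per_second:7.2f} MB/s  ~{seconds:8.1f} s for {FILE_SIZE_MB} MB")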

The USB interface replaced a wide range of previous interfaces, such as serial and parallel ports
and individual power chargers for portable devices. USB connectors are now commonly used
with devices like network adapters and portable media players as well as video game consoles
and smartphones. Smaller USB connector variants (such as Mini and Micro USB) are used on compact devices.


PCMCIA (laptops)
The PC Card bus was developed in 1989 by the PCMCIA (Personal Computer Memory Card
International Association, which is the name sometimes given to the bus) consortium in order to
extend current peripheral equipment connectivity on mobile computers.

A PC card slot is an expansion slot often found in notebook computers that allows for the easy
and quick addition of a host of different devices. Originally designed for adding memory to
portable computer systems, the PC card standard has been updated several times since its
original creation.

PC cards are Plug and Play devices that are often hot-swappable (i.e., cards may be removed and
inserted with the computer power turned on, without rebooting) under Mac OS and Windows 95
and beyond. (Windows NT, however, has more limited support for PC cards, and you cannot
change cards on the fly.) Many systems will give a familiar beep sound from the computer's
speaker when you remove or insert a card.


Technical Characteristics

PCMCIA peripheral equipment comes in the shape of a credit card (54mm by 85mm) and has a
68-pin connector.

There are three form factors that correspond to three standard thicknesses:

Type               Width (mm)   Length (mm)   Thickness (mm)
Type I PC Card     54           85            3.3
Type II PC Card    54           85            5.0
Type III PC Card   54           85            10.5

Physical Characteristics

The PCMCIA specification 2.0 release in 1991 added protocols for I/O devices and hard disks. The 2.1
release in 1993 refined these specifications, and is the standard around which PCMCIA cards are built
today.

PCMCIA cards are credit card size adapters which fit into PCMCIA slots found in most handheld and
laptop computers. In order to fit into these small size drives, PCMCIA cards must meet very strict
physical requirements as shown in the figure below. There are three types of PCMCIA cards, Type I
generally used for memory cards such as FLASH and STATIC RAM; Type II used for I/O peripherals
such as serial adapters, parallel adapters, and fax-modems (this is the type of card Quatech manufactures);

and Type III which are used for rotating media such as hard disks. The only difference in the physical
specification for these cards is thickness.

PCMCIA Card Physical Characteristics

Interface:      68 pins
I/O connection: manufacturer determined
Length:         85.6 mm
Width:          54.0 mm
Thickness:      dependent on card type (see the form-factor table above)

Card & Socket Services

Functionally, a PCMCIA card can perform any memory or I/O operation so long as it adheres to the
PCMCIA interface structure. As shown in the figure below, PCMCIA is a tiered system which uses a set
of device independent drivers to integrate any type of PCMCIA card into the host system. Socket
Services, the lowest tier in the architecture, provides a universal software interface for the PCMCIA
sockets themselves. Socket Services manages all the sockets installed in a system so that resources can be
properly allocated. It is also the means by which individual cards access registers on the host system.
Socket Services can be added to a computer as a device driver, or it can be built into PC BIOS.

Directly above Socket Services in the hierarchy sits Card Services. Card Services is an application
programming interface (API) which permits multiple software programs to work with multiple PCMCIA
cards. For instance, Card Services will allow both Internet applications and fax applications to use an
installed PCMCIA card modem. Like Socket Services, Card Services can be implemented as a device
driver. It can also be built into a computer's operating system, as it is in Windows 95/98/NT/2000/XP and
OS/2.


16-Bit PCMCIA

PCMCIA specification 2.1 provides for a 16-bit bus interface, has a maximum clock speed of 10MHz and
is capable of speeds up to 20 Mbytes/sec. The 2.1 spec does not provide for bus mastering, DMA, or
multiple interrupts (however, Quatech's interrupt-sharing software drivers allow sharing the one interrupt
among multiple I/O devices). While PCMCIA provides only a minimal performance improvement over ISA,
and does not come close in speed to PCI, it does provide considerably more flexibility than either of the
others.

The two most important features of PCMCIA are its Plug and Play and Hot Swapping capabilities. As
with PCI, PCMCIA cards are truly Plug and Play--you simply insert them, and instructions coded into
chips on the card provide the information a host needs to configure the cards and appropriately allocate
resources. Not only are there no jumpers or switches to set, users never even see the inside of a PCMCIA
card. It is simply inserted into the drive, and the system does the rest. (An open PCMCIA card is pictured
below, to show what you've been missing.)


Quatech's QSP-100 PCMCIA card Uncovered

This configuration procedure, along with the fact that PCMCIA cards are not connected directly to the
motherboard, but are easily inserted into and ejected from a PCMCIA drive, allows the cards to be hot
swappable. This means that the system need not be shut down then rebooted to add, remove, or exchange
cards. Thus, you could insert a PCMCIA scanner, scan a drawing of your newest board layout, then
remove the scanner and insert a modem and e-mail the scan to a manufacturer for mass production. While
this might not be very important for desktop PCs with large numbers of expansion slots, it is vitally
important for laptops with limited resources and usually only two PCMCIA slots. It becomes even more
important for handheld computers which often have only one PCMCIA slot and one serial port.

32-Bit CardBus

In 1995 the PCMCIA 2.1 specification was enhanced to provide for 32-bit operation. The new
architecture, called CardBus, was closely based on the PCI bus, and strove to provide the same
improvements over the 16-bit PCMCIA card as PCI did over ISA. As such, CardBus provides for 33MHz
operation and correspondingly increased data transfer. It also introduces DMA and bus mastering to
PCMCIA based systems, which can markedly increase performance. Realizing that there are still many
16-bit PCMCIA card peripherals in the marketplace, CardBus is fully backward compatible with the older
card design.

Because of this backward compatibility, Quatech has decided not to redesign our serial data
communication PCMCIA cards for CardBus or any of the other newer PCMCIA-based busses such as
CardBay, as doing so would limit the number of systems that could use our cards. As discussed with PCI,
because of the limitations imposed by serial and parallel transfers, there would be no noticeable
performance gains for Quatech serial cards under CardBus.

PCMCIA for Data Communication

Though PCMCIA card use is not limited to portable computers (see PCMCIA drives for desktop PCs),
there are few instances where it is the best choice for data communication in desktop computers. It can,
however, be a very useful choice for sharing peripherals that were purchased primarily for use in a laptop,
such as a wireless modem or a scanner. In desktops, PCMCIA is better suited to adding extra storage space
via hard-disk cards, or to transferring large files from portable systems. For laptop and handheld
computers, however, PCMCIA provides a way to connect a varied array of peripherals to the system, and to
share those devices with a desktop computer.
Clearly there is a size advantage to PCMCIA for portable applications. The cards are small, light, and
have low power requirements. They are an ideal interface choice for peripherals that have been scaled
down for portable use. Further, the ability to Hot Swap PCMCIA cards provides for the flexibility needed
to use multiple peripherals with only one or two slots. USB which also provides Hot Swapping is another
alternative for portable applications. However, to use USB the peripheral devices in your system must be
replaced with bus specific devices--an expensive prospect. Further, many USB products must be powered
by the computer itself, thereby reducing the time a laptop can function on battery alone. Or, if too much
power is required, they must be plugged-in, making them less attractive portable solutions. So if cost and
power conservation are your primary concerns, PCMCIA is still the best, most flexible choice for portable
applications.

There are USB adapters available, (such as Quatech's USB to Serial adapters) that perform the same
function as serial PCMCIA cards--essentially permitting standard RS-232 or RS-422/485 peripherals to
be connected to a PC that lacks native serial ports. Such products exist for parallel ports as well. With
parallel ports in particular, (but in some cases with serial ports ), PCMCIA is a much more reliable
solution. Many software applications designed to work with a computer's native ports have much more
success using a PCMCIA based parallel port than a USB-based parallel port. This may be because
PCMCIA was based on a traditional plug-in board bus, and thus the add-in ports implemented via
PCMCIA are more similar to a native port than those implemented via USB which has a completely
different bus architecture.

PCMCIA Specs
Bus clock signal:               10 MHz
Bus width:                      16-bit
Theoretical max transfer rate:  20 Mbytes/sec (160 Mbits/sec)
Advantages:                     Ideal for portable systems, hot swappable, Plug & Play
Disadvantages:                  Lower speed; needs a special drive for use in desktop PCs


SUMMARY
Inside computers, there are many internal components. In order for these components to
communicate with each other they make use of wires that are known as a ‘bus’.

A bus is a common pathway through which information flows from one computer component
to another. This pathway is used for communication purpose and it is established between two or
more computer components. We are going to check different computer bus architectures that
are found in computers.

DIFFERENT TYPES OF COMPUTER BUSES


Address bus

 An address bus is a computer bus (a bus is a series of lines connecting two or more
devices to each other) that is used to specify an address. Many modern PCs have 36 or more
address lines, which allows them to access 64 GB of memory or more, as the sketch below shows.
The width of the address bus determines how much memory a system can address. The address bus is
used by the CPU or DMA (direct memory access) controller to locate an address.
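
The relationship between the number of address lines and addressable memory is just a power of two.
A minimal illustrative calculation (not taken from any bus specification):

    # Addressable memory doubles with each extra address line: capacity = 2**lines bytes.
    def addressable_gb(address_lines):
        return 2 ** address_lines / 2 ** 30      # bytes -> gigabytes (GiB)

    print(addressable_gb(24))   # 0.015625 GB (16 MB, the 16-bit ISA/AT limit)
    print(addressable_gb(32))   # 4.0 GB  (the classic 32-bit limit, as on EISA)
    print(addressable_gb(36))   # 64.0 GB (36 address lines, as mentioned above)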

Control bus

 A control bus is another type of bus that is used by the processor to communicate with
other devices in the computer; this occurs through connections such as cables or
printed circuits. Although the data bus carries the actual data being processed, the
control bus carries signals that report the status of the various devices the control
bus is communicating with.

Data Bus

 The data bus, also known as the internal bus, connects the internal devices of a computer,
such as the CPU and graphics card, to the motherboard. Data buses are also referred to as a
local bus, because they connect to local devices. This bus is rather quick
and is largely independent of the rest of the computer's operations.

Functions of Buses in Computers

1. Data sharing - All types of buses found in a computer transfer data between the computer
peripherals connected to it.

The buses transfer or send data using either a serial or a parallel method of data transfer. This allows for
the exchange of 1, 2, 4 or even 8 bytes of data at a time. (A byte is a group of 8 bits). Buses are
classified depending on how many bits they can move at the same time, which means that we
have 8-bit, 16-bit, 32-bit or even 64-bit buses.

2. Addressing - A bus has address lines, which match those of the processor. This allows data
to be sent to or from specific memory locations.

3. Power - A bus supplies power to various peripherals connected to it.

4. Timing - The bus provides a system clock signal to synchronize the peripherals attached to
it with the rest of the system.

The expansion bus facilitates easy connection of more or additional components and devices on a
computer such as a TV card or sound card.

Bus Terminologies

Computers have two major types of buses:


1. System bus:- This is the bus that connects the CPU to main memory on the motherboard. The
system bus is also called the front-side bus, memory bus, local bus, or host bus.
2. A number of I/O Buses, (I/O is an acronym for input / output), connecting various peripheral
devices to the CPU. These devices connect to the system bus via a ‘bridge’ implemented in the
processor's chipset. Other names for the I/O bus include “expansion bus", "external bus” or “host
bus”.

Expansion Bus Types


These are some of the common expansion bus types that have ever been used in computers:

 ISA - Industry Standard Architecture


 EISA - Extended Industry Standard Architecture
 MCA - Micro Channel Architecture
 VESA - Video Electronics Standards Association
 PCI - Peripheral Component Interconnect
 PCMCIA - Personal Computer Memory Card International Association (also called the PC Card bus)
 AGP - Accelerated Graphics Port
 SCSI - Small Computer Systems Interface.

The 8 Bit and 16 Bit ISA Buses

ISA Bus

This is the most common type of early expansion bus, which was designed for use in the original
IBM PC. The IBM PC-XT used an 8-bit bus design. This means that the data transfers take place
in 8 bit chunks (i.e. one byte at a time) across the bus. The ISA bus ran at a clock speed of 4.77
MHz.

For the 80286-based IBM PC-AT, an improved bus design, which could transfer 16-bits of data
at a time, was announced. The 16-bit version of the ISA bus is sometimes known as the AT bus.
(AT-Advanced Technology)

The improved AT bus also provided a total of 24 address lines, which allowed 16MB of memory
to be addressed. The AT bus was backward compatible with its 8-bit predecessor and allowed 8-
bit cards to be used in 16-bit expansion slots.

When it first appeared the 8-bit ISA bus ran at a speed of 4.77MHZ – the same speed as the
processor. Improvements made over the years eventually allowed the AT bus to run at a clock speed of
8MHz.

Comparison between 8 and 16 Bit ISA Bus

Feature               8-Bit ISA card (XT-Bus)      16-Bit ISA card (AT-Bus)
Data interface        8-bit                        16-bit
Bus clock             4.77 MHz                     8 MHz
Connector             62-pin                       62-pin
Extension connector   none                         36-pin AT extension connector

Comparison of 8-bit and 16-bit ISA bus as used in early computers.

MCA (Micro Channel Architecture)

IBM developed this bus as a replacement for ISA when they designed the PS/2 PC launched in
1987.

The bus offered a number of technical improvements over the ISA bus. For instance, the MCA
ran at a faster speed of 10MHz and supported either 16-bit or 32-bit data. It also supported bus
mastering - a technology that placed a mini-processor on each expansion card. These mini-
processors controlled much of the data transfer allowing the CPU to do other tasks.

One advantage of MCA was that the plug-in cards were software configurable; this means that
they required minimal intervention by the user when configuring.

The MCA expansion bus did not support ISA cards and IBM decided to charge other
manufacturers royalties for use of the technology. This made it unpopular and it is now an
obsolete technology.

The EISA Bus

The EISA bus slots into which EISA cards were connected


EISA (Extended Industry Standard Architecture)

This is a bus technology developed by a group of manufacturers as an alternative to MCA. The
bus architecture was designed to use a 32-bit data path and provided 32 address lines, giving
access to 4GB of memory.

Like the MCA, EISA offered a disk-based setup for the cards, but it still ran at 8MHz in order for
it to be compatible with ISA.

The EISA expansion slots are twice as deep as an ISA slot. If an ISA card is placed in an EISA
slot it will use only the top row of connectors, however, a full EISA card uses both rows. It
offered bus mastering.

EISA cards were relatively expensive and were normally found on high-end workstations and
network servers.

VESA Bus

It was also known as the Local bus or the VESA-Local bus. VESA (the Video Electronics
Standards Association) was formed to help standardize PC video specifications, thus solving
the problem of proprietary technology where different manufacturers were attempting to develop
their own buses.

The VL Bus provided 32-bit data path and ran at 25 or 33 MHZ. It ran at the same clock
frequency as the host CPU. But this became a problem as processor speeds increased because,
the faster the peripherals are required to run, the more expensive they are to manufacture.

It was difficult to implement the VL-Bus on newer chips such as the 486s and the new Pentiums
and so eventually the VL-Bus was superseded by PCI.

VESA slots had an extra set of connectors and thus the cards were larger. The VESA design was
backward compatible with the older ISA cards.

Features of the VESA local bus card:-

 32-bit interface
 62/36-pin connector
 90+20 pin VESA local bus extension

Peripheral Component Interconnect

Peripheral Component Interconnect (PCI) is one of the latest developments in bus architecture and is
the current standard for PC expansion cards. Intel developed and launched it as the expansion bus for the
Pentium processor in 1993. It is a local bus like VESA, that is, it connects the CPU, memory and
peripherals to wider, faster data pathway.

PCI supports both 32-bit and 64-bit data widths; it is compatible with 486s and Pentiums. The bus data
width matches that of the processor (for example, a 32-bit processor would have a 32-bit PCI bus), and
the bus operates at 33MHz.
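
As a back-of-the-envelope check (an illustration only, not from the PCI specification), the peak
transfer rate of a parallel bus can be estimated as bus width in bytes multiplied by clock rate. This is
where the roughly 132 MB/s figure quoted earlier for 32-bit, 33 MHz PCI comes from.

    # Peak parallel-bus bandwidth = (bus width in bytes) x (clock rate), one transfer per clock.
    def peak_bandwidth_mb_s(bus_width_bits, clock_mhz):
        return (bus_width_bits / 8) * clock_mhz     # MB/s (decimal megabytes)

    print(peak_bandwidth_mb_s(32, 33))   # 32-bit PCI at 33 MHz -> ~132 MB/s
    print(peak_bandwidth_mb_s(64, 33))   # 64-bit PCI at 33 MHz -> ~264 MB/s
    print(peak_bandwidth_mb_s(16, 8))    # 16-bit ISA at 8 MHz  -> ~16 MB/s (theoretical)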

PCI was used in developing Plug and Play (PnP) and all PCI cards support PnP. This means a user can
plug a new card into the computer, power it on and it will “self-identify” and “self-specify” and start
working without manual configuration using jumpers.

Unlike VESA, PCI supports bus mastering; that is, the bus has some processing capability and thus the
CPU spends less time processing data. Most PCI cards are designed for 5v, but there are also 3v and dual-
voltage cards. Keying of the slots helps to differentiate 3v and 5v cards and also to make sure that a 3v card
is not slotted into a 5v socket and vice versa.

The PCI Slots

The PCI Bus Architecture

Accelerated Graphics Port

The need for high quality and very fast performance of video on computers led to development
of the Accelerated Graphics Port (AGP). The AGP Port connects to the CPU and operates at
the speed of the processor bus. This means that video information is sent more quickly to the
card for processing.


The AGP uses the main PC memory to hold 3D images. In effect, this gives the AGP video card
an unlimited amount of video memory. To speed up the data transfer, Intel designed the port as a
direct path to the PC’s main memory.

Data transfer rates range from 266 MB/s (AGP 1x) to 533 MB/s (2x), 1,066 MB/s (4x) and 2,133 MB/s
(8x). The AGP connector is identified by its brown colour.

Personal Computer Memory Card Industry Association (PC Card)

The Personal Computer Memory Card International Association was founded to define a standard bus
for laptop computers, so it is mainly used in portable and other small computers.

Small Computer System Interface

Short for Small Computer System Interface, a parallel interface standard used by Apple
Macintosh computers, PC's and Unix systems for attaching peripheral devices to a computer.

The SCSI Port

Mac LC SCSI Port

Universal Serial Bus (USB)

This is an external bus standard that supports data transfer rates of 12 Mbps. A single USB port
connects up to 127 peripheral devices, such as mice, modems, and keyboards. The USB also
supports hot plugging or insertion (the ability to connect a device without turning the PC off) and
plug and play (you connect a device and start using it without manual configuration).

We have 3 versions of USB:-

USB 1x

First released in 1996, the original USB 1.0 standard offered data rates of 1.5 Mbps. The USB
1.1 standard followed with two data rates: 12 Mbps for devices such as disk drives that need
high-speed throughput and 1.5 Mbps for devices such as joysticks that need much less
bandwidth.

USB 2x

In 2002 a newer specification USB 2.0, also called Hi-Speed USB 2.0, was introduced. It
increased the data transfer rate for PC to USB device to 480 Mbps, which is 40 times faster than
the USB 1.1 specification. With the increased bandwidth, high throughput peripherals such as
digital cameras, CD burners and video equipment could now be connected with USB.

USB 3.0

USB 3.0 is the third major version of the Universal Serial Bus (USB) standard for interfacing
computers and electronic devices. Among other improvements, USB 3.0 adds the new transfer

rate referred to as SuperSpeed USB (SS) that can transfer data at up to 5 Gbit/s (625 MB/s),
which is about 10 times as fast as the USB 2.0 Standard

IEEE 1394

The IEEE 1394 is a very fast external serial bus interface standard that supports data transfer
rates of up to 400Mbps (in 1394a) and 800Mbps (in 1394b). This makes it ideal for devices that
need to transfer high levels of data in real-time, such as video devices. It was developed by Apple
under the name FireWire.

A single 1394 port can connect up to 63 external devices.

 It supports Plug and play


 Supports hot plugging, and
 Provides power to peripheral devices.

The IEEE 1394 Expansion Card

Firewire Ports


SECONDARY MEMORY STORAGE DEVICES


Describe Floppy Disk drive in terms of:-


 Physical construction
 Disk controllers
 Standard format

Floppy Disk Drive (FDD)

A floppy disk drive (FDD), or floppy drive, is a hardware device that reads and writes data on
removable floppy disks. It was invented in 1967 by a team at IBM and was one of the first types of
hardware storage that could read and write a portable medium. Floppy disks are now outdated and
have been replaced by other options such as USB flash drives and network file transfer.

The Floppy Disk Drive

The major parts of a FDD include:

 Read/Write Heads: Located on both sides of a diskette, they move together on the same
assembly. The heads are not directly opposite each other in an effort to prevent interaction
between write operations on each of the two media surfaces. The same head is used for reading
and writing, while a second, wider head is used for erasing a track just prior to it being written.
This allows the data to be written on a wider "clean slate," without interfering with the analog
data on an adjacent track.
 Drive Motor: A very small spindle motor engages the metal hub at the center of the diskette,
spinning it at either 300 or 360 rotations per minute (RPM).
 Stepper Motor: This motor makes a precise number of stepped revolutions to move the
read/write head assembly to the proper track position. The read/write head assembly is
fastened to the stepper motor shaft.
 Mechanical Frame: A system of levers that opens the little protective window on the diskette to
allow the read/write heads to touch the dual-sided diskette media. An external button allows
the diskette to be ejected, at which point the spring-loaded protective window on the diskette
closes.
 Circuit Board: Contains all of the electronics to handle the data read from or written to the
diskette. It also controls the stepper-motor control circuits used to move the read/write heads
to each track, as well as the movement of the read/write heads toward the diskette surface.

The read/write heads do not touch the diskette media when the heads are traveling between
tracks. Electronic optics check for the presence of an opening in the lower corner of a 3.5-inch
diskette (or a notch in the side of a 5.25-inch diskette) to see if the user wants to prevent data
from being written on it.



Read/write heads for each side of the diskette

Floppy Disk Drive Terminology

 Floppy disk - Also called diskette. The common size is 3.5 inches.
 Floppy disk drive - The electromechanical device that reads and writes floppy disks.
 Track - Concentric ring of data on a side of a disk.
 Sector - A subset of a track, similar to wedge or a slice of pie.

Writing Data on a Floppy Disk

The following is an overview of how a floppy disk drive writes data to a floppy disk. Reading
data is very similar. Here's what happens:

1. The computer program passes an instruction to the computer hardware to write a data file on a
floppy disk, which is very similar to a single platter in a hard disk drive except that it is spinning
much slower, with far less capacity and slower access time.
2. The computer hardware and the floppy-disk-drive controller start the motor in the diskette
drive to spin the floppy disk. The disk has many concentric tracks on each side. Each track is
divided into smaller segments called sectors, like slices of a pie.
3. A second motor, called a stepper motor, rotates a worm-gear shaft (a miniature version of the
worm gear in a bench-top vise) in minute increments that match the spacing between tracks.
The time it takes to get to the correct track is called "access time." This stepping action (partial
revolutions) of the stepper motor moves the read/write heads like the jaws of a bench-top vise.
The floppy-disk-drive electronics know how many steps the motor has to turn to move the
read/write heads to the correct track.
4. The read/write heads stop at the track. The read head checks the prewritten address on the
formatted diskette to be sure it is using the correct side of the diskette and is at the proper
track. This operation is very similar to the way a record player automatically goes to a certain
groove on a vinyl record.
5. Before the data from the program is written to the diskette, an erase coil (on the same
read/write head assembly) is energized to "clear" a wide, "clean slate" sector prior to writing the
sector data with the write head. The erased sector is wider than the written sector -- this way,
no signals from sectors in adjacent tracks will interfere with the sector in the track being
written.
6. The energized write head puts data on the diskette by magnetizing minute, iron, bar-magnet
particles embedded in the diskette surface, very similar to the technology used in the mag stripe
on the back of a credit card. The magnetized particles have their north and south poles oriented
in such a way that their pattern may be detected and read on a subsequent read operation.
7. The diskette stops spinning. The floppy disk drive waits for the next command.

On a typical floppy disk drive, the small indicator light stays on during all of the above
operations.


Read/ write operation of floppy disk


 Floppy disk has to inserted into the floppy disk drive
 Drive is made up of a box with a slot into which user inserts the disk
 When the user inserts the disk, the drive grabs the disk and spins inside its plastic jacket
 The drive has multiple levers that get attached to the disk.
 One lever opens the metal plate or shutter, to expose the data access area.
 Other levers and gears move two read/write heads until they almost touch the diskette on
both sides.
 The drive circuit board receives instructions for reading/writing the data from/to the disk
through FDD controller.
 If data are to be written onto the disk, the circuit board first verifies that no light is
visible through the small write-protect window in the floppy disk.
 If the photo-sensor on the opposite side of the FDD detects a beam of light, the drive
treats the disk as write-protected and does not allow the recording of data.
 The circuit board translates the instructions into signals that control the movement of the
disk and the read/write heads.
 A motor located beneath the disk spins a shaft that engages notch on the hub of the disk,
causing the disk to spin.
 When the heads are in correct position, electrical impulses create a magnetic field in one
of heads to write data to either the top or bottom of the disk
 On reading the data, the electrical signals are sent to the computer from the
corresponding magnetic field generated by the metallic particle on the disk.
 Since floppy disk head touches the diskette, both media and head wear out quickly.
 To reduce wear and tear, personal computers retract the heads and stop rotation when a
drive is not reading or writing the data.
 When the next read/write command is given, there is a delay of about half a second while the
motor gathers maximum speed.

1.44MB 3 1/2" Floppy Disk

The 3 1/2", 1.44MB, high-density (HD) drives first appeared from IBM in the PS/2 product line
introduced in 1987. Most other computer vendors started offering the drives as an option in their
systems immediately afterward. For systems that include floppy drives, the 1.44MB type is still
by far the most popular.


The drive records 80 cylinders consisting of 2 tracks each with 18 sectors per track, resulting in a
formatted capacity of 1.44MB. Some disk manufacturers label these disks as 2.0MB, and the
difference between this unformatted capacity and the formatted usable result is lost during the
format. Note that the 1440KB of total formatted capacity does not account for the areas the FAT
file system reserves for file management, leaving only 1423.5KB of actual file-storage area.
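
The 1.44MB figure follows directly from the disk geometry quoted above. A short worked calculation,
assuming the standard 512-byte sector size (which the paragraph above does not state explicitly):

    # Formatted capacity of a 1.44MB floppy from its geometry (512-byte sectors assumed).
    cylinders = 80
    heads = 2                 # one track per side per cylinder
    sectors_per_track = 18
    bytes_per_sector = 512

    total_bytes = cylinders * heads * sectors_per_track * bytes_per_sector
    print(total_bytes)                    # 1,474,560 bytes
    print(total_bytes / 1024)             # 1440 KB
    print(total_bytes / (1024 * 1000))    # ~1.44 "MB" as marketed (a mixed 1024 x 1000 unit)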

The drive spins at 300rpm and in fact must spin at that speed to operate properly with existing
high-and low-density controllers. To use the 500KHz data rate (the maximum from most
standard high-and low-density floppy controllers), these drives must spin at a maximum of
300rpm. If the drives were to spin at the faster 360rpm rate of the 5 1/4" drives, they would have
to reduce the total number of sectors per track to 15; otherwise, the controller could not keep up.
In short, the 1.44MB 3 1/2" drives store 1.2 times the data of the 5 1/4" 1.2MB drives, and the
1.2MB drives spin exactly 1.2 times faster than the 1.44MB drives. The data rates used by both
of these HD drives are identical and compatible with the same controllers. In fact, because these
3 1/2" HD drives can run at the 500KHz data rate, a controller that can support a 1.2MB 5 1/4"
drive can also support the 1.44MB drives.
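
The relationship between data rate, spindle speed and sectors per track described in that paragraph is
simple arithmetic: at a fixed bit rate, the faster the disk spins, the fewer bits (and therefore sectors)
pass under the head per revolution. The sketch below is only a rough illustration; it ignores gaps and
formatting overhead, which is why the usable sector counts are lower than the raw figures.

    # Sectors per track scale inversely with spindle speed when the bit rate is fixed.
    DATA_RATE_BPS = 500_000              # 500 kbit/s controller data rate (HD floppies)

    def bits_per_revolution(rpm):
        return DATA_RATE_BPS / (rpm / 60)

    print(bits_per_revolution(300))      # 100,000 bits per revolution -> fits 18 sectors plus overhead
    print(bits_per_revolution(360))      # ~83,333 bits per revolution -> only 15 sectors plus overhead
    print(18 * 300 / 360)                # 15.0 -> sector count scales with 1/rpm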

Other types of floppy drives that have been used in the past include the following:

 2.88MB 3 1/2". This size was used on some IBM PS/2 and ThinkPad models in the early
1990s.
 720KB 3 1/2". This size was used by IBM and others starting in 1986 before the 1.44MB
3 1/2" drive was introduced.
 1.2MB 5 1/4". It was introduced by IBM for the IBM AT in 1984 and widely used
throughout the rest of the decade.
 360KB 5 1/4". An improved version of the floppy disk drive originally used by the IBM
PC, it was used throughout the 1980s on XT-class machines and some AT-class
machines.

Floppy Drive Interfaces

Floppy drives are interfaced to the PC in several ways. Many still include the traditional floppy
controller interface (even if a drive is not installed in the system), but some now use the USB
interface; this is covered later in this chapter. Because the traditional floppy controller only
works internally, all external drives are interfaced via USB or some other alternative interface.
USB drives often have a standard floppy drive inside an external box with a USB-to-floppy
controller interface converter inside. Newer, legacy-free systems don't include a traditional
floppy controller and typically use USB as the floppy interface. In the past, some drives have
been available in FireWire (IEEE 1394) or even parallel interfaces as well. For more information
on USB or the parallel port, see Chapter 15.

Drive Components

All floppy disk drives, regardless of type, consist of several basic common components. To
properly install and service a disk drive, you must be able to identify these components and
understand their functions (see Figure 10.2).

A typical 3 1/2" floppy disk drive.


Read/Write Heads

A floppy disk drive usually has two read/write heads, one for each side of the disk, with both
heads being used for reading and writing on their respective disk sides (see Figure 10.3). At one
time, single-sided drives were available for PC systems (the original PC had such drives), but
today single-sided drives are a faded memory.

Figure A double-sided drive head assembly.

Analyzing 3 1/2" Floppy Disk Media Construction

3 1/2" disks differ from the older 5 1/4" disks in both construction and physical properties. The
flexible (or floppy) disk is contained within a plastic jacket. The 3 1/2" disks are covered by a
more rigid jacket than are the 5 1/4" disks. The disks within the jackets, however, are virtually
identical except, of course, for size.

The 3 1/2" disks use a much more rigid plastic case than 5 1/4" disks, which helps stabilize the
magnetic medium inside. Therefore, the disks can store data at track and data densities greater
than the 5 1/4" disks (see Figure 10.8). A metal shutter protects the media-access hole. The drive
manipulates the shutter, leaving it closed whenever the disk is not in a drive. The medium is then
completely insulated from the environment and from your fingers. The shutter also obviates the
need for a disk jacket.

Figure Construction of a 3 1/2" floppy disk.

Because the shutter is not necessary for the disk to work, you can remove it from the plastic case
if it becomes bent or damaged. Pry it off the disk case; it will pop off with a snap. You also
should remove the spring that pushes it closed. Additionally, after removing the damaged shutter,
you should copy the data from the damaged disk to a new one.

Rather than an index hole in the disk, the 3 1/2" disks use a metal center hub with an alignment
hole. The drive "grasps" the metal hub, and the hole in the hub enables the drive to position the
disk properly.

On the lower-left part of the disk is a hole with a plastic slider, the write-protect/enable hole.
When the slider is positioned so the hole is visible, the disk is write-protected, meaning the drive
is prevented from recording on the disk. When the slider is positioned to cover the hole, writing
is enabled, and you can save data to the disk. For more permanent write-protection, some
commercial software programs are supplied on disks with the slider removed so you can't easily
enable recording on the disk. This is exactly opposite of a 5 1/4" floppy, in which covered means
write-protected, not write-enabled.

On the other (right) side of the disk from the write-protect hole is usually another hole called the
media-density-selector hole. If this hole is present, the disk is constructed of a special medium
and is therefore an HD or ED disk. If the media-sensor hole is exactly opposite the write-protect
hole, it indicates a 1.44MB HD disk. If the media-sensor hole is located more toward the top of
the disk (the metal shutter is at the top of the disk), it indicates a 2.88MB ED disk. No hole on
the right side means that the disk is a low-density disk. Most 3 1/2" drives have a media sensor
that controls recording capability based on the absence or presence of these holes.

The actual magnetic medium in both the 3 1/2" and 5 1/4" disks is constructed of the same basic
materials. They use a plastic base (usually Mylar) coated with a magnetic compound. High-
density disks use a cobalt-ferric compound; extended-density disks use a barium-ferric media
compound. The rigid jacket material on the 3 1/2" disks has occasionally caused people to
believe incorrectly that these disks are some sort of "hard disk" and not really a floppy disk. The
disk cookie inside the 3 1/2" case is just as floppy as the 5 1/4" variety.

Floppy Disk Media Types and Specifications

This section examines the types of disks that have been available to PC owners over the years.
Especially interesting are the technical specifications that can separate one type of disk from
another, as Table 10.4 shows. The following sections define all the specifications used to
describe a typical disk.

Floppy Disk Media Specifications

Media Parameter            5 1/4" DD   5 1/4" QD   5 1/4" HD   3 1/2" DD   3 1/2" HD   3 1/2" ED
Tracks per inch (TPI)      48          96          96          135         135         135
Bits per inch (BPI)        5,876       5,876       9,646       8,717       17,434      34,868
Media formulation          Ferrite     Ferrite     Cobalt      Cobalt      Cobalt      Barium
Coercivity (oersteds)      300         300         600         600         720         750
Thickness (micro-in.)      100         100         50          70          40          100
Recording polarity         Horiz.      Horiz.      Horiz.      Horiz.      Horiz.      Vert.

(DD = double-density, QD = quad-density, HD = high-density, ED = extra-high-density)
5 1/4" quad-density (QD) disks were never used as an official standard for PCs. They were widely used,
however, by the Tandy 2000 (a semi compatible MS-DOS computer that used the rare Intel 80186
processor and was introduced shortly before the IBM AT in 1984), the Tandy Color Computer and other
computers running the OS-9 operating system, and some CP/M computers. A QD disk can be
reformatted as a double-density disk by a PC that has a 360KB floppy drive.


Describe CD writable (WORM and DVD) in terms of:


 Physical construction
 Disk controllers
 Disk interfaces (USB, SCSI, IDE/EIDE, IEEE-1394, proprietary)


OPTICAL DISC TYPES:

The Compact Disc was the first optical disc to become a success on the market. It was the result
of a Philips/Sony collaboration in the early 1980s. The first CD hit the market in 1982, eventually
hitting a commercial peak in 2000. The Compact Disc, measuring 4.8 in. in diameter, became the
dominant medium for popular music, computer software, and video games in the 1990s thanks to
the superior audio/video (A/V) and storage capacity it had over its predecessors.

Blank CDs are available as either Recordable (CD-R) or ReWritable (CD-RW) and can be manufactured
for a wide variety of burning speeds. A user can burn data onto a CD-R only once, but is able to
burn data onto a CD-RW multiple times. A typical (meaning single-layered) CD holds seven
hundred megabytes (700 MB), or 74 min. of audio.

Read-only media (ROM):

DVD-ROM: These are pressed similarly to CDs. The reflective surface is silver or gold colored.
They can be single-sided/single-layered, single-sided/double-layered, double-sided/single-
layered, or double-sided/double-layered. As of 2004, new double-sided discs have become
increasingly rare.
DVD-D: a new self-destructing disposable DVD format. Like the EZ-D, it is sold in an airtight
package, and begins to destroy itself by oxidation after several hours.
DVD Plus: combines both DVD and CD technologies by providing the CD layer and a DVD
layer. Not to be confused with the DVD+ formats below.
DVD-R for Authoring: a special-purpose DVD-R used to record DVD masters, which can then
be duplicated to pressed DVDs by a duplication plant. They require a special DVD-R recorder,
and are not often used nowadays since many duplicators can now accept ordinary DVD-R
masters.

DVD-R (strictly DVD-R for General): can record up to 4.7 GB in a similar fashion to a CD-R
disc. Once recorded and finalized it can be played by most DVD-ROM players.
DVD-RW: can record up to 4.7 GB in a similar fashion to a CD-RW disc.
DVD-R DL: a derivative of DVD-R that uses double-layer recordable discs to store up to 8.5 GB
of data.
DVD-RAM (current specification is version 2.1): requires a special unit to play 4.7GB or 9.4GB
recorded discs (DVD-RAM discs are typically housed in a cartridge). 2.6GB discs can be
removed from their caddy and used in DVD-ROM drives. Top capacity is 9.4GB (4.7GB/side).

Recordable Media:

DVD+R: can record up to 4.7 GB on a single-layered/single-sided DVD+R disc, at up to 16x speed.
Like DVD-R, you can record only once.
DVD+RW: can record up to 4.7 GB at up to 16x speed. Since it is rewritable it can be
overwritten several times. It does not need special “pre-pits” or finalization to be played in a
DVD player.
DVD+R DL: a derivative of DVD+R that uses dual-layer recordable discs to store up to 8.5 GB of
data. Dual Layer recording allows DVD-R and DVD+R discs to store significantly more data, up
to 8.5 Gigabytes per disc, compared with 4.7 Gigabytes for single-layer discs. DVD-R DL (dual
layer) was developed for the DVD Forum by Pioneer Corporation; DVD+R DL (double layer)
was developed for the DVD+RW Alliance by Philips and Mitsubishi Kagaku Media
(MKM). A Dual Layer disc differs from its usual DVD counterpart by employing a second
physical layer within the disc itself. The drive with Dual Layer capability accesses the second
layer by shining the laser through the first semi-transparent layer. The layer change mechanism
in some DVD players can show a noticeable pause, as long as two seconds by some accounts.
More than a few viewers have worried that their dual layer discs were damaged or defective.
DVD recordable discs supporting this technology are backward compatible with some existing
DVD players and DVD-ROM drives. Many current DVD recorders support dual-layer
technology, and the price point is comparable to that of single-layer drives, though the blank
media remains significantly more expensive.

HD DVD: High Density DVD, or High-Definition DVD is a high-density optical disc format
designed for the storage of data and high-definition video. HD DVD has a single-layer capacity
of 15 GB and a dual-layer capacity of 30 GB. There is also a double-sided hybrid format which
contains standard DVD-Video format video on one side, playable in regular DVD players, and
HD DVD video on the other side for playback in high definition on HD DVD players. JVC has
developed a similar hybrid disc for the Blu-ray format. These hybrid discs make retail marketing
and shelf space management easier. This also removes some confusion from DVD buyers since
they can now buy a disc compatible with any DVD/HD DVD player in their house. The HD
DVD format also can be applied to current red laser DVDs in 5, 9, 15 and 18 GB capacities
which offers a lower-cost option for distributors.


Blu-Ray

A Blu-ray Disc is a high-density optical disc format for the storage of digital media, including high-
definition video. The name Blu-ray Disc is derived from the blue-violet laser used to read and write this
type of disc. Because of this shorter wavelength (405 nm), substantially more data can be stored on a Blu-
ray Disc than on the common DVD format, which uses a red, 650 nm laser. Blu-ray Disc can store 25 GB
on each layer, as opposed to a DVD’s 4.7 GB. Several manufacturers have released single layer and dual
layer (50 GB) recordable BDs and rewritable discs. Blu-ray Disc is similar to PDD, another optical disc
format developed by Sony (which has been available since 2004) but offering higher data transfer speeds.
PDD was not intended for home video use and was aimed at business data archiving and backup. Blu-ray
Disc is currently in a format war with rival format HD DVD. About 9 hours of high-definition (HD) video
can be stored on a 50 GB disc. About 23 hours of standard-definition (SD) video can be stored on a 50
GB disc. On average, a single-layer disc can hold a High Definition feature of 135 minutes using MPEG-
2, with additional room for 2 hours of bonus material in standard definition quality. A double-layer disc
will extend this number up to 3 hours in HD quality and 9 hours of SD bonus material.
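
The playing-time figures quoted above can be sanity-checked by dividing disc capacity by an assumed
average video bitrate. The bitrates in the sketch below are illustrative assumptions chosen to roughly
match the figures quoted; they are not values taken from the Blu-ray specification.

    # Rough recording time = capacity / average bitrate (bitrates here are assumptions).
    def hours_of_video(capacity_gb, avg_bitrate_mbps):
        capacity_megabits = capacity_gb * 8 * 1000      # GB -> megabits (decimal units)
        seconds = capacity_megabits / avg_bitrate_mbps
        return seconds / 3600

    print(round(hours_of_video(50, 12.5), 1))  # ~8.9 h of HD video on a 50 GB disc (~12.5 Mbps assumed)
    print(round(hours_of_video(50, 4.8), 1))   # ~23 h of SD video on a 50 GB disc (~5 Mbps assumed)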

Optical Disk Drive (ODD)

An optical disk drive (ODD) uses a laser light to read data from or write data to an optical disc.
These include CDs, DVDs, and Blu-ray discs. This allows you to play music or watch movies
using pre-recorded discs. Computer software also often comes on one of these discs, so you need

an optical drive to install software. Most modern drives allow you to write to an empty disc, so
you can create your own music CDs or create a backup copy of important data.

Components

An optical disk drive uses a laser to read and write data. A laser in this context means an
electromagnetic wave with a very specific wavelength within or near the visible light spectrum.
Different types of discs require different wavelengths. For compact discs, or CDs, a wavelength
of 780 nanometers (nm) is used, which is in the infrared range. For digital video discs, or DVDs,
a wavelength of 650 nm (red) is used, while for Blu-ray discs a wavelength of 405 nm (violet) is
used.

An optical drive that can work with multiple types of discs will therefore contain multiple lasers.
The mechanism to read and write data consists of a laser, a lens to guide the laser beam, and
photodiodes to detect the light reflection from the disc.

The optical mechanisms for reading CDs and DVDs are quite similar, so the same lens can be
used for both types of discs. The mechanism for reading Blu-ray discs, however, is quite
different. An optical drive that works with all types of discs will therefore have two separate
lenses: one for CD/DVD and one for Blu-ray.

An optical disc drive with separate lenses for CD/DVD and for Blu-ray discs

In addition to the lens, an optical drive has a rotational mechanism to spin the disc. Optical
drives were originally designed to work at a constant linear velocity (CLV) - this means that the
disc spins at varying speeds depending on where the laser beam is reading, so the spiral groove
of the disc passes by the laser at a constant speed. This means that a disc spins at around 200

rotations per minute (rpm) when the laser is reading near the outer rim of the disc and at around
500 rpm when reading near the inner rim.

This constant speed is very important for music CDs and movie discs, since you want to listen to
music or watch a movie at the regular speed. For other applications, however, such as reading or
writing other types of data, working at this speed is not needed. Modern optical drives can often
spin much faster, which results in higher transfer speeds. When you see an optical drive reported
as a 4x drive, for example, this means it can spin at four times the base speed (i.e., between 800
and 2,000 rpm).
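
The "Nx" speed rating therefore simply multiplies the base CLV spindle-speed range. A minimal sketch
using the approximate figures quoted above (roughly 200 to 500 rpm at 1x), for illustration only:

    # An "Nx" optical drive spins N times the base (1x) CLV speed range of ~200-500 rpm.
    BASE_RPM_OUTER = 200      # reading near the outer rim at 1x
    BASE_RPM_INNER = 500      # reading near the inner rim at 1x

    for n in (1, 4, 8, 16):
        print(f"{n}x drive: about {n * BASE_RPM_OUTER} to {n * BASE_RPM_INNER} rpm")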

An optical drive also needs a loading mechanism. Two general types are in use:

1. A tray-loading mechanism, where the disc is placed onto a motorized tray, which moves in and
out of the computer case.
2. A slot-loading mechanism, where the disc is slid into a slot and motorized rollers are used to
move the disc in and out.

Tray-loading mechanisms for optical drives in desktop computers tend to be rather bulky.

Typical tray-loading optical drive for desktop computers

For laptops, the tray-loading mechanism is much smaller.

Optical disk drives come in two variants determined by the disc loading mechanisms that they
use:

1. Tray Load drive - In a tray-loading mechanism, the disc is placed onto a motorized tray, which
moves in and out of the computer.
2. Slot Load drive - In a slot-loading mechanism, the disc is slid into a slot and the motorized rollers
inside the drive are used to move the disc in and out.


How It Works: CD-ROM Optical Drive

CD burners operate on the same essential technology that modern DVD burners do. A modern
DVD burner employs two lasers at different times: one for DVDs, one for CDs. A laser sled (1),
on sliding rails (2), is driven by a motor (3) that moves it radially relative to the disc hub (4).
(Another, variable-speed spindle motor, located under the hub, spins the disc itself.) A diode
inside the laser assembly emits a laser beam, which is focused through a lens (5). Scaling the
laser's power to different levels allows the drive to read discs (using a low power setting) or write
them (at high power).

As for the discs themselves, a commercial pressed CD-ROM disc starts as a molded platter of
polycarbonate (6), a tough plastic. Its surface is laced with microscopic pits (7) that represent
data, arranged in a tight spiral like an LP record's groove. (DVDs' tighter spiral partly explains
their greater capacity.) Atop the polycarbonate is spun a microthin layer of reflective material
(8), often an alloy containing aluminum, silver, or gold, topped by a lacquer or other protective
coating (9) and a label surface (10).

A reading laser beam partly scatters when it strikes pits in the spiral. When it hits lands (11) on
the disc, as the flat spots between the pits are called, it bounces cleanly back into an optical
pickup in the laser assembly. The drive electronics translate this pit-versus-land data stream into
binary code and, in turn, into actionable bytes of data.

Ready to write a disc? Insert a writable or rewritable disc into the drive, and the drive's firmware
detects the disc type, determines the media's parameters in a lookup table, and deploys an
appropriately powered writing laser. It writes data from the hub outward in a spiral, but writable
discs can't be physically "pitted" like pressed commercial ones. Instead, "R"-type write-once
CDs and DVDs have an organic dye layer, backed by a reflective layer and protective/label
surfaces. When the writing laser hits the dye, it "burns" nonreflective spots, which are later
readable as light-scattering pits.

Rewritable discs (CD-RWs, DVD±RWs) work similarly, but they substitute a mutable "phase
change" chemical layer for the dye. The chemical is clear in one state, opaque in another; a
properly calibrated laser melts a pattern of pit-like nonreflective spots into the layer. A laser set
to a different strength, however, can eradicate the pattern, allowing re-use of the disc.

CD-ROM

Compact Disk - Read Only Memory drives were among the first optical disc drives for modern
personal computers. The discs themselves are read-only media containing music, data
files or software. CD-ROM drives read only CD-DA (audio) discs, CD-ROM (data) discs, and
(usually) CD-R/CD-RW writable discs. The maximum storage capacity of a typical CD-ROM is
around 700MB.

CD-R or CD-RW

CD-R or CD-RW drives, also called CD writers, CD burners, or CD recorders, can read the
same formats as CD-ROM drives (CD-DA, CD-ROM, and CD-R/RW discs) but can also write
data to inexpensive CD-R (write-once) and CD-RW (rewritable) discs.

Note: Write speeds are typically slower than read speeds to maintain stability; write processes are
highly sensitive to shock and can corrupt the disk beyond repair when forcibly interrupted. While RW
drives can write multiple times, writable disks come in one-time write (R) and multiple-time write (RW)
variations.
DVD-ROM

Digital Versatile Disk-Read Only Memory drives are the direct evolution from CD-ROM drives.
DVDs had greater capacity and performance. DVD-ROM drives can read CD-DA, CD-ROM,
and CD-R/RW discs, but they also read DVD-Video, DVD-ROM, and (sometimes) DVD-Audio
discs.

DVD+/-RW

DVD writers typically do it all: they read and write both CDs and DVDs. All current DVD
writers can write DVD+R, DVD+RW, DVD-R, and DVD-RW discs interchangeably. Most
models can also write dual-layer DVD+R DL and/or DVD-R DL discs which store about 8.5 GB
rather than the 4.7 GB capacity of standard single-layer discs.

Note: Although DVD+R and DVD+RW (the plus formats) are technically superior to DVD-R and DVD-RW
(the minus formats), the robust error detection and correction features of the DVD-R/RW drive are
equally important. DVD+R/RW discs may be incompatible with some older DVD players. Early DVD+RW
and DVD+R recorders could not write to DVD-R(W) discs, and vice versa.

Blu-ray

Blu-ray drives are the latest optical drives available. Blu-ray drives are typically reserved for
devices with high-definition display capabilities, including high-end computers and the
PlayStation 3 video game console. Blu-ray drives and disks can process extremely large amounts
of data: Blu-ray disks have a standard single-layer capacity of 25 GB and can store up to 50 GB
of data on a Blu-ray dual-layer disc.


There are three main types of Optical Disk Drive interface including the older IDE (Integrated Drive
Electronics) also called PATA (Parallel ATA), the new SATA (Serial ATA), and SCSI (Small Computer System
Interface)

Describe Hard Disk Drives in terms of:
 Physical construction
 Disk controllers
 Disk interfaces ( IDE, ESDI, and SCSI ) considering the
following parameters, data transfer rate, typical storage
capacity, embodiment of the controller on the interface card
and the type data encoding techniques (FM, MFM, and
RLL)
 Caching controllers
 Software caching

Construction and Operation of the Hard Disk

Photograph of a modern SCSI hard disk, with major components annotated.


The logic board is underneath the unit and not visible from this angle.

Hard Disk Operational Overview


A hard disk uses round, flat disks called platters, coated on both sides with a special media
material designed to store information in the form of magnetic patterns. The platters are mounted
by cutting a hole in the center and stacking them onto a spindle. The platters rotate at high speed,
driven by a special spindle motor connected to the spindle. Special electromagnetic read/write
devices called heads are mounted onto sliders and used to either record information onto the disk
or read information from it. The sliders are mounted onto arms, all of which are mechanically
connected into a single assembly and positioned over the surface of the disk by a device called
an actuator. A logic board controls the activity of the other components and communicates with
the rest of the PC.

Each surface of each platter on the disk can hold tens of billions of individual bits of data. These
are organized into larger "chunks" for convenience, and to allow for easier and faster access to
information. Each platter has two heads, one on the top of the platter and one on the bottom, so a
hard disk with three platters (normally) has six surfaces and six total heads. Each platter has its
information recorded in concentric circles called tracks. Each track is further broken down into
smaller pieces called sectors, each of which holds 512 bytes of information.
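
As a quick illustration of the geometry just described, the sketch below computes the raw capacity of a drive from its cylinder/head/sector counts and the 512-byte sector size; the geometry figures in the example are purely illustrative, not those of any particular drive.

```python
# Minimal sketch: raw capacity of a drive from its CHS geometry,
# using 512-byte sectors as described above. The geometry values
# below are purely illustrative.

BYTES_PER_SECTOR = 512

def chs_capacity(cylinders: int, heads: int, sectors_per_track: int) -> int:
    """Return raw capacity in bytes for a cylinder/head/sector geometry."""
    return cylinders * heads * sectors_per_track * BYTES_PER_SECTOR

if __name__ == "__main__":
    cap = chs_capacity(cylinders=16383, heads=16, sectors_per_track=63)
    print(f"{cap} bytes = {cap / 1024**3:.2f} GiB")   # about 7.87 GiB
```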

The entire hard disk must be manufactured to a high degree of precision due to the extreme
miniaturization of the components, and the importance of the hard disk's role in the PC. The
main part of the disk is isolated from outside air to ensure that no contaminants get onto the
platters, which could cause damage to the read/write heads.

Exploded line drawing of a modern hard disk, showing the major components.
Though the specifics vary greatly between different designs, the basic
components you see above are typical of almost all PC hard disks.

Here's an example case showing in brief what happens in the disk each time a piece of
information needs to be read from it. This is a highly simplified example because it ignores
factors such as disk caching, error correction, and many of the other special techniques that
systems use today to increase performance and reliability. For example, sectors are not read
individually on most PCs; they are grouped together into continuous chunks called clusters. A
typical job, such as loading a file into a spreadsheet program, can involve thousands or even
millions of individual disk accesses, and loading a 20 MB file 512 bytes at a time would be
rather inefficient:

1. The first step in accessing the disk is to figure out where on the disk to look for the needed
information. Between them, the application, operating system, system BIOS and possibly any
special driver software for the disk, do the job of determining what part of the disk to read.
2. The location on the disk undergoes one or more translation steps until a final request can be
made to the drive with an address expressed in terms of its geometry. The geometry of the drive
is normally expressed in terms of the cylinder, head and sector that the system wants the drive
to read. (A cylinder is equivalent to a track for addressing purposes; a sketch of this translation
appears after this list.) A request is sent to the drive over the disk drive interface giving it this
address and asking for the sector to be read.
3. The hard disk's control program first checks to see if the information requested is already in the
hard disk's own internal buffer (or cache). It if is then the controller supplies the information
immediately, without needing to look on the surface of the disk itself.
4. In most cases the disk drive is already spinning. If it isn't (because power management has
instructed the disk to "spin down" to save energy) then the drive's controller board will activate
the spindle motor to "spin up" the drive to operating speed.
5. The controller board interprets the address it received for the read, and performs any necessary
additional translation steps that take into account the particular characteristics of the drive. The
hard disk's logic program then looks at the final number of the cylinder requested. The cylinder
number tells the disk which track to look at on the surface of the disk. The board instructs the
actuator to move the read/write heads to the appropriate track.
6. When the heads are in the correct position, the controller activates the head specified in the
correct read location. The head begins reading the track looking for the sector that was asked
for. It waits for the disk to rotate the correct sector number under itself, and then reads the
contents of the sector.
7. The controller board coordinates the flow of information from the hard disk into a temporary
storage area (buffer). It then sends the information over the hard disk interface, usually to the
system memory, satisfying the system's request for data.
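
The geometry translation mentioned in step 2 can be sketched as follows, assuming the conventional CHS-to-LBA mapping in which sectors are numbered from 1.

```python
# A sketch of the geometry translation from step 2: converting a
# cylinder/head/sector (CHS) address to a linear block address (LBA)
# and back, assuming the conventional mapping with sectors from 1.

def chs_to_lba(c: int, h: int, s: int, heads: int, sectors_per_track: int) -> int:
    return (c * heads + h) * sectors_per_track + (s - 1)

def lba_to_chs(lba: int, heads: int, sectors_per_track: int):
    c, rem = divmod(lba, heads * sectors_per_track)
    h, s = divmod(rem, sectors_per_track)
    return c, h, s + 1

if __name__ == "__main__":
    heads, spt = 16, 63
    lba = chs_to_lba(c=100, h=5, s=10, heads=heads, sectors_per_track=spt)
    print(lba, lba_to_chs(lba, heads, spt))   # round-trips to (100, 5, 10)
```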


How It Works: Platter-Based Hard Drive

Inside a modern hard drive, a stack of mirror-smooth aluminum or glass platters (1) spins at a
constant rate. The platters are mounted on a smooth-running spindle (2) whose rotation is eased
by an oil-filled bearing, and are driven by a high-speed motor. (Glass may seem an odd choice of
materials for the platters; when used, it's found most often in notebook drives. It's more rigid
than aluminum at a given thickness, and therefore makes possible thinner platters.)

Interleaved with the platters, lightweight actuator arms (3), one for each platter side, swivel in
unison on a pivot, controlled by a coil in the pivot mechanism (4). Each of these arms is tipped
by a drive head (5), which is mounted on a tiny suspension mechanism that's designed to fly,
thanks to a law known as Bernoulli's principle, a minuscule distance above the platters. The
heads ride above the surface on a cushion of air that is created by the spinning of the platters.

The platter surfaces are coated with a thin film that stabilizes the magnetically reactive particles
that are spread across the disk. These particles represent data as vast series of positive and
negative charges. To "write" data, the drive heads change the particles' magnetic orientation via
current passing through a coil, in essence "flipping" them as needed. In recent drives, a separate
giant magnetoresistive (GMR) head performs the read functions, detecting particles' magnetic
resistance at the quantum level. This signal is amplified and fed to the drive electronics, which
perform error correction and convert data into a PC-usable format.

Formatted platters are divided into tracks (6) (concentric circles) and sectors (7) (track
segments). When initially formatting the drive, the OS will also, under Windows, define
groupings of sectors called clusters (8). The cluster size, consistent drive-wide, denotes the
minimum space that a file, when written, must occupy. (As you might guess, the smaller the
cluster size, the less overall wastage, since a file does not necessarily fill a perfectly even set of
clusters.) Another platter region, the landing area (9), often situated near the hub, serves as a
parking space for the heads when they are inactive.
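
The effect of the cluster size on wasted ("slack") space can be illustrated with a short calculation; the file and cluster sizes used here are hypothetical.

```python
# Illustration of cluster "slack": a file always occupies a whole number
# of clusters, so the last cluster is usually only partly filled.
import math

def slack_bytes(file_size: int, cluster_size: int) -> int:
    """Wasted space when file_size bytes are stored with the given cluster size."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

if __name__ == "__main__":
    # Hypothetical 10,000-byte file on 4 KB clusters:
    print(slack_bytes(10_000, 4096))   # 2288 bytes of slack (3 clusters used)
```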

The actuator arms look like record-player tone arms, but they take orders from your system
through the drive's firmware and interface (10). When the drive receives a write or read order,
the arms swing over the appropriate track, pausing until the data (or an appropriate chunk of
unoccupied space) rotates under the head. For a write command, the drive might write the file in
one chunk, or, if enough contiguous clusters aren't handy, in scattered parts. A directory table on
the disk catalogs files and fragments for later retrieval. When a read command occurs, the drive
checks the table, then sends the actuator arms to fetch the pieces.

Sometimes bottlenecks occur, so a chunk of on-drive memory, the buffer (not shown), acts as a
way station for inbound or outbound data delayed in transit. It can also predictively stash oft-
requested data to spare the drive from having to mechanically fetch it. When a buffer "hit"
occurs, data transfer soars for a brief moment—that's because the buffer's solid-state memory is
far faster than the hard-working (but mechanical) platters and arms.

Hard Disk Controller (HDC)


A hard disk controller (HDC) is an electrical component within a computer hard disk that
enables the processor or CPU to access, read, write, delete and modify data to and from the hard
disk. Essentially, an HDC allows the computer or its processor to control the hard disk.

A hard disk controller's primary function is to translate the instructions received from the
computer into something that can be understood by the hard disk and vice versa. It consists of an
expansion board and its related circuitry, which is usually attached directly to the backside of the
hard disk. The instructions from a computer flow through the hard disk adapter, into the hard
disk interface and then onto the HDC, which sends commands to the hard disk for performing
that particular operation.

Typically, the type and functions of a hard disk controller depend on the type of interface being
used by the computer to access the hard disk. For example, an IDE hard disk controller is used
for IDE interface based hard disks.

 The disk controller is the controller circuit which enables the CPU to communicate with
a hard disk, floppy disk or other kind of disk drive. Also it provides an interface between
the disk drive and the bus connecting it to the rest of the system.
 Early disk controllers were identified by their storage methods and data encoding. They
were typically implemented on a separate controller card. Modified frequency
modulation (MFM) controllers were the most common type in small computers, used for
both floppy disk and hard disk drives. Run length limited (RLL) controllers used a denser data
encoding to increase storage capacity by about 50%. Priam created a proprietary
storage algorithm that could double the disk storage. Shugart Associates Systems
Interface (SASI) was a predecessor to SCSI.
 Modern disk controllers are integrated into the disk drive. For example, disks called
"SCSI disks" have built-in SCSI controllers. In the past, before most SCSI controller
functionality was implemented in a single chip, separate SCSI controllers interfaced disks
to the SCSI bus.
 The most common types of interfaces provided nowadays by disk controllers are PATA
(IDE) and Serial ATA for home use. High-end disks use SCSI, Fibre Channel or Serial
Attached SCSI. Disk controllers can also control the timing of access to flash memory
which is not mechanical in nature (i.e. no physical disk).

Describe the following in terms of physical construction and operation


i. Hard disk (3)
 Holds the main storage media of a computer
 Uses rugged solid substrates called platters
 Storage is achieved by depositing a thin magnetic film on either side of each disk
 A single hard disk usually consists of several platters. Each platter requires two read/write heads,
one for each side. All the read/write heads are attached to a single access arm so that they
cannot move independently. Each platter has the same number of tracks, and a track location
that cuts across all platters is called a cylinder
ii. Compact disc (3)
 Are recorded as a single continuous spiral track running from the spindle to the lead out area
 A compact disk is a circular plastic plate that has a special coating on one side of it that is used
to burn data onto it
 A compact disc is a small, portable, round medium made of molded polymer for electronically
recording, storing, and playing back audio, video, text, and other information in digital form

Hard Disk Interface(s)


There are a few ways in which a hard disk can connect to, and interface with, the rest of the system:

 (A)dvanced (T)echnology (A)ttachment (Also known as IDE, ATAPI and Parallel ATA)
 (S)erial ATA
 SCSI(aka Scuzzy)

There are variants of each interface, and this article will not do justice to the different types of
ATA, SATA and SCSI interfaces. Thus, it will only highlight the more common interfaces as
used by the home user.

ATA (IDE, ATAPI, PATA)

 ATA is the acronym for Advanced Technology Attachment. It has been an industry standard hard
drive interface for 15 years. ATA uses a 16-bit parallel connection to make the link between
storage devices and motherboards, and is also called PATA to distinguish it from the newer SATA
standard. In addition, ATA is also known as IDE or EIDE (Enhanced Integrated Drive Electronics).

Currently the two most popular standards for ATA hard drives are the ATA-6 (which is also
known as Ultra ATA 100 or Ultra DMA 100) and ATA 133. The maximum bandwidth for the
former is 100 MB/s, and 133 MB/s for the latter. Most of today's optical drives utilize the
IDE/PATA interface.

 ATA is a common interface used in many personal computers before the emergence of
SATA. It is the least expensive of the interfaces.
 IDE or ATA - This is currently the most common interface used but is quickly being
overtaken by the newer SATA interface. Hard drives using this type of interface have
transfer rates of up to 100 MB/s (133 MB/s with ATA-133).
 PATA : Parallel ATA drives range between 33 MHz and 133 MHz (Ultra ATA/33
through /133), use a 4-pin Molex power connector, 40-pin IDE ribbon cable for data, and
can be jumpered as a single, master, slave or cable select


Disadvantages

 Older ATA adapters will limit transfer rates according to the slower attached device (debatable)
 Only ONE device on the ATA cable is able to read/write at one time
 Limited standard for cable length (up to 18inches/46cm)

Advantages

 Low costs
 Large capacity


SATA

 SATA is basically an advancement of ATA.


 The newer SATA interface is also an industry standard for connecting hard drives to computer
systems, and is based on serial signaling technology. The advantages over PATA include longer,
thinner cables for more efficient airflow within a computer chassis, fewer pin conductors for
reduced electromagnetic interference, and lower signal voltage to minimize noise margin. The
bandwidth of SATA is also far improved over PATA - SATA 1.0 can reach a maximum of
1.5 Gb/s (about 150 MB/s), while the later SATA 2.5 standard can support up to 3 Gb/s (about 300 MB/s).
The latter also sports additional features such as NCQ (Native Command Queuing), port
multiplier and port selector, although certain non-SATA 2.5 devices may also provide some of
these features. As a result, the SATA interface is gradually replacing PATA as the mainstream
hard drive interface in the personal storage market.
 SATA: Serial ATA drives come in 150, 300, and 600 MB/s versions (1.5/3.0/6.0 Gb/s),
use a 15-pin power connector and a 7-pin data connector.
 SATA - A newer interface that uses less bulky cables and has transfer rates starting at 150
MB/s for SATA and 300 MB/s for SATA II. Almost all computer manufacturers have
started using SATA drives.


Disadvantages

 Slower transfer rates compared to SCSI


 Not supported in older systems without the use of additional components

Advantages

 Low costs
 Large capacity
 Faster transfer rates compared to ATA (difference is marginal at times though)
 Smaller cables for better heat dissipation

SCSI

 SCSI, or Small Computer System Interface, has the outstanding ability to
compartmentalize diverse operations, making SCSI more suitable for multitasking
operating environments. In addition, SCSI enhances critical performance in situations
where more than one hard drive is used, such as in workstation/server or RAID
environments. SCSI hard drives are typically available with lower seek times, lower
latencies and much higher transfer rates than an equivalent-capacity ATA drive. Before
serial signaling technology was applied to the SCSI field, all SCSI interface standards
used parallel technology to transfer data.
 SAS (Serial Attached SCSI)
 SCSI is commonly used in servers, and more in industrial applications than home uses.
 SCSI: Small Computer System Interface drives range in transfer rates from 160 MB/s to
640 MB/s and use 68-pin, 80-pin or serial connectors.
 SCSI - This type of interface is typically used in a business environment for servers.
Hard Drives designed for a SCSI interface tend to have a faster RPM which therefore
provides better performance.


Disadvantages

 Costs
 Not widely supported
 Many, many different kinds of SCSI interfaces
 SCSI drives have a higher RPM, creating more noise and heat

Advantages

 Faster
 Wide range of applications
 Better scalability and flexibility in Arrays (RAID)
 Backward compatible with older SCSI devices
 Better for storing and moving large amounts of data
 Tailor made for 24/7 operations
 Reliability

ESDI (Enhanced Small Disk Interface)


Enhanced Small Disk Interface (ESDI) was a disk interface designed by Maxtor Corporation in the
early 1980s to be a follow-on to the ST-412/506 interface. ESDI improved on ST-506 by moving certain
parts that were traditionally kept on the controller (such as the data separator) into the drives themselves,
and also generalizing the control bus such that more kinds of devices (such as removable disks and tape
drives) could be connected. ESDI used the same cabling as ST-506 (one 34-pin common control cable,
and a 20-pin data channel cable for each device), and thus could easily be retrofitted to ST-506
applications.

ESDI was popular in the mid-to-late 1980s, when SCSI and IDE technologies were young and immature,
and ST-506 was neither fast nor flexible enough. ESDI could handle data rates of 10, 15, or 20 Mbit/s (as
opposed to ST-506's top speed of 7.5 Mbit/s), and many high-end SCSI drives of the era were actually
high-end ESDI drives with SCSI bridges integrated on the drive.

By 1990, SCSI had matured enough to handle high data rates and multiple types of drives, and ATA was
quickly overtaking ST-506 in the desktop market. These two events made ESDI less and less important
over time, and by the mid-1990s, ESDI was no longer in common use.

DISK DATA ENCODING TECHNIQUES (FM, MFM, AND RLL)


Digital information is a stream of ones and zeros. Hard disks store information in the form of
magnetic pulses. In order for the PC's data to be stored on the hard disk, therefore, it must be
converted to magnetic information. When it is read from the disk, it must be converted back to
digital information. This work is done by the integrated controller built into the hard drive, in
combination with sense and amplification circuits that are used to interpret the weak signals read
from the platters themselves.

Magnetic information on the disk consists of a stream of (very, very small) magnetic fields. As
you know, a magnet has two poles, north and south, and magnetic energy (called flux) flows
from the north pole to the south pole. Information is stored on the hard disk by encoding
information into a series of magnetic fields. This is done by placing the magnetic fields in one of
two polarities: either so the north pole arrives before the south pole as the disk spins (N-S), or so
the south pole arrives before the north (S-N).

Technical Requirements for Encoding and Decoding

You might think that since there are two magnetic polarities, N-S and S-N, they could be used
nicely to represent a "one" or a "zero" to allow easy encoding of digital information. That would
be nice, but it doesn't work that way. The main reason is that the read/write heads are designed
not to measure the polarity of magnetic fields, but rather flux reversals, which occur when the
head moves from an area that has north-south polarity to one that has south-north polarity, or
vice-versa. What this means is that the encoding of data must be done based on these reversals,
and not the contents of the individual fields.

There is also another consideration in the encoding of data, and that is the necessity of using
some sort of method of indicating where one bit ends and another begins. Even if we could use
one polarity to represent a "one" and another to represent a "zero", what would happen if we
needed to encode on the disk a stream of 1,000 consecutive zeros? It would be very difficult to
tell where, say, bit 787 ended and bit 788 began. Also, since adjacent magnetic fields of the same
polarity combine to form a larger one, this would, in layman's terms, create a mess.

To keep track of which bit is where, some sort of clock synchronization must be added to the
encoding sequence. The flux reversals are considered to be written at a clock frequency, and the
higher the frequency of reversals, the more data that can be stored in a given space.

Different encoding methods have developed to allow data to be stored effectively on hard disks
(and other media). These have been refined over time to allow for more efficient, closely packed
storage. It's important to understand the distinction of what density means in this context.
Hardware technology strives to allow more bits to be stored in the same area by allowing more
flux reversals per linear inch of track. Encoding methods strive to allow more bits to be stored by
allowing more bits to be encoded (on average) per flux reversal.

Modified Frequency Modulation (MFM)

The first encoding system for recording digital data on magnetic media was frequency
modulation, of course abbreviated FM. (This has nothing whatever to do with FM radio, of
course, except for a similarity in the concept of how the data is encoded.) This is a simple
scheme, where a one is recorded as two consecutive flux reversals, and a zero is recorded as a
flux reversal followed by no flux reversal. This can also be thought of as follows: a one is a
reversal to represent the clock followed by a reversal to represent the "one", and a zero is a
reversal to represent the clock followed by "no reversal" to represent the "zero".

The name "frequency modulation" can be seen in the patterns that are created if you look at the
encoding pattern of a stream of ones or zeros. If we designate "R" to represent a flux reversal and
"N" to represent no flux reversal, a byte of zeroes would be encoded as
"RNRNRNRNRNRNRNRN", while a byte of all ones would be "RRRRRRRRRRRRRRRR".
As you can see, the ones have double the frequency of reversals compared to the zeros; hence
frequency modulation.

The problem with FM is that it is wasteful: each bit requires two flux reversal positions, resulting
in a theoretical overhead of 100% compared to the ideal case (one reversal per bit). FM is
obsolete and is no longer used. In fact, it was obsolete before the PC was really invented; it was
originally used in floppy disks of older machines.

The replacement for FM is modified frequency modulation, or MFM. MFM improves on FM by
reducing the number of flux reversals inserted just for the clock. Instead of inserting a clock
reversal before each bit, one is inserted only between consecutive zeroes. This means far fewer
reversals are needed on average per bit. This allows the clock frequency to be doubled, allowing
for approximately double the storage capacity of FM.
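
A small sketch of the two rules just described, using the same "R"/"N" notation, may help. Each data bit occupies a clock position followed by a data position; the MFM rule shown (a clock reversal only between consecutive zeros) follows the description above.

```python
# A sketch of the FM and MFM rules described above, using the same
# "R" (flux reversal) / "N" (no reversal) notation. Each data bit maps
# to two positions: a clock position followed by a data position.

def fm_encode(bits: str) -> str:
    # FM: a clock reversal before every bit; the data position holds R for 1, N for 0.
    return "".join("R" + ("R" if b == "1" else "N") for b in bits)

def mfm_encode(bits: str) -> str:
    # MFM: the data position holds R for a 1; a clock reversal is inserted
    # only between two consecutive 0 bits.
    out, prev = [], None
    for b in bits:
        clock = "R" if (b == "0" and prev == "0") else "N"
        data = "R" if b == "1" else "N"
        out.append(clock + data)
        prev = b
    return "".join(out)

if __name__ == "__main__":
    print(fm_encode("00000000"))    # RNRNRNRNRNRNRNRN (as in the text)
    print(fm_encode("11111111"))    # RRRRRRRRRRRRRRRR (as in the text)
    print(mfm_encode("11111111"))   # NRNRNRNRNRNRNRNR -- half the reversals of FM
```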

MFM encoding was used on the earliest hard disks, and also on floppy disks. In fact, MFM is
still the standard that is used for floppy disks today. For hard disks it was replaced by the more
efficient RLL and its variants. Presumably this did not happen for floppy disks because the need
for more efficiency was not nearly so great, compared to the need for backward compatibility.

Run Length Limited (RLL)

An improvement on the MFM encoding technique used in earlier hard disks and used on all
floppies is run length limited or RLL. This is a more sophisticated coding technique, or more
correctly stated, "family" of techniques. I say that RLL is a family of techniques because there
are two parameters that define how RLL works, and therefore, there are several different
variations. (Of course, you don't need to know which one your disk is using, since this is all
internal to the drive anyway).

RLL works by looking at groups of bits instead of encoding one bit at a time. The idea is to mix
clock and data flux reversals to allow for even denser packing of encoded data, to improve
efficiency. The two parameters that define RLL are the run length and the run limit (and hence
the name). The run length is the minimum spacing between flux reversals, and the run limit is the
maximum spacing between them. As mentioned before, the amount of time between reversals
cannot be too large or the read head can become out of sync and lose track of which bit is where.

The type of RLL used is expressed as "X,Y RLL" where X is the run length and Y is the run
limit. The most commonly used type of RLL is 1,7 RLL. Describing how RLL encodes the data
would be too involved to include here because it uses specific patterns of reversals to encode
patterns of bits, and I've probably gotten more detailed here than I wanted to anyway. One or
another variant of RLL is now used on most hard disk drives.
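
Although the text deliberately stops short of the actual 1,7 RLL code table, the run-length constraint itself is easy to express. The sketch below simply checks that the spacing between flux reversals in a pattern stays between the run length and the run limit; it is a validator, not an RLL encoder.

```python
# A sketch that checks the (run length, run limit) constraint described
# above: the number of "N" positions between consecutive "R" flux
# reversals must stay between the run length and the run limit.

def satisfies_rll(pattern: str, run_length: int, run_limit: int) -> bool:
    positions = [i for i, ch in enumerate(pattern) if ch == "R"]
    for a, b in zip(positions, positions[1:]):
        gap = b - a - 1               # no-reversal cells between two reversals
        if gap < run_length or gap > run_limit:
            return False
    return True

if __name__ == "__main__":
    print(satisfies_rll("RNRNNNNNNNR", 1, 7))   # True: gaps of 1 and 7
    print(satisfies_rll("RRN", 1, 7))           # False: reversals are adjacent (gap 0)
```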


 Caching controllers

What is controller cache?

The controller cache is a physical memory space that streamlines two types of I/O (input/output)
operations: between the controllers and hosts, and between the controllers and disks.
For read and write data transfers, the hosts and controllers communicate over high-speed
connections. However, communication from the back-end of the controller to the disks is
slower, because disks are relatively slow devices.
When the controller cache receives data, the controller acknowledges to the host applications that
it is now holding the data. This way, the host applications do not need to wait for the I/O to be
written to disk. Instead, applications can continue operations. The cached data is also readily
accessible by server applications, eliminating the need for extra disk reads to access the data.

The controller cache affects the overall performance of the storage array in several ways:

 The cache acts as a buffer, so that host and disk data transfers do not need to be synchronized.
 The data for a read or write operation from the host might be in cache from a previous operation,
which eliminates the need to access the disk.
 If write caching is used, the host can send subsequent write commands before the data from a
previous write operation is written to disk.
 If cache prefetch is enabled, sequential read access is optimized. Cache prefetch makes the controller read ahead: after a host read, the next sequential blocks are loaded into cache in anticipation that they will be requested next.

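A toy sketch of the behaviour described above may make it clearer: a controller-style read cache that serves repeat requests from memory and prefetches the next few sequential blocks. The block size, cache capacity and "disk" used here are purely illustrative.

```python
# A toy sketch of a controller-style read cache: repeat requests are
# served from memory and the next few sequential blocks are prefetched.
from collections import OrderedDict

class ReadCache:
    def __init__(self, disk, capacity=128, prefetch=4):
        self.disk = disk                  # anything indexable by block number
        self.cache = OrderedDict()        # block number -> data (LRU order)
        self.capacity = capacity
        self.prefetch = prefetch

    def _store(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict least recently used

    def read(self, block):
        if block in self.cache:                     # cache hit: no disk access
            self.cache.move_to_end(block)
            return self.cache[block]
        for b in range(block, block + 1 + self.prefetch):   # miss: fetch + prefetch
            if 0 <= b < len(self.disk):
                self._store(b, self.disk[b])
        return self.cache[block]

if __name__ == "__main__":
    disk = [f"block-{i}" for i in range(1000)]
    cache = ReadCache(disk)
    cache.read(10)            # miss: reads blocks 10-14 from the "disk"
    print(cache.read(12))     # hit: served from cache without a disk access
```
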
 Software caching

Software caching (disk caching in software) uses a portion of main system memory (RAM), managed by the operating system or a disk driver, as a cache for disk data. Recently and frequently accessed blocks are kept in RAM so that repeat reads can be served without accessing the disk at all, and writes can be buffered, at the cost of some system memory and CPU time.

Explain RAID systems (paralleling of hard drives.)



RAID Recovery
An array of hard disks known as RAID (Redundant Array of Independent Disks, or, as some call it, Redundant Array of Inexpensive Disks) is used in enterprises, where it is normally the main storage centre for mission-critical data.

RAID was originally deployed at a time when individual disk capacity was too small to hold large and demanding operational data. RAID tackled this issue nicely by daisy-chaining individual disks to form a larger volume, which was much sought after at the time.
Additional redundancy to increase fault tolerance is another consideration. The increase in performance of some RAID configurations under specific operational environments is obviously another factor in the selection of RAID.

RAID can be quite complex considering that different users may be using different methods to
store their data on their RAID server resulting in different kinds of RAID configuration.
Whatever the configuration, the main consideration still hinges on three critical factors - Volume
size, Failsafe and Performance considerations.

Common RAID configuration may deploy duplex / mirror, striping (with or without parity) or a
combination of these.

RAID
What is RAID?
To understand RAID, imagine multiple disk drives that are put together and interlinked in an
array to obtain greater performance, capacity and reliability.
RAID is an acronym for Redundant Array of Independent Disks - the technique that was
developed by researchers at the University of California at Berkeley during 1987 to overcome
the limitation and deficiency imposed by a single hard disk.

Using RAID
RAID became popular when the needs of new applications and devices grew beyond the capability of a
typical single hard disk. Specialised hard disks are expensive, so RAID became an affordable alternative
to large storage systems that require fast data transfer rates and security.
RAID is now commonly used on computer servers to reliably store large amounts of data. With
RAID options now integrated into motherboard chipsets and operating systems,
desktop and high-end users are starting to employ this technology for storage-
intensive tasks, such as non-linear video/audio editing and critical real-time operations.

Advantages of RAID Disk


The key design of RAID is to provide data integrity, fault tolerance, throughput and
expandability.
RAID can be set to various configurations to increase I/O disk performances by merging the
efficiency of two or more hard drives into one logical volume or exploiting the redundancy of a
second drive for data replication and integrity.

Data Integrity and Fault Tolerance


Data on the RAID array can withstand the failure of one or more hard disks without any data loss. Restoration is done
automatically and restoring data from backups is unnecessary. Data is also examined and corrected to prevent
corrupted duplications or modifications.

Throughput
Data can be read and written at a greater speed.

Expandability
The size of a RAID array is dynamic and easily configurable. Suppose 200GB of hard disk space is needed to store a
video file. Instead of splitting the file into several parts, multiple disks can be merged to provide a single disk
volume.

RAID Disk Setup


To set up a RAID array, a RAID controller is used. The basic idea of having a controller is to
manage read/write requests, mirroring, parity and data stripping operations from the computer to
the RAID drives.
The controller can either be an external dedicated hardware component, internal hardware RAID
controller or a software-based solution.
Some operating systems such as Linux, Windows 2000 and Windows Advanced Server provide
their set of in-built software raid functions.

Software RAID
Software RAID performs all I/O commands and RAID algorithms using the host's microprocessor. The main advantage
of this solution is low cost. There is, however, limited expandability and a general dip in system performance caused
by PCI bus traffic, CPU utilization and interrupts.

Internal Hardware RAID Controller


An internal hardware controller exists in the form of an integrated RAID onboard capability or a high-speed host bus
slot. The hardware controller handles the RAID operations and I/O commands, draws less processing power and
delivers more robust fault-tolerant features than a software RAID option.

External Hardware RAID Controller

An external controller is usually fitted in a separate case, housing the disks together in a multi-bay holder. External
hardware controllers are powerful machines: each has an inbuilt microprocessor that executes the full RAID operations
and algorithms, supports data caching, and therefore offers complete operating system independence and scalability.
Larger RAID systems are commonly connected to the computer using high-speed interfaces such as iSCSI, EIDE or
Fibre Channel.

Striping
Striping is a feature of a RAID configuration that offers a huge performance gain. Data in a
striped array is written or read across all the array drives simultaneously. When the computer
writes data to the disk, the data is divided into several pieces and written to each
individual disk at the same time.
Similarly, when the computer requests to read a file, the multiple pieces of data from each disk
drive are extracted together to be processed by the CPU, which effectively improves read/write
speed.
Striping involves partitioning each drive's storage space into units which are known as the stripe
block size.

There are two important variables in a striped array, namely the stripe width and the stripe size. Both
factors greatly determine the performance of a striped array.

Stripe Width:
The stripe width refers to the number of drives in the array.

Stripe Size:
The stripe size represents the size of a single chunk of unit data to be written into each disk. The
stripe size is configurable and can range from 512 bytes to several megabytes.
The default IDE configuration is 64K. It is commonly a multiple of 8k.

To demonstrate this, assume we need to write a 5MB file to a disk array. If we have an array of
5 drives and write the data in 100K units, we will need 10 write cycles to write the
entire file, since each cycle writes only 500K of the file.
Now if we have an array of 10 drives, we can effectively halve the write time, since
each write cycle writes 1MB of the file. This also means that only 5 write cycles are needed
to write the whole file.
So generally, we can see that the more drives that are implemented, the faster the performance. It
is also apparent that for larger data sizes, increasing the stripe size will help.
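
The arithmetic behind this example can be captured in a few lines; the figures below reproduce the 5MB / 100K / 5-drive case from the paragraph above.

```python
# The arithmetic behind the striping example above: how many write
# cycles are needed to stripe a file across an array, given the stripe
# (unit) size and the number of drives.
import math

def write_cycles(file_size_kb: int, stripe_size_kb: int, drives: int) -> int:
    per_cycle = stripe_size_kb * drives        # data written in one pass over all drives
    return math.ceil(file_size_kb / per_cycle)

if __name__ == "__main__":
    print(write_cycles(5000, 100, drives=5))    # 10 cycles, as in the text
    print(write_cycles(5000, 100, drives=10))   # 5 cycles with twice the drives
```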

However, there are certain factors that must be considered when selecting the stripe width and
stripe size.
Firstly, if the stripe size is too small, writing a big file will cause the file to be broken down
into many segments across the drives, hence using more disk resources.
A stripe size that is too big may exceed the size of the data to be stored and result in wasted space.
That is to say, if you configure 100K as your stripe size, you will waste 30K of space when writing
a 70K file.

Generally, the more drives that are implemented, the better the performance. However, if any
one disk breaks down, all data will be lost. This reliability is often measured by the mean time
between failures (MTBF), which is inversely proportional to the stripe width.
That is to say that a set of 3 disks is only 1/3 as reliable as a single disk. Hence, increasing the
number of disk drives in the array for data striping will also increase the risk of a disk failure.

The drawback of using striping is that similar hard disks must be used together in the array to
prevent latency. As the data parts are read from the various drives to be pieced together to form
the original data, the slowest drive will determine when the concatenation completes.

Mirroring
Mirroring is the simple concept of simultaneously duplicating data, i.e. having two or more
copies of data written to separate disks so as to provide redundancy.
If one drive were to fail, the system can still operate because it has another copy of the data.
The drawback is that more disk space is used, as there is 100% "disk wastage". In
addition, mirroring requires two identically sized disks for optimised performance.

Parity
Parity is a fault tolerance feature that deals with error detection. Parity information is generated
or 'hashed' using the (XOR) logical operation and stored separately.
If data is lost or corrupted, the missing data can be reconstructed by analysing the parity information.
In RAID 3 & RAID 4, parity information is stored on a single dedicated hard disk. This might
cause a bottleneck as parity data has to be read, computed and written into this disk each time a
read/write operation takes place.
To curb this problem, distributed parity stores the parity bit across the entire array (such as RAID
5).
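
The XOR parity idea can be shown with a few lines of code: the parity block is the XOR of the data blocks, and any single missing block can be rebuilt by XOR-ing the parity with the surviving blocks. The byte strings used here are toy data.

```python
# A sketch of XOR parity as used by RAID 3/4/5: the parity block is the
# XOR of the data blocks, and any one missing block can be rebuilt by
# XOR-ing the parity with the surviving blocks.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

if __name__ == "__main__":
    data = [b"AAAA", b"BBBB", b"CCCC"]          # toy stripe across three data disks
    parity = xor_blocks(data)                   # stored on the parity disk

    lost = data[1]                               # pretend disk 1 failed
    survivors = [data[0], data[2], parity]
    rebuilt = xor_blocks(survivors)
    print(rebuilt == lost)                       # True: the lost block is recovered
```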
JBOD
A non-RAID configuration, known as JBOD (Just a Bunch Of Disks), is a popular method for
combining multiple disk drives into a single logical drive.
JBOD features several hard disks concatenated together to create one logical drive. For example,
we can combine a 5GB, 15GB, 20GB and 5.5GB drive into a 45.5GB logical drive. JBOD does
not provide any data redundancy.

RAID Limitations
It is important to understand both the benefits of RAID and its limitations. Most RAID users tend to be
complacent with maintenance and backups due to the common misconception that their data is
foolproof and invulnerable.

RAID does not protect data against a virus attack, human error, logical corruption, data deletion
or a fire. There are also cases of more than one disk failing.

A typical situation is that users fail to respond in time when the first disk fails, followed by a second
disk failure and hence loss of data.

It is also too optimistic to assume that the RAID controller will always function normally.
Backup still remains one of the most critical practices in data operations.

The different combinations eventually make up the different RAID levels, from 0 to 10.
Typically, RAID 5 is the most common RAID configuration adopted by corporate users.

SUMMARY

RAID is a technology that is used to increase the performance and/or reliability of data storage.
The abbreviation stands for Redundant Array of Inexpensive Disks. A RAID system consists of
two or more drives working in parallel. These disks can be hard discs, but there is a trend to also
use the technology for SSD (solid state drives). There are different RAID levels, each optimized
for a specific situation. These are not standardized by an industry group or standardization
committee. This explains why companies sometimes come up with their own unique numbers
and implementations. This article covers the following RAID levels:

 RAID 0 – striping
 RAID 1 – mirroring
 RAID 5 – striping with parity
 RAID 6 – striping with double parity
 RAID 10 – combining mirroring and striping

The software to perform the RAID-functionality and control the drives can either be located on a
separate controller card (a hardware RAID controller) or it can simply be a driver. Some versions
of Windows, such as Windows Server 2012 as well as Mac OS X, include software RAID
functionality. Hardware RAID controllers cost more than pure software, but they also offer better
performance, especially with RAID 5 and 6.

RAID systems can be used with a number of interfaces, including SCSI, IDE, SATA or FC
(Fibre Channel). There are systems that use SATA disks internally, but that have a FireWire or
SCSI-interface for the host system.

Sometimes disks in a storage system are defined as JBOD, which stands for ‘Just a Bunch Of
Disks’. This means that those disks do not use a specific RAID level and acts as stand-alone
disks. This is often done for drives that contain swap files or spooling data.

Below is an overview of the most popular RAID levels:

With the aid of diagrams, explain the following RAID levels


(RAID Implementation Types) showing advantages and disadvantages
of each


 RAID 0 – striping
RAID 0
RAID 0 is not really a "true" RAID system, as there is no fault tolerance involved here. What it
does is to stripe each data block and write the striped data into different drives.

As such RAID 0 literally has no redundancy at all compared to other RAID arrays. So why use
RAID 0 at all? For that we need to know how RAID 0 is set up.

RAID 0 (Disk striping):

RAID 0 splits data across any number of disks allowing higher data throughput. An individual
file is read from multiple disks giving it access to the speed and capacity of all of them. This
RAID level is often referred to as striping and has the benefit of increased performance.
However, it does not facilitate any kind of redundancy and fault tolerance as it does not duplicate
data or store any parity information (more on parity later). All the disks appear as a single partition,
so when any one of them fails, it breaks the array and results in data loss. RAID 0 is usually
implemented for caching live streams and other files where speed is important and
reliability/data loss is secondary.

1. Data is first sent from server to the RAID controller.


2. Then the RAID controller stripes the data.
3. Finally, the striped data is sent to the array.

RAID 0 requires at least 2 drives. Files are striped and sent to the drives in the array. This
process gives RAID 0 the best performance of any single RAID level, since no capacity or
time is spent on redundancy.

RAID 0 incurs the lowest cost, but has zero fault tolerance. As such, if you lose data, RAID 0
will take the longest time to recover from. Additional backup is needed, unless data is not
important.

However, read and write performance is great. It has its uses such as high end gaming, web
streaming and editing for videos and music and other uses which require fast read write speeds
but do not have critical data.

RAID level 0 – Striping

In a RAID 0 system data are split up into blocks that get written across all the drives in the array.
By using multiple disks (at least 2) at the same time, this offers superior I/O performance. This
performance can be enhanced further by using multiple controllers, ideally one controller per
disk.
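
A minimal sketch of the block placement just described: logical blocks are assigned round-robin across the member disks, so consecutive blocks land on different drives and can be transferred in parallel. The disk count used here is illustrative.

```python
# A sketch of RAID 0 block placement: logical blocks are distributed
# round-robin across the member disks.

def raid0_location(logical_block: int, num_disks: int):
    """Return (disk index, block offset on that disk) for a logical block."""
    return logical_block % num_disks, logical_block // num_disks

if __name__ == "__main__":
    for block in range(6):
        disk, offset = raid0_location(block, num_disks=2)
        print(f"logical block {block} -> disk {disk}, offset {offset}")
    # blocks 0, 2, 4 land on disk 0 and blocks 1, 3, 5 on disk 1
```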

Advantages

 RAID 0 offers great performance, both in read and write operations. There is no overhead
caused by parity controls.
 All storage capacity is used, there is no overhead.
 The technology is easy to implement.

Disadvantages

 RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be
used for mission-critical systems.

Ideal use

RAID 0 is ideal for non-critical storage of data that have to be read/written at a high speed, such
as on an image retouching or video editing station.

If you want to use RAID 0 purely to combine the storage capacity of two drives into a single
volume, consider mounting one drive in the folder path of the other drive. This is supported in
Linux, OS X as well as Windows and has the advantage that a single drive failure has no impact
on the data of the second disk or SSD drive.

 RAID 1 – mirroring
RAID 1
RAID 1 employs mirroring, and sometimes duplexing for added performance. It requires at least
2 hard disks, ideally identical to each other, and provides a fairly fast recovery period.

RAID 1 (Disk Mirroring):

RAID 1 writes and reads identical data to pairs of drives. This process is often called data
mirroring and its primary function is to provide redundancy. If any of the disks in the array fails,
the system can still access data from the remaining disk(s). Once you replace the faulty disk with
a new one, the data is copied to it from the functioning disk(s) to rebuild the array. RAID 1 is the
easiest way to create failover storage.

1. Data is first sent from server to the RAID controller

2. The data is then sent to the array. For example, data set ABCD is sent to array 1 and mirrored to
array 2. Data set EFGH is sent to array 3 and mirrored to array 4 and so on.

This diagram above shows RAID 1 with strictly mirroring. It already provides a fairly high fault
tolerance. However, there is another added level of fault tolerance in RAID 1, which is
duplexing.

In duplexing, individual RAID controllers are used to control each array: array 1 has its own RAID controller, array 2 has its own RAID controller, and so on.

Comparatively, RAID 1 offers high fault tolerance, costs that can range from cheap to very expensive, good read performance, and write performance comparable to that of a single drive (slower than striped RAID levels). However, the big disadvantage is that your usable storage capacity is halved, as mirroring copies the same set of data onto both disks.

RAID 1 is popular in finance-based situations, such as accounting, on small servers, and with anyone who wants fault tolerance with minimal hassle. The costs can also be very low, unless a huge number of arrays is set up and duplexing is implemented.
RAID 1 with duplexing

RAID level 1 – Mirroring

Data are stored twice by writing them to both the data drive (or set of data drives) and a mirror
drive (or set of drives). If a drive fails, the controller uses either the data drive or the mirror drive
for data recovery and continues operation. You need at least 2 drives for a RAID 1 array.
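
A toy sketch of this behaviour: every write goes to both members of the mirror, and a read can be served by whichever copy is still healthy. The class and data used here are illustrative only.

```python
# A sketch of RAID 1 behaviour: every write goes to both drives, and a
# read fails over to whichever copy is still healthy.

class Raid1:
    def __init__(self):
        self.drives = [dict(), dict()]       # two mirrored "drives"
        self.failed = [False, False]

    def write(self, block, data):
        for drive in self.drives:            # identical data written to both members
            drive[block] = data

    def read(self, block):
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:           # fail over to the surviving mirror
                return drive[block]
        raise IOError("both mirrors have failed")

if __name__ == "__main__":
    array = Raid1()
    array.write(0, b"accounting records")
    array.failed[0] = True                   # simulate losing one drive
    print(array.read(0))                     # data still available from the mirror
```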


Advantages

 RAID 1 offers excellent read speed and a write-speed that is comparable to that of a single drive.
 In case a drive fails, data do not have to be rebuilt; they just have to be copied to the
replacement drive.
 RAID 1 is a very simple technology.

Disadvantages

 The main disadvantage is that the effective storage capacity is only half of the total drive
capacity because all data get written twice.
 Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the
failed drive can only be replaced after powering down the computer it is attached to. For servers
that are used simultaneously by many people, this may not be acceptable. Such systems
typically use hardware controllers that do support hot swapping.

Ideal use

RAID-1 is ideal for mission critical storage, for instance for accounting systems. It is also
suitable for small servers in which only two data drives will be used.

 RAID 2-


RAID 2

 This uses bit level striping. i.e Instead of striping the blocks across the disks, it stripes the bits
across the disks.
 In the above diagram b1, b2, b3 are bits. E1, E2, E3 are error correction codes.
 You need two groups of disks. One group of disks are used to write the data, another group is
used to write the error correction codes.
 This uses Hamming error correction code (ECC), and stores this information in the redundancy
disks.
 When data is written to the disks, it calculates the ECC code for the data on the fly, and stripes
the data bits to the data-disks, and writes the ECC code to the redundancy disks.
 When data is read from the disks, it also reads the corresponding ECC code from the
redundancy disks, and checks whether the data is consistent. If required, it makes appropriate
corrections on the fly.
 This uses a lot of disks and can be configured in different disk configurations. Some valid
configurations are 1) 10 disks for data and 4 disks for ECC, and 2) 4 disks for data and 3 disks for ECC.
 This is not used anymore. It is expensive, implementing it in a RAID controller is complex,
and ECC is redundant nowadays, as hard disks themselves can do this.

RAID 2 – the bit-level striping with dedicated Hamming-code parity

In the case of RAID 2, all data are striped at the bit level (not the block level). Each bit is written on a
different drive/stripe. Such a solution requires the use of a Hamming code for error correction.

Hamming code is a linear error-correcting code named after its inventor, Richard Hamming.
Hamming codes can detect up to d – 1 bit errors, and correct (d – 1) / 2 bit errors, where d is the
minimum hamming distance between all pairs in the code words; thus, reliable communication is
possible when the Hamming distance between the transmitted and received bit patterns is less
than or equal to d. By contrast, the simple parity code cannot correct errors, and can detect only
an odd number of errors.
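
A minimal Hamming(7,4) example shows the single-error correction described above. It illustrates the code family RAID 2 relied on; it is not the disk layout of any particular RAID 2 implementation.

```python
# A minimal Hamming(7,4) sketch: four data bits are protected by three
# parity bits, and any single flipped bit can be located and corrected.

def hamming74_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]    # positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s4  # 0 means no single-bit error detected
    if error_pos:
        c[error_pos - 1] ^= 1         # flip the erroneous bit
    return c, error_pos

if __name__ == "__main__":
    word = hamming74_encode(1, 0, 1, 1)
    corrupted = list(word)
    corrupted[5] ^= 1                              # flip one bit in storage
    fixed, pos = hamming74_correct(corrupted)
    print(pos, fixed == word)                      # 6 True: error found and corrected
```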

The number of discs in RAID 2 used to store the error-correcting information is roughly equal to the
logarithm of the number of discs holding the data being protected. All disks in RAID 2 work as one disk
whose capacity is equal to the combined capacity of all the disks used to store data.

While RAID 2 is in use it is important to synchronize all disks. Such a solution requires the controller
to keep the disks spinning at the same angular orientation – otherwise the index will not be reached at
the same time, and loss of synchronization renders the drives in the array useless.

Such a requirement is not the only drawback. The time needed to generate the long Hamming codes
may also prove problematic by slowing the whole system down.

The way RAID 2 works can be hard to understand. The need for Hamming code generation and for special disk controllers makes RAID 2 a not very popular solution. Viewed less pragmatically, however, it is quite interesting, mainly because of its modus operandi: it introduces many more complex mechanisms than RAID 0 and RAID 1. As long as everything works well, RAID 2 provides good data security: in case of an HDD failure – no matter whether the disk held data or Hamming code – any part of the array can be reconstructed from the other disks.

While it is interesting and it has its advantages, we have not heard about any commercial
implementations of RAID 2. Solutions based on it were used only in the initial phase of RAID
systems usage – before disks were equipped with their own correction code. Modern HDDs use
various error-correction and optimisation algorithms of their own. That is why the Hamming approach has become less interesting for professional use and is no longer implemented in modern controllers.

 RAID 3-


RAID 3

 This uses byte-level striping, i.e. instead of striping blocks across the disks, it stripes bytes across the disks.
 In the diagram above, B1, B2, B3 are bytes and p1, p2, p3 are parities.
 Uses multiple data disks, and a dedicated disk to store parity.
 The disks have to spin in sync to get to the data.
 Sequential reads and writes will have good performance.
 Random reads and writes will have poor performance.
 This is not commonly used.

RAID 3 – another rare one in practice

RAID 3 works as RAID 0 does – it uses byte-level striping – but it also adds an extra disk to the array. This disk is used to store checksums, and a dedicated processor calculates the parity codes, so we may call it "the parity disk".

In a RAID 3 configuration, data are divided into individual bytes and then saved to the disks. A parity byte is determined for each row of data and saved to the mentioned "parity disk". In case of failure this allows data to be recovered by an appropriate calculation from the remaining bytes and the parity bytes that correspond to them.
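As a simple illustration (not part of the original text), the parity byte for one row is just the bitwise XOR of the bytes at the same offset on each data disk; the values used below are made-up sample data:

    #include <stdio.h>

    int main(void) {
        /* One "row": the byte at the same offset on each of three data disks. */
        unsigned char row[3] = {0x5A, 0x3C, 0xF0};
        unsigned char parity = 0;

        for (int i = 0; i < 3; i++)
            parity ^= row[i];          /* parity byte written to the parity disk */

        printf("parity byte = 0x%02X\n", parity);
        return 0;
    }

If any one of the three data bytes is later lost, XOR-ing the two survivors with this parity byte gives the missing value back.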

Although RAID 3 is rarely used in practice, it is worth pointing out its advantages. The first is its resistance to the failure of one disk in the array; the second is its high read speed. Unfortunately, it also has a couple of drawbacks.

The read speed is more than satisfactory, but the write speed is not – the reason being the necessity of calculating checksums (even hardware RAID controllers cannot solve this problem). The second disadvantage concerns disk failure: when it happens, the whole system works much more slowly. What is more, although RAID 3 is resistant to a breakdown (the failure of one disk in the array), replacing a damaged disk is very costly. A third problem is the disk used for storing the checksums – it is usually the bottleneck in the performance of the entire array.

As can easily be seen, RAID 3 is not a particularly good, reliable or cheap solution. Therefore, as mentioned earlier, its use is rare in practice. Systems based on RAID 3 are mostly intended for implementations where a small number of users access very large files.

 RAID 4-

RAID 4

 This uses block level striping.


 In the above diagram B1, B2, B3 are blocks. p1, p2, p3 are parities.
 Uses multiple data disks, and a dedicated disk to store parity.
 Minimum of 3 disks (2 disks for data and 1 for parity)
 Good random reads, as the data blocks are striped.
 Bad random writes, as for every write, it has to write to the single parity disk.
 It is somewhat similar to RAID 3 and 5, but a little different.
 This is just like RAID 3 in having the dedicated parity disk, but this stripes blocks.

 This is just like RAID 5 in striping the blocks across the data disks, but this has only one parity
disk.
 This is not commonly used.

RAID 4 – smells like RAID 3 and 5

RAID 4 is very similar to RAID 3. The main difference is the way data are shared: they are divided into blocks (16, 32, 64 or 128 kB) and written on the disks – similar to RAID 0. For each row of written data, a parity block is written to the parity disk. In short, this means that RAID 4 does not stripe data at the byte level as RAID 3 does, but at the block level (block-level striping with a dedicated parity disk).

There are also similarities in relation to RAID 5, but it confines all parity data to a single drive.
RAID 4 does not use distributed parity.

RAID 4 requires at least three disks for complete implementation and configuration. What is
more, it also needs hardware support for parity calculations. This makes it possible to recover
data by the appropriate mathematical operations.

If we ask what RAID 4 is for, one particular use stands out: it works very well with really large files, where data are read and written sequentially. Using RAID 4 for small portions of data is not a good idea, because the parity block has to be modified for every I/O operation; continually repeating that operation wastes a great deal of time and slows the whole system down.

RAID 3 and RAID 4 solutions were replaced by RAID 5.

 RAID 5 – striping with parity


RAID 5
The popular kid: RAID 5 is one of the most widely used RAID levels.

Here is where things start to get interesting. Instead of striping just the data, BOTH data and parity are striped across at least 3 drives. This takes away the bottleneck associated with a dedicated parity drive (RAID 3 & RAID 4) by distributing the parity across every drive.

RAID 5 (Striping with parity):

RAID 5 stripes data blocks across multiple disks like RAID 0; however, it also stores parity information (a small amount of data that can accurately describe larger amounts of data) which is used to recover the data in case of disk failure. This level offers both speed (data is accessed from multiple disks) and redundancy, as parity data is stored across all of the disks. If any of the disks in the array fails, data is recreated from the remaining distributed data and parity blocks. The equivalent of one disk's worth of capacity is used for parity – approximately one-third of the available capacity in a three-disk array.


1. Data is first sent from server to the RAID controller


2. The RAID controller will then stripe the data and then spread parity across the array.

This gives RAID 5 one of the fastest read data transaction rates of all the RAID levels. Also, since there is no dedicated parity drive, efficiency is much improved.

On the other hand, this array requires a fairly complex RAID controller design (There's always a
trade off isn't there?).

If one of the drives fails, a replacement drive is needed, but the array is not destroyed; it is simply left vulnerable until the failed drive is replaced and its data is reconstructed. In the event of a two-drive failure, data loss is inevitable.

RAID level 5

RAID 5 is the most common secure RAID level. It requires at least 3 drives but can work with
up to 16. Data blocks are striped across the drives and on one drive a parity checksum of all the
block data is written. The parity data are not written to a fixed drive; they are spread across all the drives. Using the parity data, the computer can recalculate the data
of one of the other data blocks, should those data no longer be available. That means a RAID 5
array can withstand a single drive failure without losing data or access to data. Although RAID 5
can be achieved in software, a hardware controller is recommended. Often extra cache memory
is used on these controllers to improve the write performance.
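To show how that recalculation works, here is a small illustrative sketch (not part of the original text, and using made-up block contents): the parity block is the XOR of the data blocks, and a missing block can be rebuilt by XOR-ing the surviving blocks with the parity block.

    #include <stdio.h>

    #define BLOCK 8   /* tiny block size, just for illustration */

    int main(void) {
        unsigned char d0[BLOCK] = "RAID-5!";   /* block stored on disk 0 */
        unsigned char d1[BLOCK] = "parity ";   /* block stored on disk 1 */
        unsigned char p[BLOCK], rebuilt[BLOCK];

        /* Parity block = XOR of the data blocks (stored on a third disk). */
        for (int i = 0; i < BLOCK; i++)
            p[i] = d0[i] ^ d1[i];

        /* Suppose disk 1 fails: rebuild its block from disk 0 and the parity. */
        for (int i = 0; i < BLOCK; i++)
            rebuilt[i] = d0[i] ^ p[i];

        printf("rebuilt block: %s\n", rebuilt); /* prints the lost "parity " block */
        return 0;
    }

With more data disks the same rule applies: a lost block is the XOR of all the surviving blocks in its stripe, including the parity block.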


Advantages

 Read data transactions are very fast while write data transactions are somewhat slower (due to
the parity that has to be calculated).
 If a drive fails, you still have access to all data, even while the failed drive is being replaced and
the storage controller rebuilds the data on the new drive.

Disadvantages

 Drive failures have an effect on throughput, although this is still acceptable.


 This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced,
restoring the data (the rebuild time) may take a day or longer, depending on the load on the
array and the speed of the controller. If another disk goes bad during that time, data are lost
forever.

Ideal use

RAID 5 is a good all-round system that combines efficient storage with excellent security and
decent performance. It is ideal for file and application servers that have a limited number of data
drives.


 RAID 6 – striping with double parity

RAID 6
RAID 6 is similar to RAID 5; however, it provides increased reliability as it stores an extra parity block. That effectively means that two drives can fail at once without breaking the array.

RAID level 6 – Striping with double parity

RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at
least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives
break down at exactly the same moment are of course very small. However, if a drive in a RAID 5 system dies and is replaced by a new drive, it takes hours or even more than a day to rebuild the replaced drive. If another drive dies during that time, you still lose all of your data. With RAID 6, the RAID array will even survive that second failure.


Advantages

 Like with RAID 5, read data transactions are very fast.


 If two drives fail, you still have access to all data, even while the failed drives are being replaced.
So RAID 6 is more secure than RAID 5.

Disadvantages

 Write data transactions are slower than RAID 5 due to the additional parity data that have to be
calculated. In one report I read the write performance was 20% lower.
 Drive failures have an effect on throughput, although this is still acceptable.
 This is complex technology. Rebuilding an array in which one drive failed can take a long time.

Ideal use

RAID 6 is a good all-round system that combines efficient storage with excellent security and
decent performance. It is preferable over RAID 5 in file and application servers that use many
large drives for data storage.


 RAID 0+1
RAID 0+1
Requiring at least 4 drives, RAID 0+1 is a combination of, firstly, striping (RAID 0) and then mirroring (RAID 1). Hence the term "RAID 01".

1. The data is sent from server to RAID controller


2. RAID controller will then stripe the data
3. Then data set ABCD will be sent to array 1, and mirrored to array 2. Data set EFGH will be sent to
array 3, and mirrored to array 4 and so on.

What happens during drive failure is that if a drive in "array 1" fails, that whole array is lost, due to the nature of RAID 0. However, "array 2" will still function as normal.

However, "array 2" will then be functioning solely as a RAID 0. If any of the drives in "array 2" then fails, the system will go down and you will lose the entire array.

Another disadvantage here is storage capacity is halved, due to mirroring.

RAID 1+0 or 10 – combining mirroring and striping



RAID 1+0
In RAID 1+0, or RAID 10, the drives in the array are mirrored first (RAID 1) and then striped (RAID 0). This provides better fault tolerance than RAID 0+1. At least 4 drives are needed.

1. Data is first transmitted to RAID controller


2. Then, the data is written to "array 1" and is then mirrored to "array 2"
3. Then, "array 1" and "array 2" are striped onto "array 3" and "array 4"

RAID 10 can survive multiple drive failures, as long as the failures occur in different mirrored sets.

The disadvantage here is storage capacity is halved, due to mirroring.

RAID 10 combines the mirroring of RAID 1 with the striping of RAID 0. In other words, it combines the redundancy of RAID 1 with the increased performance of RAID 0. It is best suited to environments where both high performance and security are required.


RAID level 10 – combining RAID 1 & RAID 0

It is possible to combine the advantages (and disadvantages) of RAID 0 and RAID 1 in one
single system. This is a nested or hybrid RAID configuration. It provides security by mirroring
all data on secondary drives while using striping across each set of drives to speed up data
transfers.


Advantages

 If something goes wrong with one of the disks in a RAID 10 configuration, the rebuild time is
very fast since all that is needed is copying all the data from the surviving mirror to a new drive.
This can take as little as 30 minutes for drives of 1 TB.

Disadvantages

 Half of the storage capacity goes to mirroring, so compared to large RAID 5 or RAID 6 arrays,
this is an expensive way to have redundancy.
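As a worked example (not from the original text): four 2 TB drives give 8 TB of raw space. RAID 10 leaves 4 TB usable (50%), whereas RAID 5 on the same four drives would leave about 6 TB usable (75%) and RAID 6 about 4 TB.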

RAID is no substitute for back-up!


All RAID levels except RAID 0 offer protection from a single drive failure. A RAID 6 system
even survives 2 disks dying simultaneously. For complete security, you do still need to back-up
the data from a RAID system.

 That back-up will come in handy if all drives fail simultaneously because of a power spike.
 It is a safeguard when the storage system gets stolen.
 Back-ups can be kept off-site at a different location. This can come in handy if a natural disaster
or fire destroys your workplace.
 The most important reason to back-up multiple generations of data is user error. If someone accidentally deletes some important data and this goes unnoticed for several hours, days or weeks, a good set of back-ups ensures you can still retrieve those files.


INPUT /OUTPUT DEVICES


With aid of clearly labelled diagrams, describe
communication in terms of:
 Operation of parallel port

The Parallel Port


Parallel Port Basics


Notice how the first 25 pins on the Centronics end match up with the pins of the first connector. With
each byte the parallel port sends out, a handshaking signal is also sent so that the printer can latch the
byte.

Parallel ports were originally developed by IBM as a way to connect a printer to your PC. When
IBM was in the process of designing the PC, the company wanted the computer to work with
printers offered by Centronics, a top printer manufacturer at the time. IBM decided not to use
the same port interface on the computer that Centronics used on the printer.

Instead, IBM engineers coupled a 25-pin connector, DB-25, with a 36-pin Centronics connector
to create a special cable to connect the printer to the computer. Other printer manufacturers
ended up adopting the Centronics interface, making this strange hybrid cable an unlikely de facto
standard.
When a PC sends data to a printer or other device using a parallel port, it sends 8 bits of data (1
byte) at a time. These 8 bits are transmitted parallel to each other, as opposed to the same eight
bits being transmitted serially (all in a single row) through a serial port. The standard parallel
port is capable of sending 50 to 100 kilobytes of data per second.

Let's take a closer look at what each pin does when used with a printer:

 Pin 1 carries the strobe signal. It maintains a level of between 2.8 and 5 volts, but drops below
0.5 volts whenever the computer sends a byte of data. This drop in voltage tells the printer that
data is being sent.
 Pins 2 through 9 are used to carry data. To indicate that a bit has a value of 1, a charge of 5 volts
is sent through the correct pin. No charge on a pin indicates a value of 0. This is a simple but
highly effective way to transmit digital information over an analog cable in real-time.
 Pin 10 sends the acknowledge signal from the printer to the computer. Like Pin 1, it maintains a
charge and drops the voltage below 0.5 volts to let the computer know that the data was
received.
 If the printer is busy, it will charge Pin 11. Then, it will drop the voltage below 0.5 volts to let the
computer know it is ready to receive more data.
 The printer lets the computer know if it is out of paper by sending a charge on Pin 12.
 As long as the computer is receiving a charge on Pin 13, it knows that the device is online.

 The computer sends an auto feed signal to the printer through Pin 14 using a 5-volt charge.
 If the printer has any problems, it drops the voltage to less than 0.5 volts on Pin 15 to let the
computer know that there is an error.
 Whenever a new print job is ready, the computer drops the charge on Pin 16 to initialize the
printer.
 Pin 17 is used by the computer to remotely take the printer offline. This is accomplished by
sending a charge to the printer and maintaining it as long as you want the printer offline.
 Pins 18-25 are grounds and are used as a reference signal for the low (below 0.5 volts) charge.
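On x86 Linux the legacy SPP registers that drive these pins can be reached with direct port I/O. The sketch below is only an illustration under the usual assumptions (base address 0x378 for LPT1, root privileges, the glibc <sys/io.h> interface available); it places one byte on the data pins (2–9) and pulses the strobe line (pin 1) so the printer latches it.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/io.h>             /* outb(), inb(), ioperm() - x86 Linux only */

    #define BASE 0x378              /* assumed legacy I/O address of LPT1 */
    #define DATA    (BASE + 0)      /* data register: pins 2-9            */
    #define STATUS  (BASE + 1)      /* status register: busy, ack, paper-out, ... */
    #define CONTROL (BASE + 2)      /* control register: strobe, auto-feed, init, select-in */

    int main(void) {
        if (ioperm(BASE, 3, 1) != 0) {      /* ask the kernel for access to the three ports */
            perror("ioperm (run as root)");
            return 1;
        }

        while (!(inb(STATUS) & 0x80))       /* bit 7 reads 1 when the printer is NOT busy */
            usleep(10);

        outb('A', DATA);                    /* put the byte on pins 2-9 */
        outb(inb(CONTROL) | 0x01, CONTROL); /* assert STROBE (pin 1)    */
        usleep(1);
        outb(inb(CONTROL) & ~0x01, CONTROL);/* release STROBE           */

        printf("byte sent\n");
        return 0;
    }

The busy and strobe bits are inverted by the port hardware, which is why setting bit 0 of the control register drives the strobe pin low, exactly the voltage drop described for pin 1 above.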

SPP/EPP/ECP

The original specification for parallel ports was unidirectional, meaning that data only traveled in
one direction for each pin. With the introduction of the PS/2 in 1987, IBM offered a new
bidirectional parallel port design. This mode is commonly known as Standard Parallel Port
(SPP) and has completely replaced the original design. Bidirectional communication allows each
device to receive data as well as transmit it. Many devices use the eight pins (2 through 9)
originally designated for data. Using the same eight pins limits communication to half-duplex,
meaning that information can only travel in one direction at a time. But pins 18 through 25,
originally just used as grounds, can be used as data pins also. This allows for full-duplex (both
directions at the same time) communication.

Enhanced Parallel Port (EPP) was created by Intel, Xircom and Zenith in 1991. EPP allows for
much more data, 500 kilobytes to 2 megabytes, to be transferred each second. It was targeted
specifically for non-printer devices that would attach to the parallel port, particularly storage
devices that needed the highest possible transfer rate.

Close on the heels of the introduction of EPP, Microsoft and Hewlett Packard jointly announced
a specification called Extended Capabilities Port (ECP) in 1992. While EPP was geared toward
other devices, ECP was designed to provide improved speed and functionality for printers.

In 1994, the IEEE 1284 standard was released. It included the two specifications for parallel port
devices, EPP and ECP. In order for them to work, both the operating system and the device must
support the required specification. This is seldom a problem today since most computers support
SPP, ECP and EPP and will detect which mode needs to be used, depending on the attached
device. If you need to manually select a mode, you can usually do so through the computer's BIOS setup.


 Operation of serial port

The Serial Port


Recommended Standard 232 (RS-232) emerged in the 1960s as a common interface standard for
data communications equipment. This data communications was often an exchange of data
between a mainframe computer and a remote terminal via an analogue telephone line, and a
modem was required at each end of the connection to carry out the necessary signal conversion
(digital-to-analogue and vice-versa).

A standard was needed to ensure reliable communication and to enable equipment produced by
different manufacturers to interoperate. The standard specified signal voltages, signal timing, the
function of each circuit in the interface, and a protocol for the exchange of information. It also
provided the specifications for physical connectors. In the four decades since the standard first
appeared, the Electronic Industries Association has made a number of modifications to the
standard. The most recent version, EIA232F, was introduced in 1997. It renames some of the
signal lines, and introduces some new ones, including a shield conductor.

RS-232 defines the connection between data terminal equipment (DTE) and data circuit-
terminating equipment (DCE). Data terminal equipment is any end user device, such as a
computer, that can be used to send data over a network. Data circuit-terminating equipment is a
device that provides the interface between the DTE and the network, and is often a modem or
terminal adapter.

An RS-232-compatible interface was commonly used for computer serial communication (COM)
ports, which were originally intended for connecting the computer to a modem. While the
original RS232 standard specified 25-pin connections, many of the pins were not used in
practice, and a 9-pin connection was implemented on most computers.

Although the RS-232 port has now largely been superseded by USB for connecting peripheral
devices to personal computers, it is still often provided to enable the connection of legacy
devices. The illustration below shows the 25-pin and 9-pin DTE-to-DCE connections that would
result if the EIA232 standard were strictly followed. The most commonly used signals are shown
in red.


A 25-pin DTE-to-DCE connection

A 9-pin DTE-to-DCE connection

RS-232 allows data rates of up to 20 kbps, over cables of up to 15 metres. Control circuits are
used to manage the connection between the DTE and DCE, and a special hardware circuit called
a UART (Universal Asynchronous Receive/Transmit) or a USRT (Universal Synchronous
Receive/Transmit) controls the serial port in a computer.
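As an illustrative sketch (assumptions: a POSIX system where the first serial port appears as /dev/ttyS0, and the common cfmakeraw() helper is available, as on glibc/BSD), the code below opens the port through the UART driver and configures it for 9600 baud, 8 data bits, no parity, 1 stop bit before exchanging a few bytes with the attached DCE.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>

    int main(void) {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* assumed device name */
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); return 1; }

        cfmakeraw(&tio);                    /* raw mode: no line editing or translation */
        cfsetispeed(&tio, B9600);           /* 9600 baud in  */
        cfsetospeed(&tio, B9600);           /* 9600 baud out */
        tio.c_cflag &= ~(PARENB | CSTOPB | CSIZE);  /* no parity, 1 stop bit */
        tio.c_cflag |= CS8 | CLOCAL | CREAD;        /* 8 data bits, enable receiver */

        if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); return 1; }

        const char msg[] = "AT\r";          /* e.g. talk to a modem (DCE) */
        write(fd, msg, sizeof msg - 1);

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);      /* read whatever the DCE answers */
        if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }

The UART and its driver take care of the start/stop bits and the RS-232 voltage levels; the program only deals with whole bytes.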

Each data or control circuit operates in only one direction, and since the Transmit Data (TxD)
and Receive Data (RxD) are separate circuits, the interface can operate in full duplex mode. The
EIA232 standard uses negative, bipolar logic in which a negative voltage is used to represent a

logic '1', and a positive voltage represents a logic '0'. A typical DTE-DCE interface setup is
shown below.

A typical DTE-DCE interface

Connectors and Signals

RS-232 devices are either DTE or DCE devices. Computer terminals are usually equipped with
male connectors with DTE pin functions, while modems have female connectors with DCE pin
functions. Although the standard specifies twenty different signal connections, most devices use
only a few of these signals, enabling the smaller 9-pin (DB9) connectors to be used. The more
commonly used signals are shown in the table below.

Commonly-used RS-232 signals

Signal   Description
TxD      Transmitted Data - data transmitted from the DTE to the DCE
RxD      Received Data - data transmitted from the DCE to the DTE
RTS      Request To Send - set to 0 (asserted) by the DTE to prepare the DCE to receive data
CTS      Clear To Send - set to 0 (asserted) by the DCE to acknowledge RTS and allow the DTE to transmit
DTR      Data Terminal Ready - set to 0 (asserted) by the DTE to indicate that it is ready to be connected
DSR      Data Set Ready - set to 0 (asserted) by the DCE to indicate an active connection
DCD      Data Carrier Detect - set to 0 (asserted) by the DCE when a connection has been established with a remote device
RI       Ring Indicator - set to 0 (asserted) by the DCE when it detects a ring signal from the telephone line

In a standard connection between a DCE device and a DTE device, the cable used will connect
the same pin numbers in each connector (a "straight-through" cable). The standard pin
assignments for DB25 and DB9 connectors are given in the following tables.

DB25 Pin Assignments


Pin DTE (male connector) DCE (female connector)
1 Shield Shield
2 Transmitted Data Received Data
3 Received Data Transmitted Data
4 Request to Send Clear to Send
5 Clear to send Request to send
6 DCE Ready DCE Ready
7 Signal Ground Signal Ground
8 Received Line Signal Detect Received Line Signal Detect
9 Reserved for testing Reserved for testing
10 Reserved for testing Reserved for testing
11 Unassigned Unassigned
12 Second Received Line Signal Detect Second Received Line Signal Detect
13 Second Clear to Send Second Request to send
14 Second Transmitted Data Second Received Data
15 Transmitter Signal Timing (DCE Source) Transmitter Signal Timing (DCE Source)
16 Second Received Data Second Transmitted Data
17 Receiver Signal Timing (DCE Source) Receiver Signal Timing (DCE Source)
18 Local Loopback Local Loopback
19 Second Request to Send Second Clear to Send
20 DTE Ready DTE Ready
21 Remote Loopback Remote Loopback
22 Ring Indicator Ring Indicator
23 Data Signal Rate Selector Data Signal Rate Selector
24 Transmitter Signal Timing (DTE Source) Transmitter Signal Timing (DTE Source)
25 Test Mode Test Mode

25-pin connector:

1. Not Used
2. Transmit Data - Computer sends information to the modem.
3. Receive Data - Computer receives information sent from the modem.
4. Request To Send - Computer asks the modem if it can send information.
5. Clear To Send - Modem tells the computer that it can send information.
6. Data Set Ready - Modem tells the computer that it is ready to talk.
7. Signal Ground - Pin is grounded.
8. Received Line Signal Detector - Determines if the modem is connected to a working
phone line.
9. Not Used: Transmit Current Loop Return (+)
10. Not Used
11. Not Used: Transmit Current Loop Data (-)
12. Not Used
13. Not Used
14. Not Used
15. Not Used
16. Not Used
17. Not Used
18. Not Used: Receive Current Loop Data (+)
19. Not Used
20. Data Terminal Ready - Computer tells the modem that it is ready to talk.
21. Not Used
22. Ring Indicator - Once a call has been placed, computer acknowledges signal (sent from
modem) that a ring is detected.
23. Not Used
24. Not Used
25. Not Used: Receive Current Loop Return (-)

DB9 Pin Assignments
Pin DTE (male connector) DCE (female connector)
1 Received Line Signal Detect Received Line Signal Detect
2 Received Data Transmitted Data
3 Transmitted Data Received Data
4 DTE Ready DTE Ready
5 Signal Ground Signal Ground
6 DCE Ready DCE Ready
7 Request to Send Clear to Send
8 Clear to Send Request to Send
9 Ring Indicator Ring Indicator

1. Carrier Detect - Determines if the modem is connected to a working phone line.


2. Receive Data - Computer receives information sent from the modem.
3. Transmit Data - Computer sends information to the modem.
4. Data Terminal Ready - Computer tells the modem that it is ready to talk.
5. Signal Ground - Pin is grounded.
6. Data Set Ready - Modem tells the computer that it is ready to talk.
7. Request To Send - Computer asks the modem if it can send information.
8. Clear To Send - Modem tells the computer that it can send information.
9. Ring Indicator - Once a call has been placed, computer acknowledges signal (sent from
modem) that a ring is detected.

A connection between two DTE devices (e.g. between two computers) requires a null modem
that acts as a DCE between the two devices. This is essentially a crossover cable in which a
number of signal lines are cross connected (i.e. TxD to RxD, DTR to DSR, and RTS to CTS). A
9-pin null modem connection is shown below.

A 9-pin null modem connection

A loopback connector can be used for testing, and is simply a DB9 female connector without a
cable, internally wired to reroute signals back to the sender. The connector is attached to a DTE
device such as a personal computer.

A DB9 loopback connector

Types of devices used with the communication modes listed above

 Printers
 Mouse

With aid of clearly labelled diagrams, explain the following modes of
communication:
 Simplex
 Half Duplex
 Full Duplex
Transmission mode refers to the way data is transferred between two devices; it is also known as the communication mode. Buses and networks are designed to allow communication to occur between individual devices that are interconnected. There are three types of transmission mode:

 SIMPLEX MODE

 HALF-DUPLEX MODE

 FULL-DUPLEX MODE

Simplex, half duplex and full duplex are three kinds of communication channels in
telecommunications and computer networking. These communication channels provide pathways
to convey information. A communication channel can be either a physical transmission medium
or a logical connection over a multiplexed medium. The physical transmission medium refers to
the material substance that can propagate energy waves, such as wires in data communication. And
the logical connection usually refers to the circuit switched connection or packet-mode virtual
circuit connection, such as a radio channel.

1) SIMPLEX

A simplex communication channel only sends information in one direction. For example, a radio
station usually sends signals to the audience but never receives signals from them, thus a radio
station is a simplex channel. It is also common to use simplex channel in fiber optic
communication. One strand is used for transmitting signals and the other is for receiving signals.
But this might not be obvious because the pair of fiber strands are often combined to one cable.
The good part of simplex mode is that its entire bandwidth can be used during the transmission.

In Simplex mode, the communication is unidirectional, as on a one-way street. Only one of the
two devices on a link can transmit, the other can only receive. The simplex mode can use the
entire capacity of the channel to send data in one direction.
Example: Keyboard and traditional monitors. The keyboard can only introduce input, the
monitor can only give the output.

2) HALF DUPLEX

In half duplex mode, data can be transmitted in both directions on a signal carrier, but not at the same time. At any given moment it is actually a simplex channel whose transmission direction can be
switched. Walkie-talkie is a typical half duplex device. It has a “push-to-talk” button which can be
used to turn on the transmitter but turn off the receiver. Therefore, once you push the button, you
cannot hear the person you are talking to but your partner can hear you. An advantage of half-
duplex is that the single track is cheaper than the double tracks.


In half-duplex mode, each station can both transmit and receive, but not at the same time. When
one device is sending, the other can only receive, and vice versa. The half-duplex mode is used
in cases where there is no need for communication in both directions at the same time. The entire
capacity of the channel can be utilized for each direction.
Example: Walkie- talkie in which message is sent one at a time and messages are sent in both
the directions.

3) FULL DUPLEX

A full duplex communication channel is able to transmit data in both directions on a signal carrier
at the same time. It is constructed as a pair of simplex links that allows bidirectional simultaneous
transmission. Take telephone as an example, people at both ends of a call can speak and be heard
by each other at the same time because there are two communication paths between them. Thus,
using the full duplex mode can greatly increase the efficiency of communication.

In full-duplex mode, both stations can transmit and receive simultaneously. In full duplex mode,
signals going in one direction share the capacity of the link with signals going in other direction,
this sharing can occur in two ways:

 Either the link must contain two physically separate transmission paths, one for sending and other for
receiving.
 Or the capacity is divided between signals travelling in both directions.

Full-duplex mode is used when communication in both directions is required all the time. The capacity of the channel, however, must be divided between the two directions.
Example: Telephone Network in which there is communication between two persons by a
telephone line, through which both can talk and listen at the same time.
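The same distinctions show up in the sockets API. The sketch below is only an illustration (a local UNIX-domain socket pair standing in for a physical channel): the connected pair is full duplex – both ends can send and receive at once – and shutting down one direction leaves a simplex link.

    #include <stdio.h>
    #include <signal.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        char buf[16];

        signal(SIGPIPE, SIG_IGN);          /* a write on a closed direction should just return an error */

        /* A connected stream socket pair is a full-duplex channel. */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) { perror("socketpair"); return 1; }

        write(sv[0], "ping", 4);           /* direction A -> B                 */
        write(sv[1], "pong", 4);           /* direction B -> A, at the same time */
        read(sv[1], buf, sizeof buf);      /* B receives "ping"                */
        read(sv[0], buf, sizeof buf);      /* A receives "pong"                */

        /* Shutting down one direction turns the link into a simplex channel:
           end A can now only receive; trying to send fails. */
        shutdown(sv[0], SHUT_WR);
        if (write(sv[0], "x", 1) < 0)
            perror("write after SHUT_WR (expected failure)");

        close(sv[0]);
        close(sv[1]);
        return 0;
    }

A half-duplex channel corresponds to a link that is physically capable of both directions but where the two ends take turns, as the walkie-talkie example above describes.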


With aid of clearly labelled diagrams, describe the following


video displays:
 CRT
Be guided by
1) Physical construction
2) Operation
3) Scanning methods
4) Types of displays used (MDA, SVGA, & VGA )
5) Windows, accelerators and local bus

Write brief notes on the following display units in terms of physical construction and
operation

CATHODE RAY TUBE (CRT)

 A CRT is an electronic tube designed to display electrical data

Consists of 4 major components which are:


 1. Electron gun - used for producing a stream of electrons. The electron gun assembly consists of an indirectly heated cathode (K), a control grid (G), and an accelerating anode
 2. Focusing and accelerating anodes - used for producing a narrow and sharply focused beam of electrons
 3. Horizontal and vertical deflection plates - used for controlling the path of the beam
 4. Evacuated glass envelope - has a phosphorescent screen which produces a bright spot when struck by a high-velocity electron beam

Scanning methods used in CRTs

 Raster scanning
 Interlaced scanning

Operation of CRT
 Works by moving an electron beam back and forth across the back of the screen. Each time
the beam makes a pass across the screen, it lights up phosphor dots on the inside of the
glass tube, thereby illuminating the active portions of the screen. By drawing many such
lines from the top to the bottom of the screen, it creates an entire screenful of images.

Working of CRT

 The heater element is energized by alternating current to obtain a high emission of electrons from the cathode. The control grid is biased negative with respect to the cathode and controls the density of the electron beam. To focus the electron beam on the screen, a focusing anode is used; it operates at a potential of about twelve hundred volts (1200 V), while the accelerating anode operates at about 2000 V to accelerate the electron beam.

 Two pairs of deflection plates are provided in the CRT. These deflection plates are mounted at right angles to each other to provide electron-beam deflection along the vertical and horizontal axes of the screen. The screen consists of glass coated with a fluorescent material such as zinc silicate, which is a semi-transparent phosphor substance. When the high-velocity electron beam strikes the phosphorescent screen, light is emitted from it. The property of a phosphor to emit light when its atoms are excited is called fluorescence.

Applications of CRT

 In cathode ray oscilloscope


 As a display device in radar
 In televisions
 In computer monitors


CRT Screen, Principle of Working


When the two metal plates are connected to a high voltage source, the negatively charged plate
called cathode, emits an invisible ray. The cathode ray is drawn to the positively charged plate,
called the anode, where it passes through a hole and continues travelling to the other end of the
tube. When the ray strikes the specially coated surface, the cathode ray produces a strong
fluorescence, or bright light. When an electric field is applied across the cathode ray tube, the
cathode ray is attracted by the plate bearing positive charges. Therefore a cathode ray must consist
of negatively charged particles. A moving charged body behaves like a tiny magnet, and it can interact with an external magnetic field. The electrons are therefore deflected by the magnetic field, and when the external magnetic field is reversed, the beam of electrons is deflected in the opposite direction.

In a cathode ray tube, the cathode is a heated filament placed in a vacuum. The ray is a stream
of electrons that naturally pour off a heated cathode into the vacuum. Electrons are negative. The
anode is positive, so it attracts the electrons pouring off the cathode. In a TV’s cathode ray tube,
the stream of electrons is focused by a focusing anode into a tight beam and then accelerated by
an accelerating anode. This tight, high-speed beam of electrons flies through the vacuum in the

tube and hits the flat screen at the other end of the tube. This screen is coated with phosphor, which
glows when struck by the beam.

Operation of CRT

The Cathode Ray Tube (CRT) is a computer display screen used to display output as a standard composite video signal. The working of a CRT depends on the movement of an electron beam which moves back and forth across the back of the screen. The source of the electron beam is the electron gun, located in the narrow, cylindrical neck at the extreme rear of the CRT, which produces a stream of electrons through thermionic emission. Usually, a CRT has a fluorescent screen to display the output signal. A simple CRT is shown below.

Cathode Ray Tube

The operation of a CRT monitor is basically very simple. A cathode ray tube consists of one or more electron guns, possibly internal electrostatic deflection plates, and a phosphor target. A colour CRT has three electron beams – one for each primary colour (Red, Green, and Blue) – as shown in the figure. The electron beam produces a tiny, bright visible spot when it strikes the phosphor-coated screen. In every monitor the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image (raster) is displayed by scanning the electron beam across the screen. Because the phosphor targets begin to fade after a short time, the image needs to be refreshed continuously; a refresh rate of around 50 Hz or higher is used to eliminate flicker. The three beams together produce the image in the three primary colours.


Main parts of the cathode ray tube are cathode, control grid, deflecting plates and screen.

Cathode

The heater keeps the cathode at a higher temperature and electrons flow from the heated cathode
towards the surface of the cathode. The accelerating anode has a small hole at its center and is
maintained at a high potential, which is of positive polarity. The order of this voltage is 1 to 20
kV, relative to the cathode. This potential difference creates an electric field directed from right to
left in the region between the accelerating anode and the cathode. Electrons pass through the hole
in the anode travel with constant horizontal velocity from the anode to the fluorescent screen. The
electrons strike the screen area and it glows brightly.

The Control Grid

The control grid regulates the brightness of the spot on the screen by controlling the number of electrons that reach the anode. The focusing anode ensures that electrons leaving the cathode in slightly different directions are focused down to a narrow beam and all arrive at the same spot on the screen. The whole assembly of cathode, control grid, focusing anode, and accelerating electrode is called the electron gun.

Deflecting Plates

Two pairs of deflecting plates steer the beam of electrons. An electric field between the first pair of plates deflects the electrons horizontally, and an electric field between the second pair deflects them vertically. When no deflecting fields are present, the electrons travel in a straight line from the hole in the accelerating anode to the centre of the screen, where they produce a bright spot.

Screen

The screen may be circular or rectangular and is coated with a special type of fluorescent material. The fluorescent material absorbs the beam's energy and re-emits light in the form of photons when the electron beam hits the screen. When this happens, some electrons bounce back, just as a cricket ball bounces off a wall; these are called secondary electrons. They must be absorbed and returned to the cathode, otherwise they accumulate near the screen and produce a space charge (an electron cloud). To avoid this, an aquadag coating is applied to the inside of the funnel part of the CRT.

Advantages of CRT

1. CRT’s are less expensive than other display technologies.


2. They operate at any resolution, geometry and aspect ratio without decreasing the image quality.
3. CRTs produce the very best color and gray-scale for all professional calibrations.
4. Excellent viewing angle.
5. It maintains good brightness and gives long life service.

Features of CRT

The use of CRT technology has quickly declined since the introduction of LCDs but they are still
unbeatable in certain ways. CRT monitors are widely used in a number of electrical devices such
as computer screens, television sets, radar screens, and oscilloscopes used for scientific and
medical purposes.


A CRT works by sweeping an electron beam of varying intensity across a phosphor-coated


screen. The basic components of the CRT are described below:

 Electron Gun -- The electron gun, which consists of the cathode, choke, accelerator, and
lensing region, is the device which generates and focuses the electron beam used to
project an image on the phosphor screen.

 Cathode -- The cathode is a grounded metal plate that is super-heated so that electrons
are literally jumping off the surface.

 Accelerator Plate -- This metal ring is held at a large, positive voltage and is used to
"grab" loose electrons from the cathode and hurl them forwards into the lensing chamber
(towards the right in the diagram).

 Choke -- This metal ring is located between the cathode and accelerator plate and held at
a slightly negative charge. The electric fields from the choke help collimate the electrons; they can also be used to quickly modulate the number of electrons in the beam
and, thus, the brightness or intensity of the picture.

 Lensing Region -- The lensing region consists of two adjacent metal tubes that are
located just after the accelerator. The two tubes are held at different potentials, causing an
electrostatic lens to form at their junction. The electrons that have jumped off the
cathode begin to focus. Ideally, the focal point will occur at the point when the beam
strikes the display, thereby providing pinpoint resolution on the screen. The last metal

tube of the lensing chamber is held at the highest potential of all the electron gun
components so that exiting electrons have a very high forward velocity.

 Steering Magnets -- These two sets of electromagnets are fed the retrace signals that
synchronize the drawing of the picture on the screen. The flux between each pair of
magnets will bend the electron beam, one in the horizontal direction and the other in the
vertical direction.

 Phosphor Screen -- If all works well, a pinpoint electron beam strikes the screen with
the appropriate intensity and causes the phosphor to fluoresce. The intensity modulation
is synchronized with the horizontal and vertical retraces so that one frame of video is
displayed. The process repeats itself rapidly (24 frames/second for analog television) so
that the moving scene appears seamless.


How It Works: CRT Monitor

Standard glass-tube televisions and your CRT monitor work quite similarly. A glass cathode-ray
tube (1), which contains a vacuum, has three electron guns (2) at its narrow end, each containing
an anode and cathode assembly. As you may recall from high-school science, the cathodes emit
electrons; the anodes draw the electrons away from the cathodes, focusing and accelerating them
into electron beams (3). The deflection yoke (4), around the tube base, precisely manipulates the
three beams via electromagnetic force, working with the CRT's circuitry to sweep them across
the screen in precise, horizontal lines. Where a beam hits the screen, it causes a red, green, or
blue (RGB) phosphor dot (5) to glow; the screen's inner surface is coated with these colored
phosphors. (The beams, though colorized in the illustration, are actually invisible.)

CRTs come in two main varieties: shadow mask and aperture grill. In shadow-mask models, the
RGB phosphors on the inside of the screen are arranged as a staggered pattern of dots (see the
inset); in the latter, they're not dots but repeating vertical RGB stripes. In a shadow-mask CRT,
when the monitor receives commands from your PC's graphics adapter, the electron guns fire
their three beams, in concert, through tiny holes in the shadow mask (6), a metal screen just
behind the phosphor-coated display glass. (Aperture-grill monitors, popularly known as
Trinitrons or Diamondtrons, work similarly, but vertical wires, not a mask, funnel the beams.)
Their channeled beams illuminate a trio of phosphor dots (a triad (7)) lining the inside of the
glass. A pixel (8)—the smallest image element you can see—comprises one or more triads; how
many depends on the resolution you specify. The lower the resolution, the more triads that are
assigned to each pixel.

The electron guns blaze across the screen, row by row, illuminating phosphors in their wake.
Varying the beams' intensity strengthens or weakens the glow from a given phosphor dot; by
careful manipulation of every one, the triads and pixels, seen by the eye as single units, create the
illusion of different-color dots.

Phosphors don't glow for long, though. Once the guns have scanned the whole screen, they
repeat the process—typically, 60 to 80 times a second. (This number is what is known as the
refresh rate.) To comprehend the staggering scale of the task: A CRT running at 1,280x1,024 at
a 75Hz refresh rate illuminates and re-illuminates nearly 100 million pixels per second.
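(To check the arithmetic: 1,280 × 1,024 pixels per frame × 75 frames per second = 98,304,000, i.e. roughly 100 million pixel illuminations per second.)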

CATHODE RAY TUBE

Explain the working of the Cathode Ray Tube. Also explain the CRT with its diagram and its components.

Cathode Ray Tube (CRT):

The cathode ray tube (CRT), or video screen, is the most widely recognised output device on PCs today. A beam of electrons (cathode rays), produced by an electron gun, passes through focusing and deflection systems that direct the beam towards a specified position on the phosphor-coated screen. The phosphor then produces a small spot of light at each position reached by the electron beam. Since the light produced by the phosphor fades quickly, some method is required for maintaining the screen image. One approach to keeping the phosphor glowing is to redraw the picture repeatedly by directing the electron beam back over the same points again and again. This type of display is called a refresh CRT.

What is Cathode Ray Tube? Explain the working of CRT with its components?

Components of CRT:-

(i) Electron Gun (EG) (ii) Focusing System (iii) Deflection System
(iv) Phosphor Coated Screen

Electron Gun: –

EG contains two essential parts:

1. A Heated Metal Cathode


2. A Control Grid.

Heat is supplied to the cathode by directing a current through a coil of wire, called the filament, inside the cylindrical cathode structure. This causes electrons to be "boiled off" the hot cathode surface. In the vacuum inside the CRT envelope, the free, negatively charged electrons are then accelerated towards the phosphor coating by a high positive voltage. The accelerating voltage can be generated with a positively charged metal coating on the inside of the CRT envelope near the phosphor screen, or an accelerating anode can be used. Sometimes the electron gun is built so that the accelerating anode and the focusing system are contained within the same unit. The intensity of the electron beam is controlled by setting voltage levels on the control grid, which is a metal cylinder that fits over the cathode. A high negative voltage applied to the control grid will cut off the beam by repelling electrons and preventing them from passing through the small hole at the end of the control grid structure. A smaller negative voltage on the control grid simply decreases the number of electrons passing through. Since the amount of light emitted by the phosphor coating depends on the number of electrons striking the screen, the brightness of a display is controlled by varying the voltage on the control grid. The intensity level for individual screen positions is specified with graphics software commands. The purpose of the electron gun in the CRT is to produce an electron beam with the following properties: (a) it must be precisely focused so that it produces a sharp spot of light where it strikes the phosphor; (b) it must have high velocity, since the brightness of the image depends on the velocity of the electron beam; (c) means must be provided to control the flow of electrons so that the intensity of the trace of the beam can be controlled.

Focusing System: –

The focusing system in a CRT is needed to force the electron beam to converge to a small spot as it strikes the phosphor. Otherwise, the electron beam would spread out as it approaches the screen. Focusing is accomplished with either electric or magnetic fields. With electrostatic focusing, the electron beam passes through a positively charged metal cylinder that forms an electrostatic lens. The action of the electrostatic lens focuses the electron beam at the centre of the screen, in the same way that an optical lens focuses a beam of light at a particular focal distance. Similar lens focusing effects can be achieved with a magnetic field set up by a coil mounted around the outside of the CRT envelope. Magnetic lens focusing produces the smallest spot size on the screen and is used in special-purpose devices.


Deflection System: –

The deflection system is used to control the direction of the electron beam. As with focusing, deflection of the electron beam can be controlled either by electric or magnetic fields. CRTs are now commonly constructed with magnetic deflection coils mounted on the outside of the CRT envelope, as shown. Two pairs of coils are used, with the coils in each pair mounted on opposite sides of the neck of the CRT envelope. One pair is mounted on the top and bottom of the neck, and the other pair is mounted on the two sides of the neck. The magnetic field produced by each pair of coils results in a transverse deflection force that is perpendicular both to the direction of the magnetic field and to the direction of travel of the electron beam. Horizontal deflection is accomplished with one pair of coils and vertical deflection by the other pair. The correct deflection amounts are obtained by adjusting the current through the coils. When electrostatic deflection is used, two pairs of parallel plates are mounted inside the CRT envelope: one pair is mounted horizontally to control the vertical deflection, and the other pair is mounted vertically to control the horizontal deflection.


Phosphor-coated Screen: –

At the far end of the CRT is the phosphor-coated screen, which has the interesting property that makes the whole system work. Phosphors glow when they are struck by a high-energy electron beam, and they continue to glow for a short period of time after being exposed to the beam. The glow given off after the electron beam is removed is known as phosphorescence, and its duration is known as the phosphor persistence. A lower-persistence phosphor requires a higher refresh rate to maintain a picture on the screen without flicker; a higher-persistence phosphor can maintain a picture with a lower refresh rate. A phosphor with low persistence is useful for animation, while a phosphor with high persistence is useful for highly complex, static displays.


The CRT Display

The origins of the cathode-ray tube, or CRT, stretch back to the late 19th century, and the
invention of the electric light bulb by Thomas A. Edison in 1879. In later experiments with this
device, it was discovered that a current would pass from the heated filament to a separate plate
within the bulb, if that plate was at a positive potential with respect to the filament (the
thermionic emission of electrons, originally referred to as the “Edison effect”). Two other
discoveries which soon followed paved the way for the CRT as the display we know today: first,
in 1897, the German physicist Karl Braun invented what he called an “indicator tube”, based on
the recent discovery that certain materials could be made to emit light when struck by the stream
of electrons in the Edison-effect bulbs. (Actually, at the time, the electron was not yet known as a
unique particle; Braun and others referred to the mechanism that produced this effect as “cathode
rays”, thus originating the name for the final display device.) Later, in 1903, the American
inventor Lee Deforest showed that the “cathode rays” (the electrons being emitted in these
devices) could be controlled by placing a conducting grid between the emitting electrode (the
cathode) and the receiving electrode (the anode), and varying the potential on this grid.
Combined, these meant that not only could a beam of electrons- the “cathode ray” – be made to
produce light, but the intensity of that beam could be varied by a controlling voltage. (DeForest
had actually invented the first electronic component capable of the amplification of signals –
what in America is called the “vacuum tube”, but which in the UK took the more descriptive
name “thermionic valve”.) The modern cathode-ray tube is in essence a highly modified vacuum
tube or valve, and is one of the few remaining practical applications of vacuum-tube technology.

A cross-sectional schematic view of a typical monochrome CRT is shown in Figure 4-2. The
cathode, its filament, and the controlling grids and other structures form the electron gun, located
in the neck of the tube. The plate, or anode, of the tube is in this case the screen itself, covering
the entire intended viewing area (and generally somewhat beyond). This is the first obvious
modification from the more typical vacuum-tube structure; the anode is located a considerable
distance from the cathode, and is greatly enlarged. This screen structure comprises both the
electrical anode of the tube, and the light-emitting surface. Typically, a layer of light-emitting
chemicals (the phosphor) is placed on the inner surface of the faceplate glass, and is then covered
by a thin, vacuum-deposited metal layer, most often aluminum. The aluminum layer both serves
as an electrical contact, and protects the phosphor from direct bombardment by the electron
beam. As the light emitted by the phosphor is proportional to the energy transferred from the
incoming beam, this stream of electrons must be accelerated to a sufficiently high level. For this
reason, the screen is maintained at a very high positive potential with respect to the cathode;
usually several thousand volts as an absolute minimum for the smallest tubes, and up to 50 kV or
more for the larger sizes. (The screen itself is not adequate to perform the task of accelerating the
electrons of the beam; there is almost always an electrode at the anode potential as part of the
electron gun structure itself, connected to the screen proper by a layer of conductive material on
the inside of the CRT. Therefore, the screen at the face of the tube is more properly referred to as
the second anode.)

Figure 4-2 Basic monochrome CRT. In this, the oldest of current electronic display devices,
light is produced when a beam of electrons, accelerated by the high potential on the front
surface of the tube, strikes a chemical layer (the phosphor). The beam is directed across the
screen of the CRT by varying magnetic fields produced by the deflection coils.


LIQUID CRYSTAL DISPLAY (LCD)


Be guided by
1) Physical construction
2) Operation
3) Scanning methods
4) Types of displays used (MDA, SVGA, & VGA )
5) Windows, accelerators and local bus

Write brief notes on the following display units in terms of physical construction and
operation of LCD

 Liquid Crystal Display


 Liquid crystal (LC) is an organic substance that has both a liquid form and a crystal
molecular structure. In this liquid, the rod-shaped molecules are normally in a
parallel array, and an electric field can be used to control the molecules
 A polarizer is applied to the front and an analyzer/reflector is applied to the back of
the cell. When randomly polarized light passes through the front polarizer it
becomes linearly polarized. It then passes through the front glass and is rotated by
the liquid crystal molecules and passes through the rear glass
 Utilize two sheets of polarizing material with a liquid crystal solution between
them.
 An electric current passes through the liquid causing the crystals to align so that
light cannot pass through them.
 Each crystal, therefore, is like a shutter, either allowing light to pass through or
blocking the light thereby creating an image on the screen


Construction and Working Principle of LCD Display


What is a LCD (Liquid Crystal Display)?

A liquid crystal display, or LCD, draws its definition from its name itself: it uses a material that combines properties of two states of matter, the solid and the liquid. An LCD uses a liquid crystal to produce a visible image. Liquid crystal displays are super-thin display screens that are generally used in laptop computer screens, TVs, cell phones and portable video games. LCD technology allows displays to be much thinner than cathode ray tube (CRT) technology.

Liquid crystal display is composed of several layers which include two polarized panel filters and
electrodes. LCD technology is used for displaying the image in notebook or some other electronic
devices like mini computers. Light is projected from a lens on a layer of liquid crystal. This
combination of colored light with the grayscale image of the crystal (formed as electric current
flows through the crystal) forms the colored image. This image is then displayed on the screen.

An LCD

An LCD is made up of either an active matrix display grid or a passive display grid. Most smartphones with LCD technology use an active matrix display, but some older displays still make use of passive display grid designs. Many electronic devices depend mainly on liquid crystal display technology for their display. The liquid crystal has the unique advantage of consuming less power than LED or cathode ray tube displays.

A liquid crystal display screen works on the principle of blocking light rather than emitting light. LCDs require a backlight because they do not emit light themselves. Devices built around LCDs are steadily replacing those that use cathode ray tubes, since a cathode ray tube draws more power than an LCD and is also heavier and bulkier.

How LCDs are constructed?

LCD Layered Diagram

Simple facts that should be considered while making an LCD:

1. The basic structure of the LCD should be controllable by changing the applied current.
2. We must use polarized light.
3. The liquid crystal should be able either to transmit the polarized light or to change (rotate) it.

As mentioned above, two pieces of polarized glass filter are needed to make the liquid crystal cell. The side of the glass that does not have a polarizing film on its surface must be rubbed with a special polymer, which creates microscopic grooves on the surface of the polarized glass filter. The grooves must run in the same direction as the polarizing film. A coating of nematic-phase liquid crystal is then added to one of the polarized glass filters. The microscopic grooves cause the first layer of molecules to align with the filter orientation. A second piece of glass with its polarizing film is then added, oriented at a right angle to the first piece. The first filter naturally polarizes the light as it strikes it at the starting stage.

Thus the light travels through each layer and guided on the next with the help of molecule. The
molecule tends to change its plane of vibration of the light in order to match their angle. When
the light reaches to the far end of the liquid crystal substance, it vibrates at the same angle as that
of the final layer of the molecule vibrates. The light is allowed to enter into the device only if the
second layer of the polarized glass matches with the final layer of the molecule.

How LCDs Work?

The principle behind the LCD is that when an electric current is applied to the liquid crystal molecules, the molecules tend to untwist. This changes the angle of the light passing through them so that it no longer matches the angle of the top polarizing filter. As a result, very little light is allowed to pass through the polarized glass in that particular area of the LCD, and that area becomes dark compared to the others. The LCD therefore works on the principle of blocking light. When constructing an LCD, a reflecting mirror is arranged at the back. An
electrode plane made of indium-tin oxide is kept on top, and a polarized glass with a polarizing film is added on the bottom of the device. The complete region of the LCD has to be enclosed by a common electrode, and above it sits the liquid crystal material.

Next comes the second piece of glass, with an electrode in the shape of a rectangle on the bottom and, on top, another polarizing film. Note that the two pieces are kept at right angles to each other. When there is no current, light passing through the front of the LCD is reflected by the mirror and bounced back. When the electrode is connected to a battery, the current causes the liquid crystals between the common-plane electrode and the rectangle-shaped electrode to untwist, so the light is blocked from passing through. That particular rectangular area then appears dark.
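
The behaviour described above can be summarised in a very simplified sketch in Python. The 90-degree twist, the crossed polarizers and the sine-squared transmission law are idealising assumptions for illustration, not details given in the text.

import math

def lcd_pixel_brightness(voltage_on):
    # Idealized twisted-nematic pixel between crossed polarizers.
    # No voltage: the liquid crystal twists the polarization by 90 degrees,
    # so light passes the second (crossed) polarizer and the area is bright.
    # Voltage applied: the molecules untwist, the polarization is unchanged,
    # the crossed polarizer blocks the light and the area looks dark.
    twist_degrees = 0 if voltage_on else 90
    return math.sin(math.radians(twist_degrees)) ** 2

print(lcd_pixel_brightness(voltage_on=False))   # 1.0 -> bright area
print(lcd_pixel_brightness(voltage_on=True))    # 0.0 -> dark area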

Advantages of LCDs:

 LCDs consume less power than CRT and LED displays
 An LCD needs only a few microwatts per display segment, compared with milliwatts for LEDs
 LCDs are low in cost
 They provide excellent contrast
 LCDs are thinner and lighter than cathode ray tubes and LED displays

Disadvantages of LCDs:

 They require an additional light source
 The operating temperature range is limited
 Low reliability
 Speed is very low
 LCDs need an AC drive

Applications of Liquid Crystal Display

Liquid crystal technology has major applications in the field of science and engineering as well as in electronic devices.

 Liquid crystal thermometer


 Optical imaging
 The liquid crystal display technique is also applicable in visualization of the radio frequency
waves in the waveguide
 Used in the medical applications

Few LCD Based Displays

 LCD monitor
 LCD display smartphone
 LCD display camera

An LCD has two polarized layers on top of each other. Normally they are both polarized in the same way, so that light gets through both layers just fine. Between them is a layer of liquid crystals, which have the ability to change the direction of the light's polarization when a voltage is applied to them. When the voltage is applied, the polarization shifts so that it is at 90 degrees with respect to the second layer, and no light gets through the layers. This creates an area which looks dark. Different areas are controlled by voltages from whatever circuitry controls the device.

First of all, Liquid crystal displays do not emit light. They only control whether light gets through them or not.

The specifics are quite technical and rely on something called the "polarization" of light.

Now, liquid crystals are actually small thin rod like molecules that like to move in unison when you apply a voltage
across them. This is kind of like a school of fish. There are so many of them and not every one of them has exactly
the same orientation with respect to another but they are all pointed in more or less the same direction. Yet, in the
blink of an eye, they all turn and are moving together in another direction. That is the response of liquid crystal
molecules to applied voltages.

Based on what direction the molecules are pointed compared to the polarization of incoming light and the thickness
of the liquid crystal sample, the incoming light's polarization either gets rotated by 90 degrees passing through the
sample or not at all.

You can take advantage of this polarization rotation with the use of polarizers (explained in the link above). Placing
a polarizer on the output of the sample allows light to be let through ONLY when the polarization of the light
matches the polarization orientation of the polarizer and you have the beginnings of a display!

For example, turning the voltage on, rotates the LC molecules one way and you get light through. Turning it off, and
no light gets through.

Placing the LC molecules into pixel format and putting Red Green and Blue filters above them, you can get color!
Now, some LCDs like your watch don't have what is called a "backlight" and therefore you only get black and a
greyish background. They use the light around you to pass through the liquid crystals. LCD monitors used in
gameboys and computers have backlights. They are necessary for vibrant color incorporation (RGB filters absorb a
lot of light) and high brightness levels.

These LCDs are more sophisticated than the ones in your watch and require a more advanced controlling mechanism
to operate it at the speeds and color levels desired when watching DVDs and playing games. They accomplish this
through the incorporation of what is called an "active matrix". An LCD with an active matrix just has a matrix of
transistors behind the screen controlling each pixel. These transistors are extremely fast and through the use of
addressing and a controller computer, the pixels on your LCD can be efficiently managed such that it meets the
requirements for movies, games and everyday applications such as "Word".

At least, in order for LCDs to be successful, they have to match CRTs in every way. At least the one I am writing
with is a 17" LCD and it has more vibrant colors, better contrast and weighs considerably less (and takes up less
room) than an equivalent 17" CRT (normal) monitor. I must also mention that when they say a CRT monitor is 17",
its actually like 16" viewable area. With an LCD, if it says 17" then you can SEE 17". Now if they were to just bring
down the price tag!

Light is polarized. That is, it has components which oscillate up and down and left and right. There are materials
which can only allow certain polarizations through them. Polarized lenses on sunglasses help to reduce glare by not
allowing the polarizations that come from reflections through but allowing other light through.

Because all light can be broken down into two perpendicular polarizations, two types of polarizing film can be used
to block out all light. That is, if you take two pairs of polarized sunglasses and rotate them so that the lens of one is
over top of the lens of the other and the glasses are at right angles to each other, no light should come through the
combination of the two lenses. The first lens will block out light in one polarization and the second lens will block
out the rest.

An LCD depends on this sort of blocking.

In an LCD, there are two polarizing films arranged in a very similar manner as what I just described, so that no light
can pass through them. A special type of material -- a "liquid crystal" which has a certain structure but can tend to
"unwind" in the presence of heat or electricity is placed in between them. This crystal's structure twists and, as it
twists, can cause light of one polarization to twist with it.

As a consequence, if the two films are placed exactly the right distance away from each other with the liquid crystal
between them, light will pass through the first film, get polarized, and will then twist down the liquid crystal until it
is perpendicular to its original polarization and will pass through the second film. Thus, because of the liquid crystal,
light WILL pass through this arrangement, however it will be polarized on the other end (this is one of the reasons
for the way LCDs look -- the particular quality of the image, especially when looked upon at certain angles).

Now, what allows a computer or some other controller to actually make a display out of this is that those liquid
crystals can actually be manipulated by electricity to "straighten out." By applying an electric current to the liquid
crystal, it will stop twisting the light. As a consequence, light at that point will once again get blocked by the
combination of the two polarized films.

A matrix of these LCD pixels can be built, and each pixel can be turned on (causing a dark spot, a lack of light) or turned off (causing light to pass through) in such a way that allows images to be displayed.

Other arrangements of film, crystal, and film can also be used to cause an inverse effect -- so that when electricity is
not applied, no light can pass through. Similarly, the light that sits behind each pixel can be a different color. By
putting a red pixel, a green pixel, and a blue pixel in close proximity, colors can be formed.

LCD technology is constantly evolving. These are just the basics of what makes it work. Different liquid crystals are
being used to create different LCD materials, and different types of control are being used to create different types
of LCD displays. It can be very complicated, but all of these new technologies depend on a liquid crystal which can
bend and unbend light and polarization films which can block out light.


GAS PLASMA
Be guided by
1) Physical construction
2) Operation
3) Scanning methods
4) Types of displays used (MDA, SVGA, & VGA )
5) Windows, accelerators and local bus

What's the difference between LCD and plasma?

A plasma screen looks similar to an LCD, but works in a completely different way: each pixel is
effectively a microscopic fluorescent lamp glowing with plasma. A plasma is a very hot form of
gas in which the atoms have blown apart to make negatively charged electrons and positively
charged ions (atoms minus their electrons). These move about freely, producing a fuzzy glow of
light whenever they collide. Plasma screens can be made much bigger than ordinary cathode-ray
tube televisions, but they are also much more expensive.

Write brief notes on the following display units in terms of physical construction and
operation of gas plasma

 Flat panel display
 Plasma creates a picture from a gas filled with xenon and neon atoms and millions
of electrically charged atoms and electrons that collide when you turn the power
on.
 When a voltage is applied, the electrodes get charged and cause the ionization of
the gas resulting in plasma. This also includes the collision between the ions and
electrons resulting in the emission of photon light.
 A gas-plasma display works by sandwiching neon gas between two plates.
 One plate is printed with vertical conductive lines and the other plate has horizontal lines; together the two plates form a grid.
 When electric current is passed through a horizontal and vertical line, the gas at
the intersection glows, creating a point of light, or pixel thereby creating an image.

What is plasma?

The central element in a fluorescent light is a plasma, a gas made up of free-flowing ions
(electrically charged atoms) and electrons (negatively charged particles). Under normal
conditions, a gas is mainly made up of uncharged particles. That is, the individual gas atoms
include equal numbers of protons (positively charged particles in the atom's nucleus) and
electrons. The negatively charged electrons perfectly balance the positively charged protons, so
the atom has a net charge of zero.
If you introduce many free electrons into the gas by establishing an electrical voltage across it,
the situation changes very quickly. The free electrons collide with the atoms, knocking loose
other electrons. With a missing electron, an atom loses its balance. It has a net positive charge,
making it an ion.

In a plasma with an electrical current running through it, negatively charged particles are rushing
toward the positively charged area of the plasma, and positively charged particles are rushing
toward the negatively charged area.

In this mad rush, particles are constantly bumping into each other. These collisions excite the gas
atoms in the plasma, causing them to release photons of energy.

Xenon and neon atoms, the atoms used in plasma screens, release light photons when they are
excited. Mostly, these atoms release ultraviolet light photons, which are invisible to the human
eye. But ultraviolet photons can be used to excite visible light photons, as we'll see in the next
section.

Inside a Plasma Display

The xenon and neon gas in a plasma television is contained in hundreds of thousands of tiny cells
positioned between two plates of glass. Long electrodes are also sandwiched between the glass
plates, on both sides of the cells. The address electrodes sit behind the cells, along the rear glass
plate. The transparent display electrodes, which are surrounded by an insulating dielectric
material and covered by a magnesium oxide protective layer, are mounted above the cell,
along the front glass plate.

Both sets of electrodes extend across the entire screen. The display electrodes are arranged in
horizontal rows along the screen and the address electrodes are arranged in vertical columns. As
you can see in the diagram below, the vertical and horizontal electrodes form a basic grid.

To ionize the gas in a particular cell, the plasma display's computer charges the electrodes that
intersect at that cell. It does this thousands of times in a small fraction of a second, charging each
cell in turn.

When the intersecting electrodes are charged (with a voltage difference between them), an
electric current flows through the gas in the cell. As we saw in the last section, the current creates
a rapid flow of charged particles, which stimulates the gas atoms to release ultraviolet photons.

The released ultraviolet photons interact with phosphor material coated on the inside wall of
the cell. Phosphors are substances that give off light when they are exposed to other light. When
an ultraviolet photon hits a phosphor atom in the cell, one of the phosphor's electrons jumps to a
higher energy level and the atom heats up. When the electron falls back to its normal level, it
releases energy in the form of a visible light photon.

The phosphors in a plasma display give off colored light when they are excited. Every pixel is
made up of three separate subpixel cells, each with different colored phosphors. One subpixel
has a red light phosphor, one subpixel has a green light phosphor and one subpixel has a blue
light phosphor. These colors blend together to create the overall color of the pixel.

By varying the pulses of current flowing through the different cells, the control system can
increase or decrease the intensity of each subpixel color to create hundreds of different
combinations of red, green and blue. In this way, the control system can produce colors across
the entire spectrum.
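
As a toy illustration of the idea in the last paragraph (the 0-255 levels and the clamping are assumptions made for the example, not values from the text), the colour of one pixel can be thought of as three pulse-controlled subpixel intensities:

def plasma_pixel_color(red_pulses, green_pulses, blue_pulses, max_level=255):
    # More current pulses per frame in a subpixel cell means a brighter phosphor;
    # the red, green and blue subpixels blend into the colour the eye sees.
    clamp = lambda level: max(0, min(level, max_level))
    return (clamp(red_pulses), clamp(green_pulses), clamp(blue_pulses))

print(plasma_pixel_color(255, 128, 0))    # strong red + some green -> orange pixel
print(plasma_pixel_color(0, 0, 0))        # all subpixels off -> black pixel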

The main advantage of plasma display technology is that you can produce a very wide screen
using extremely thin materials. And because each pixel is lit individually, the image is very
bright and looks good from almost every angle. The image quality isn't quite up to the standards
of the best cathode ray tube sets, but it certainly meets most people's expectations.

The biggest drawback of this technology has been the price. However, falling prices and
advances in technology mean that the plasma display may soon edge out the old CRT sets.


Plasma Displays
Plasma displays are closely related to the simple neon lamp. It has long been known that certain
gas mixtures will, if subjected to a sufficiently strong electric field, break down into a “plasma”
which both conducts an electric current and converts a part of the electrical energy into visible
light. This effect produces the familiar orange glow of the neon lamp or neon sign, and can
readily be used as the basis of a matrix display simply by placing this same gas between the
familiar array of row and column electrodes carried by a glass substrate (Figure 4-12a). A dot of
light can be produced at any desired location in the array simply by placing a sufficiently high
voltage across the appropriate row-column electrode pair. The plasma display panel, or PDP, is
clearly an emissive display type, in that it generates its own light. However, unlike the CRT,
there is no easy means of controlling the intensity of the light produced at each cell or pixel. In
order to produce a range of intensities, or a gray scale, plasma displays generally rely on
temporal modulation techniques, varying the duration of the “on” time of each pixel (generally
across multiple successive frames) in order to provide the appearance of different intensities.
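
One common way of doing this, sketched below under the assumption of a binary-weighted sub-field scheme (the text does not specify the exact scheme used), is to split each frame into sub-periods whose lengths double, and switch the cell on only during the sub-periods that correspond to set bits of the desired intensity:

def subfield_pattern(level, subfields=8):
    # An 8-bit intensity level selects which of the 8 binary-weighted
    # sub-periods of a frame the cell is switched on in; the eye averages
    # the flashes into an apparent gray level.
    assert 0 <= level < 2 ** subfields
    return [bool(level & (1 << bit)) for bit in range(subfields)]

for level in (0, 64, 255):
    pattern = subfield_pattern(level)
    on_time = sum(2 ** bit for bit, on in enumerate(pattern) if on)
    print(level, pattern, "total on-time:", on_time)   # on-time tracks the requested level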

Plasma displays may use either direct current (DC) or alternating current (AC) drive; each
has certain advantages and disadvantages. The DC type has the advantage of simplicity, both in
the basic structure and its drive, but can have certain unique reliability problems owing to the
direct exposure of the electrodes to the plasma. In the AC type, the electrodes may be covered by
an insulating protective layer, and coupled to the plasma itself capacitively. This results in an
interesting side-effect; residual charge in the “capacitor” structure thus formed in a given cell of
the AC display in the “on” state pre-biases that cell toward that state. Even after the power is
removed, then, the panel retains a “memory” in those cells which were on, and the image can
then be restored at the next application of power to the panel.

Color is achieved in plasma panels in the same way as in CRTs; through the use of phosphors
which emit different colors of light when excited. In color plasma panels, the gas mixture is
modified to optimize it for ultraviolet (UV) emission rather than visible light; it is the UV light
that excites the phosphors in this type of display, rather than an electron beam. A typical AC
color-plasma structure is shown in Figure 4-12b; note that in this type of display, barriers are
built on or into the substrate glass, in order to prevent adjacent sub-pixels from exciting each
others’ phosphors. Again, due to the difficulty of directly modulating the light output of each
cell, temporal modulation techniques are used to provide a “gray scale” capability in these
displays.


Figure 4-12 Plasma displays. In the typical monochrome plasma display panel (PDP), light
is produced as in a neon bulb – a glowing plasma appears between the electrodes when a
gas mixture is subjected to a sufficiently high voltage across them. In a color plasma panel,
shown here as an AC type, the gas mixture is optimized for ultraviolet emission, which then
excites color phosphors similar to those used in CRTs.

The fundamental mechanism behind the plasma display panel generally requires much higher
voltages and currents than most other “flat-panel” technologies; the drive circuitry required is
therefore relatively large and robust, and the structures of the display itself are larger than in
other technologies. Owing to these factors, plasma displays have in practice been restricted to
larger sizes – from perhaps 50-125 cm (20-60 inches) diagonal – and relatively low pixel counts.
For this reason, plasma technology has not enjoyed the high unit volumes of other types, such as
the LCD, but has seen significant success in many larger-screen applications such as television
and “presentation” displays. The plasma display, especially in its color form, competes well
against the CRT in those applications where, for reasons of space restrictions or environmental
concerns, the much higher cost can be justified.


With aid of clearly labelled diagrams, describe


different printer types and principles of operations
 Dot Matrix,
 Laser,
 Inkjet

Classification of Printers
Printers are classified based on their technology. Typical printer types are:
1. Dot matrix printer
2. Ink-jet printer
3. Laser printer

Dot Matrix Printer

 A dot matrix printer or impact matrix printer refers to a type of computer printer with a
print head that runs back and forth on the page and prints by impact, striking an ink-
soaked cloth ribbon against the paper, much like a typewriter.
 Dot-matrix technology uses a series or matrix of pins to create printed dots arranged to
form characters on a piece of paper. Because the printing involves mechanical pressure,
these printers can create carbon copies and carbonless copies.
 The print head mechanism pushes each pin into the ribbon, which then strikes the paper.
Many offices and government agencies use them because they can make multiple copies
at lowest cost. Although dot matrix printers have been replaced in most homes and
offices by newer, sexier inkjet and laser printers , they still retain a substantial portion of
the market in their niches.

It is a popular computer printer that prints text and graphics on the paper by using tiny dots to form
the desired shapes. It uses an array of metal pins known as print head to strike an inked printer
ribbon and produce dots on the paper. These combinations of dots form the desired shape on the
paper. Generally they print at a speed of 50 to 500 characters per second, depending on the print quality desired. The quality of print is determined by the number of pins used (varying from
9 to 24).

The key component in the dot matrix printer is the ‘printhead’ which is about one inch long and
contains a number of tiny pins aligned in a column varying from 9 to 24. The printhead is driven
by several hammers which force each pin to make contact with the paper at a certain time. These hammers are driven by small electromagnets (also called solenoids) which are energized at a specific
time depending on the character to be printed. The timings of the signals sent to the solenoids are
programmed in the printer for each character.

The printer receives the data from the computer and translates it to identify which character is to
be printed and the print head runs back and forth, or in an up and down motion, on the page and
prints the dots on the paper.
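
A tiny sketch can make the dot-forming idea concrete. The 5-column, 7-pin pattern for the letter "A" below is invented purely for illustration; real print heads use 9 or 24 pins and manufacturer-defined character tables.

# One string per print-head column, read from the top pin to the bottom pin:
# '1' fires the pin (a dot on the paper), '0' leaves it retracted.
LETTER_A_COLUMNS = ["0111111", "1001000", "1001000", "1001000", "0111111"]

def print_dot_matrix(columns):
    rows = len(columns[0])
    for row in range(rows):                  # paper lines, top to bottom
        line = "".join("*" if col[row] == "1" else " " for col in columns)
        print(line)

print_dot_matrix(LETTER_A_COLUMNS)           # prints a rough letter "A" built from dots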

Advantages:
1. They can print on multi-part stationery or make carbon copies
2. Low printing cost
3. They can withstand harsh environmental conditions.
4. Long life

Disadvantages
1. Noise
2. Low resolution
3. Very limited color performance
4. Low speed

A dot matrix printer works similarly to a typewriter, but is a little more complex. It produces documents by pushing small pins, known as ‘wires’ or ‘rods’, against an ink-soaked ribbon pressed to the surface of the paper, creating the text. The rods are controlled by tiny electromagnets or solenoids. Between the ribbon and the paper there is a plate with holes that guides the pins and separates the rest of the ribbon from the paper. The pins are part of the print head. The printer prints one line of text at a time. There are two different types: serial dot matrix printers and line dot matrix printers. A serial dot matrix printer uses a horizontally moving print head, whereas a line dot matrix printer uses a fixed print head that is almost as wide as the paper. Serial dot matrix printers can produce 50-550 characters per second (cps), while line printers can produce about 1000 cps.

Dot matrix printers were introduced by Digital Equipment Corporation in 1970 and became one of the popular printer types of the time, used in applications such as receipts, ATM machines, data logging, aviation and other point-of-sale terminals. Although the printer is slow and noisy, it remains popular because of its carbon and carbonless copy capability, which allows it to create multiple copies in a single print. After the initial purchase, the printer only needs an inked ribbon, which is fairly cheap and easy to replace. It isn’t a widely used printer today, but it still has a presence in a niche market.

HOW IT WORKS

Dot matrix printers, also known as impact printers, represent the oldest printing technology and are still widespread today thanks to their low cost per page. Dot matrix printers are divided into two main groups: serial dot matrix printers and line dot matrix printers (or simply line printers). In serial dot matrix printers the characters are formed by the print head (or printhead). Such a print head has a number of print wires (pins) arranged in vertical columns and an electromagnetic mechanism able to shoot these wires.
There are two main printhead technologies. In the first, an electromagnetic field shoots the print head's wire. In the second, the so-called permanent magnet printheads, a spring shoots the printhead wire and the magnetic field just holds the spring stressed and ready to shoot. When the electromagnetic field cancels the magnetic field, the spring is released to shoot the wire. Both print head mechanisms are shown in action in the picture below.

Dot matrix printer head mechanisms in action:

The classical printhead mechanism is shown on the left; the permanent magnet print head mechanism is shown on the right.

In general the permanent magnet printheads are faster and are used in heavy-duty printers. Some of
the most popular print heads of this type are: Epson DFX, IBM 4226, Fujitsu 5600 and 6400, and all Oki
print heads.

How do serial dot matrix printers work?

As the print head moves in the horizontal direction, the printhead controller sends electrical signals which force the appropriate wires to strike against the inked ribbon, making dots on the paper and forming the desired characters. The most commonly used print heads have 9 print wires in one column (9-pin printheads) or 24 print wires in two columns (24-pin printheads) for better print quality. Some heavy-duty dot matrix printers also use 18-wire print heads (18-pin printheads), which have 2 columns of 9 wires each.

The distance between the wires in a column gives us the vertical printing resolution. For example, a 9-wire print head with 0.35 mm between adjacent wires gives 25.4/0.35 ≈ 72.6 dots per inch (DPI) of vertical resolution for a line of characters printed in one pass. A 24-wire print head has 2 columns of 12 wires each, with a vertical offset of half a step. So if the distance between adjacent wires is 0.21 mm, one column prints with 25.4/0.21 ≈ 121 dots/inch (DPI) vertical resolution,
but since the second column prints between the dots printed by the first one, the overall vertical resolution is roughly doubled, to about 240 DPI. Please note that the first laser printers released on the market had the same
resolution.
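
The arithmetic in the example above can be checked with a few lines of Python (the 240 DPI figure in the text is the rounded value of roughly 242):

MM_PER_INCH = 25.4

def vertical_dpi(wire_pitch_mm, interleaved_columns=1):
    # Dots per inch for one column of wires; doubled when a second,
    # half-step-offset column prints between the first column's dots.
    return MM_PER_INCH / wire_pitch_mm * interleaved_columns

print(round(vertical_dpi(0.35), 1))                         # 9-pin head  -> about 72.6 DPI
print(round(vertical_dpi(0.21, interleaved_columns=2), 1))  # 24-pin head -> about 241.9 DPI (~240)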

Ink Jet Printers

An inkjet printer is a type of computer printer that creates an image by propelling tiny droplets of ink onto paper. Inkjet printers are the most commonly used type of printer and range from small inexpensive consumer models to very large professional machines that can cost thousands of dollars. Their consumable is called an inkjet cartridge.

Inkjet printers are the most popular printers for homes and small offices, as they combine reasonable cost with good print quality. A typical inkjet printer can print at a resolution of more than 300 dpi, and some good quality inkjet printers are able to produce full-color hard copies at 600 dpi.

An inkjet printer is made of the following parts:

 Printhead – It is the heart of the printer which holds a series a nozzles which sprays the
ink drops over the paper.
 Ink cartridge – It is the part that contains the ink for printing. Generally monochrome
(black & white) printers contain a black ink cartridge and a color printer contains
two cartridges – one with black ink and other with primary colors (cyan, magenta and
yellow).

 Stepper motor – It is housed in the printer to move the printer head and ink cartridges
back and forth across the paper.
 Stabilizer bar – A stabilizer bar is used in the printer to ensure the movement of the print head is precise and controlled over the paper.
 Belt – A belt is used to attach the print head with the stepper motor.
 Paper Tray – It is the place where papers are placed to be printed.
 Rollers – Printers have a set of rollers that help to pull paper from the tray for printing.
 Paper tray stepper motor- another stepper motor is used to rotate the rollers in order to
pull the paper in the printer.
 Control Circuitry – The control circuit takes the input from the computer and by decoding
the input controls all mechanical operation of the printer.

Similar to other printers, inkjet printers have a ‘print head’ as a key element. The print head has
many tiny nozzles, also called jets. When the printer receives the command to print something,
the print head starts spraying ink over the paper to form the characters and images.

There are mainly two technologies that are used to spray the ink by nozzles.
These are:

· Thermal Bubble – This technology is also known as bubble jet is used by various
manufacturers like Canon and Hewlett Packard. When printer receives commands to print
something, the current flows through a set of tiny resistors and they produce heat. This heat in turn
vaporizes the ink to create a bubble. As the bubble expands, some of the ink moves out of the
nozzle and gets deposited over the paper. Then the bubble collapses and due to the vacuum it pulls
more ink from ink cartridge. There are generally 300 to 600 nozzles in a thermal printer head which
can spray the ink simultaneously.

· Piezoelectric – In the piezoelectric technology, a piezo crystal is situated at the end of the ink
reservoir of a nozzle. When printer receives the command to print, an electric charge is applied to
the crystal which in turn starts vibrating and a small amount of ink is pushed out of the nozzle.
When the vibration stops the nozzle pulls some more ink from the cartridge to replace the ink
sprayed out. This technology is patented by Seiko Epson Corporation.

Advantages:
1. Low printer cost
2. Compact size
3. Low noise
4. No warm-up time compared to a laser printer

Disadvantages

1. The ink is often very expensive (a typical OEM cartridge costs RM70 for 16 ml, about RM4,375 per liter)

This point no longer applies to some of the latest inkjet printer models. For example, an HP 46 Black cartridge can print about 1,500 pages at a price of RM 37.

2. The lifetime of inkjet prints is limited. They will eventually fade and the color balance may change.
3. Prints blur easily if a drop of water gets on them.
4. The inkjet nozzles clog easily.
5. The printer must be used every few days so that the print head does not dry up.

The inkjet printer works in a complicated way. It has a series of microscopic nozzles that spray a
stream of ink directly onto the paper. The nozzles either have a high pressure pump or tiny
heating elements behind them that help deposit ink on the paper. There are two main
technologies that are used in an inkjet printer: continuous (CIJ) and Drop-on-demand (DOD). In
continuous technology, a high-pressure pump directs liquid ink from the cartridge through a
gunbody and a microscopic nozzle, creating a continuous stream of ink droplets that are
deposited on the paper. Extra unwanted ink is dropped into a gutter, which is recycled when the
printer is active again. Drop-On-Demand is divided into thermal DOD and piezoelectric DOD.
The thermal DOD uses a heating element to heat the ink in a chamber, which cools when applied
to the paper. The piezoelectric DOD uses a piezoelectric material behind each nozzle instead of a
heating element. In DOD, the printer cartridge fires ink only at the points on the surface that are required to create the image.

Inkjet printers produce cheaper copies but print more slowly than laser printers. They are also small and compact, with the printer fitting right on the desk or workstation. The initial cost of these printers is quite low, but an Officejet is usually more expensive than the basic model as it incorporates other features. Inkjet printers are valued for their ability to print good quality, sharp text and images and to print on almost any kind of paper. Advantages of an inkjet include quieter operation, high print quality, no warm-up time and low cost per page. Disadvantages are that the ink is expensive (cheaper alternatives are available from third party cartridges), the ink is not waterproof, the nozzles are prone to clogging and the ink dries up if not used for long periods of time.


How Inkjet Printers Work

No matter where you are reading this article from, you most likely have a printer nearby. And
there's a very good chance that it is an inkjet printer. Since their introduction in the latter half of
the 1980s, inkjet printers have grown in popularity and performance while dropping significantly
in price.


An inexpensive color inkjet printer made by Hewlett Packard

An inkjet printer is any printer that places extremely small droplets of ink onto paper to create
an image. If you ever look at a piece of paper that has come out of an inkjet printer, you know
that:

 The dots are extremely small (usually between 50 and 60 microns in diameter), so small that
they are tinier than the diameter of a human hair (70 microns)!
 The dots are positioned very precisely, with resolutions of up to 1440x720 dots per inch (dpi).
 The dots can have different colors combined together to create photo-quality images.
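
A quick calculation from the figures above shows how tightly those dots are packed; the comparison against the 50-60 micron drop size is simple arithmetic based on the bullets above, not a quote from the text:

MICRONS_PER_INCH = 25400

def dot_pitch_microns(dpi):
    # Centre-to-centre spacing of printed dots at a given resolution.
    return MICRONS_PER_INCH / dpi

for dpi in (720, 1440):
    print(dpi, "dpi ->", round(dot_pitch_microns(dpi), 1), "micron dot pitch")
# At 1440 dpi the pitch (about 17.6 microns) is smaller than a 50-60 micron
# drop, so neighbouring dots overlap on the page.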

How inkjet nozzles work

How does the ink get onto the page? It's a slightly different process in a bubble jet and an
inkjet...

Bubble jets

In Canon Bubble Jet printers, it goes a bit like this:


1. Under instructions from your computer, an electronic circuit in the printer figures out which
nozzles have to be fired to print a particular character at a certain point on the page. Hundreds
of nozzles are involved in making a single character and each one is only about a tenth as thick
as a human hair!
2. The circuit activates each of the nozzles by passing an electric current through a small resistor
inside it.
3. When electricity flows through the resistor, it heats up.
4. Heat from the resistor boils the ink inside the nozzle immediately next to it.
5. As the ink boils, it forms into a bubble of ink vapor. The bubble expands enormously and bursts.
6. When the bubble pops, it squirts the ink it contained onto the page in a precisely formed dot.
7. The collapsing bubble creates a partial vacuum in the nozzle that draws in more ink from the ink
tank, ready for printing the next dot.
8. Meanwhile the entire print head (light orange) is moving to the side ready to print the next
character.

Ink jets

In a piezoelectric inkjet, it's slightly different:


1. An ink tank (black) supplies the ink dispenser (green) through a narrow tube by capillary action.
2. A droplet of ink from the tank sits waiting at the very end of the tube.
3. When the printer circuit (not shown) wants to fire an ink droplet, it energizes two electrical
contacts (red) attached to the piezoelectric crystal.
4. The energized piezoelectric crystal (dark red) flexes outward (toward the right in this picture).
5. It squashes against a membrane (dark blue), pushing that toward the right as well.
6. The membrane pushes against a hole in the ink dispenser (green), increasing the pressure there.
7. The pressure forces the waiting ink droplet from the tube toward the paper.

Inside an Inkjet Printer

Parts of a typical inkjet printer include:

 Print head assembly


 Print head - The core of an inkjet printer, the print head contains a series of nozzles that
are used to spray drops of ink.


The print head assembly

 Ink cartridges - Depending on the manufacturer and model of the printer, ink cartridges
come in various combinations, such as separate black and color cartridges, color and
black in a single cartridge or even a cartridge for each ink color. The cartridges of some
inkjet printers include the print head itself.
 Print head stepper motor - A stepper motor moves the print head assembly (print head
and ink cartridges) back and forth across the paper. Some printers have another stepper
motor to park the print head assembly when the printer is not in use. Parking means
that the print head assembly is restricted from accidentally moving, like a parking brake
on a car.


Stepper motors like this one control the movement of most


parts of an inkjet printer.

 Belt - A belt is used to attach the print head assembly to the stepper motor.
 Stabilizer bar - The print head assembly uses a stabilizer bar to ensure that movement is
precise and controlled.


Here you can see the stabilizer bar and belt.

 Paper feed assembly


 Paper tray/feeder - Most inkjet printers have a tray that you load the paper into. Some
printers dispense with the standard tray for a feeder instead. The feeder typically snaps
open at an angle on the back of the printer, allowing you to place paper in it. Feeders
generally do not hold as much paper as a traditional paper tray.
 Rollers - A set of rollers pull the paper in from the tray or feeder and advance the paper
when the print head assembly is ready for another pass.


The rollers move the paper through the printer.

 Paper feed stepper motor - This stepper motor powers the rollers to move the paper in
the exact increment needed to ensure a continuous image is printed.
 Power supply - While earlier printers often had an external transformer, most printers sold
today use a standard power supply that is incorporated into the printer itself.
 Control circuitry - A small but sophisticated amount of circuitry is built into the printer to
control all the mechanical aspects of operation, as well as decode the information sent to the
printer from the computer.

The mechanical operation of the printer is controlled by a


small circuit board containing a microprocessor and memory.


 Interface port(s) - The parallel port is still used by many printers, but most newer printers use
the USB port. A few printers connect using a serial port or small computer system interface
(SCSI) port.

While USB is taking over, many printers still use a parallel port.

Heat vs. Vibration

Different types of inkjet printers form their droplets of ink in different ways. There are two main
inkjet technologies currently used by printer manufacturers:

View of the nozzles on a thermal bubble inkjet print head

 Thermal bubble - Used by manufacturers such as Canon and Hewlett Packard, this method is
commonly referred to as bubble jet. In a thermal inkjet printer, tiny resistors create heat, and
this heat vaporizes ink to create a bubble. As the bubble expands, some of the ink is pushed out
of a nozzle onto the paper. When the bubble "pops" (collapses), a vacuum is created. This pulls
more ink into the print head from the cartridge. A typical bubble jet print head has 300 or 600
tiny nozzles, and all of them can fire a droplet simultaneously.
 Piezoelectric - Patented by Epson, this technology uses piezo crystals. A crystal is located at the
back of the ink reservoir of each nozzle. The crystal receives a tiny electric charge that causes it
to vibrate. When the crystal vibrates inward, it forces a tiny amount of ink out of the nozzle.
When it vibrates out, it pulls some more ink into the reservoir to replace the ink sprayed out.

Let's walk through the printing process to see just what happens.

Click "OK" to Print


When you click on a button to print, there is a sequence of events that take place:

1. The software application you are using sends the data to be printed to the printer driver.
2. The driver translates the data into a format that the printer can understand and checks to see
that the printer is online and available to print.
3. The data is sent by the driver from the computer to the printer via the connection interface
(parallel, USB, etc.).
4. The printer receives the data from the computer. It stores a certain amount of data in a buffer.
The buffer can range from 512 KB random access memory (RAM) to 16 MB RAM, depending on
the model. Buffers are useful because they allow the computer to finish with the printing
process quickly, instead of having to wait for the actual page to print. A large buffer can hold a
complex document or several basic documents.
5. If the printer has been idle for a period of time, it will normally go through a short clean cycle to
make sure that the print head(s) are clean. Once the clean cycle is complete, the printer is ready
to begin printing.
6. The control circuitry activates the paper feed stepper motor. This engages the rollers, which
feed a sheet of paper from the paper tray/feeder into the printer. A small trigger mechanism in
the tray/feeder is depressed when there is paper in the tray or feeder. If the trigger is not
depressed, the printer lights up the "Out of Paper" LED and sends an alert to the computer.
7. Once the paper is fed into the printer and positioned at the start of the page, the print head
stepper motor uses the belt to move the print head assembly across the page. The motor
pauses for the merest fraction of a second each time that the print head sprays dots of ink on
the page and then moves a tiny bit before stopping again. This stepping happens so fast that it
seems like a continuous motion.
8. Multiple dots are made at each stop. It sprays the CMYK colors in precise amounts to make any
other color imaginable.
9. At the end of each complete pass, the paper feed stepper motor advances the paper a fraction
of an inch. Depending on the inkjet model, the print head is reset to the beginning side of the
page, or, in most cases, simply reverses direction and begins to move back across the page as it
prints.
10. This process continues until the page is printed. The time it takes to print a page can vary widely
from printer to printer. It will also vary based on the complexity of the page and size of any
images on the page. For example, a printer may be able to print 16 pages per minute (PPM) of
black text but take a couple of minutes to print one, full-color, page-sized image.

11. Once the printing is complete, the print head is parked. The paper feed stepper motor spins the
rollers to finish pushing the completed page into the output tray. Most printers today use inks
that are very fast-drying, so that you can immediately pick up the sheet without smudging it.
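
Steps 6 to 9 above amount to a simple nested loop. The sketch below is a deliberately simplified model of that loop; the function names (fire_nozzles, step_head, advance_paper) are invented for illustration, and real printer firmware is far more involved.

def print_page(rows_of_dots, fire_nozzles, step_head, advance_paper):
    # For every pass of the print head: stop at each column position,
    # fire the nozzles for that spot, step sideways a tiny amount,
    # then advance the paper a fraction of an inch for the next pass.
    for row in rows_of_dots:
        for column_dots in row:
            fire_nozzles(column_dots)
            step_head()
        advance_paper()

print_page([[{"K"}, set()], [set(), {"C", "Y"}]],
           fire_nozzles=lambda dots: print("fire", dots or "nothing"),
           step_head=lambda: None,
           advance_paper=lambda: print("advance paper"))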

Paper and Ink


Inkjet printers are fairly inexpensive. They cost less than a typical black-and-white laser printer,
and much less than a color laser printer. In fact, quite a few of the manufacturers sell some of
their printers at a loss. Quite often, you can find the printer on sale for less than you would pay
for a set of the ink cartridges!

Why would they do this? Because they count on the supplies you purchase to provide their profit. This is very similar to the way the video game business works. The hardware is sold at or below cost. Once you buy a particular brand of hardware, then you must buy the other products that work with that hardware. In other words, you can't buy a printer from Manufacturer A and ink cartridges from Manufacturer B. They will not work together.

This printer sells for less than $100.

Another way that they have reduced costs is by incorporating much of the actual print head into the cartridge itself. The manufacturers believe that since the print head is the part of the printer that is most likely to wear out, replacing it every time you replace the cartridge increases the life of the printer.

The paper you use on an inkjet printer greatly determines the quality of the image. Standard copier paper works, but doesn't provide as crisp and bright an image as paper made for an inkjet printer. There are two main factors that affect image quality:

 Brightness
 Absorption

A typical color ink cartridge: this cartridge has cyan, magenta and yellow inks in separate reservoirs.

The brightness of a paper is normally determined by how rough the surface of the paper is. A
coarse or rough paper will scatter light in several directions, whereas a smooth paper will reflect
more of the light back in the same direction. This makes the paper appear brighter, which in turn
makes any image on the paper appear brighter. You can see this yourself by comparing a photo
in a newspaper with a photo in a magazine. The smooth paper of the magazine page reflects light
back to your eye much better than the rough texture of the newspaper. Any paper that is listed as
being bright is generally a smoother-than-normal paper.

The other key factor in image quality is absorption. When the ink is sprayed onto the paper, it
should stay in a tight, symmetrical dot. The ink should not be absorbed too much into the paper.
If that happens, the dot will begin to feather. This means that it will spread out in an irregular
fashion to cover a slightly larger area than the printer expects it to. The result is a page that
looks somewhat fuzzy, particularly at the edges of objects and text.

Imagine that the dot on the left is on coated paper and the dot
on the right is on low-grade copier paper. Notice how
irregular and larger the right dot is compared to the left one.

As stated, feathering is caused by the paper absorbing the ink. To combat this, high-quality inkjet
paper is coated with a waxy film that keeps the ink on the surface of the paper. Coated paper
normally yields a dramatically better print than other paper. The low absorption of coated paper
is key to the high resolution capabilities of many of today's inkjet printers. For example, a typical
Epson inkjet printer can print at a resolution of up to 720x720 dpi on standard paper. With coated
paper, the resolution increases to 1440x720 dpi. The reason is that the printer can actually shift
the paper slightly and add a second row of dots for every normal row, knowing that the image
will not feather and cause the dots to blur together.

Inkjet printers are capable of printing on a variety of media. Commercial inkjet printers
sometimes spray directly on an item like the label on a beer bottle. For consumer use, there are a
number of specialty papers, ranging from adhesive-backed labels or stickers to business cards
and brochures. You can even get iron-on transfers that allow you to create an image and put it on
a T-shirt! One thing is for certain, inkjet printers definitely provide an easy and affordable way to
unleash your creativity.


LASER PRINTERS:

Laser printing is the most advanced printing technology. In laser printing, a computer sends data to the printer, and the printer translates this data into printable image data. This kind of printer uses the xerographic principle: a laser beam discharges a photosensitive drum, a latent image is created on the drum, and during the development process toner is attracted to the drum surface and then transferred to the paper. Its consumable is called a toner cartridge or laser toner.

Laser printers are popular printers mainly used for large-scale, high-quality printing. They are among the fastest printers available on the market. A laser printer uses a slightly different approach to printing: it does not use ink like inkjet printers; instead it uses a very fine powder known as ‘toner’. The components of a laser printer are shown in the following image:


 The CONTROL CIRCUITRY is the part of the printer that talks with the computer and
receives the printing data.
 A Raster Image Processor (RIP) converts the text and images into a virtual matrix of
dots.

 The PHOTOCONDUCTING DRUM which is the key component of the laser printer
has a special coating which receives the positive and negative charge from a CHARGING
ROLLER.
 A rapidly switching LASER BEAM scans the charged drum line by line.
 When the beam flashes on, it reverses the charge of tiny spots on the drum, corresponding to the dots that are to be printed black.
 As soon the laser scans a line, a stepper motor moves the drum in order to scan the next
line by the laser.
 A DEVELOPER ROLLER plays the vital role of applying the toner. It is coated with charged toner particles. As the drum touches the developer roller, the charged toner particles cling to the discharged areas of the drum, reproducing your images and text in reverse.
 Meanwhile, paper is drawn from the PAPER TRAY with the help of a belt. As the paper passes over a CHARGING WIRE, it receives a charge opposite to the toner’s charge.
 When the paper meets the DRUM, the opposite charge between the paper and the toner particles causes the toner to be transferred to the paper.
 A CLEANING BLADE then cleans the drum, and the whole process runs smoothly and continuously.
 Finally, the paper passes through the FUSER, a heated pressure roller that melts the toner and fixes it onto the paper.

Advantages:
1. Low cost per page compared to inkjet printers
2. Low noise
3. High speed
4. High printing quality

Disadvantages

1. Laser printers are more expensive, but are becoming more affordable each year. (A mono laser printer can now be bought for around RM 2xx, and a color laser printer for below RM 600.)
2. They are generally larger in size.


A laser printer is a type of printer that produces high quality text and graphics by scanning a laser beam across a photosensitive drum and transferring the resulting image to plain paper. This is the xerographic printing process, which uses a cylindrical drum coated with selenium to form the image. Laser printers are fairly large and bulky and require fast paper feeders. They are most often found in offices and commercial places that need high quality pages printed fast. Laser printers are expensive units and require a dedicated place; their maintenance requirements are high and maintenance is also expensive. Laser printers are more common in black and white, while color printers cost extra. The laser printer was developed at Xerox in 1969 by Gary Starkweather, who modified an existing xerographic copier and fitted it with printing capabilities.

The printer works similar to a photocopier. The machine uses the data sent from the computer to
create a raster line or scan line. The Raster Image Processing (RIS), typically built into the
printer, creates a bitmap of the final image in the raster memory. After this, an electrostatic
charge is applied to the photosensitive drum. The system also applies AC bias to the roller to
remove any previous charges and a DC bias on the drum surface to ensure a uniform negative
potential. A laser is then fired on the electro statically charged light-sensitive drum. The drum
then attracts the toner in the places the charge is still present. The drum then becomes hot and
fuses the toner to the paper, which then produces the image. Laser printers are quite extensive
cleaning incase of jams and requires maintenance. However, one toner can produce up to 5,000
pages before replacement.

Principle operation of a laser printer

1. Cleaning
 When an image has been deposited on the paper and the drum has separated from the
paper, any remaining toner must be removed from the drum
 A cleaning blade is used to scrape off all the excess toner

2. Conditioning
 Involves erasing the previous latent image and applying a uniform charge to condition the
drum for a new latent image
3. Writing
 Involves scanning the photo sensitive drum with the laser beam
4. Developing
 The toner is applied to the latent image on the drum

5. Transferring
 The toner attached to the latent image is transferred to the paper
6. Fusing
 Toner is permanently fused on the paper. The printing paper is rolled between a
heated roller and a pressure roller, the loose toner is melted and fused with the fibers
in the paper
 The paper is then moved to the output tray as printed page


How Laser Printers Work

The term inkjet printer is very descriptive of the process at work -- these printers put an image on
paper using tiny jets of ink. The term laser printer, on the other hand, is a bit more mysterious -- how
can a laser beam, a highly focused beam of light, write letters and draw pictures on paper?

In this article, we'll unravel the mystery behind the laser printer,
tracing a page's path from the characters on your computer
screen to printed letters on paper. As it turns out, the laser
printing process is based on some very basic scientific
principles applied in an exceptionally innovative way.
Hewlett Packard LaserJet 4050T

The Basics: Static Electricity

The primary principle at work in a laser printer is static electricity, the same energy that makes
clothes in the dryer stick together or a lightning bolt travel from a thundercloud to the ground.
Static electricity is simply an electrical charge built up on an insulated object, such as a balloon
or your body. Since oppositely charged atoms are attracted to each other, objects with opposite
static electricity fields cling together.


The path of a piece of paper through a laser printer

A laser printer uses this phenomenon as a sort of "temporary glue." The core component of this
system is the photoreceptor, typically a revolving drum or cylinder. This drum assembly is
made out of highly photoconductive material that is discharged by light photons.

The basic components of a laser printer


The Basics: Drum

Initially, the drum is given a total positive charge by the charge corona wire, a wire with an
electrical current running through it. (Some printers use a charged roller instead of a corona
wire, but the principle is the same.) As the drum revolves, the printer shines a tiny laser beam
across the surface to discharge certain points. In this way, the laser "draws" the letters and
images to be printed as a pattern of electrical charges -- an electrostatic image. The system can
also work with the charges reversed -- that is, a positive electrostatic image on a negative
background.

The laser "writes" on a photoconductive revolving drum.

After the pattern is set, the printer coats the drum with positively charged toner -- a fine, black
powder. Since it has a positive charge, the toner clings to the negative discharged areas of the
drum, but not to the positively charged "background." This is something like writing on a soda
can with glue and then rolling it over some flour: The flour only sticks to the glue-coated part of
the can, so you end up with a message written in powder.

With the powder pattern affixed, the drum rolls over a sheet of paper, which is moving along a
belt below. Before the paper rolls under the drum, it is given a negative charge by the transfer
corona wire (charged roller). This charge is stronger than the negative charge of the electrostatic
image, so the paper can pull the toner powder away. Since it is moving at the same speed as the
drum, the paper picks up the image pattern exactly. To keep the paper from clinging to the drum,
it is discharged by the detac corona wire immediately after picking up the toner.


The basic components of a laser printer

The Basics: Fuser

Finally, the printer passes the paper through the fuser, a pair of heated rollers. As the paper
passes through these rollers, the loose toner powder melts, fusing with the fibers in the paper.
The fuser rolls the paper to the output tray, and you have your finished page. The fuser also heats
up the paper itself, of course, which is why pages are always hot when they come out of a laser
printer or photocopier.


So what keeps the paper from burning up? Mainly, speed -- the paper passes through the rollers
so quickly that it doesn't get very hot.

After depositing toner on the paper, the drum surface passes the discharge lamp. This bright
light exposes the entire photoreceptor surface, erasing the electrical image. The drum surface
then passes the charge corona wire, which reapplies the positive charge.


The basic components of a laser printer

Conceptually, this is all there is to it. Of course, actually bringing everything together is a lot
more complex. In the following sections, we'll examine the different components in greater detail
to see how they produce text and images so quickly and precisely.

The Controller: The Conversation

Before a laser printer can do anything else, it needs to receive the page data and figure out how
it's going to put everything on the paper. This is the job of the printer controller.

The printer controller is the laser printer's main onboard computer. It talks to the host computer
(for example, your PC) through a communications port, such as a parallel port or USB port. At
the start of the printing job, the laser printer establishes with the host computer how they will
exchange data. The controller may have to start and stop the host computer periodically to
process the information it has received.


A typical laser printer has a few different types of communications ports.

In an office, a laser printer will probably be connected to several separate host computers, so
multiple users can print documents from their machine. The controller handles each one
separately, but may be carrying on many "conversations" concurrently. This ability to handle
several jobs at once is one of the reasons why laser printers are so popular.

The Controller: The Language

For the printer controller and the host computer to communicate, they need to speak the same
page description language. In earlier printers, the computer sent a special sort of text file and a
simple code giving the printer some basic formatting information. Since these early printers had
only a few fonts, this was a very straightforward process.

These days, you might have hundreds of different fonts to choose from, and you wouldn't think
twice about printing a complex graphic. To handle all of this diverse information, the printer
needs to speak a more advanced language.

The primary printer languages these days are Hewlett Packard's Printer Command Language
(PCL) and Adobe's Postscript. Both of these languages describe the page in vector form -- that
is, as mathematical values of geometric shapes, rather than as a series of dots (a bitmap image).
The printer itself takes the vector images and converts them into a bitmap page. With this
system, the printer can receive elaborate, complex pages, featuring any sort of font or image.
Also, since the printer creates the bitmap image itself, it can use its maximum printer resolution.

Some printers use a graphical device interface (GDI) format instead of a standard PCL. In this
system, the host computer creates the dot array itself, so the controller doesn't have to process
anything -- it just sends the dot instructions on to the laser.

But in most laser printers, the controller must organize all of the data it receives from the host
computer. This includes all of the commands that tell the printer what to do -- what paper to use,
how to format the page, how to handle the font, etc. For the controller to work with this data, it
has to get it in the right order.

The Controller: Setting up the Page

Once the data is structured, the controller begins putting the page together. It sets the text
margins, arranges the words and places any graphics. When the page is arranged, the raster
image processor (RIP) takes the page data, either as a whole or piece by piece, and breaks it
down into an array of tiny dots. As we'll see in the next section, the printer needs the page in this
form so the laser can write it out on the photoreceptor drum.
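To make the idea of "breaking the page down into an array of tiny dots" concrete, here is a minimal
sketch in Python. It is not a real RIP; the toy page size, the single line-drawing primitive and the
character output are assumptions made purely for illustration.

def rasterize_line(page, x0, y0, x1, y1):
    """Mark the dots along a straight segment as 'print' (1)."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        page[y][x] = 1                      # 1 = laser pulse (dot), 0 = no pulse

WIDTH, HEIGHT = 40, 10                      # a toy 40 x 10 dot "page"
page = [[0] * WIDTH for _ in range(HEIGHT)]

rasterize_line(page, 2, 2, 37, 2)           # a horizontal rule
rasterize_line(page, 2, 2, 2, 8)            # a vertical rule

# The laser assembly would then receive this bitmap one horizontal row at a time.
for row in page:
    print("".join("#" if dot else "." for dot in row))

Each row of the printed grid corresponds to one horizontal scan of the laser across the drum, with a
pulse for every 1 and no pulse for every 0.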

In most laser printers, the controller saves all print-job data in its own memory. This lets the
controller put different printing jobs into a queue so it can work through them one at a time. It
also saves time when printing multiple copies of a document, since the host computer only has to
send the data once.

The Laser Assembly

Since it actually draws the page, the printer's laser system -- or laser scanning assembly -- must
be incredibly precise. The traditional laser scanning assembly includes:

 A laser
 A movable mirror
 A lens

The laser receives the page data -- the tiny dots that make up the text and images -- one
horizontal line at a time. As the beam moves across the drum, the laser emits a pulse of light for
every dot to be printed, and no pulse for every dot of empty space.

The laser doesn't actually move the beam itself. It bounces the beam off a movable mirror
instead. As the mirror moves, it shines the beam through a series of lenses. This system
compensates for the image distortion caused by the varying distance between the mirror and
points along the drum.

Writing the Page

The laser assembly moves in only one plane, horizontally. After each horizontal scan, the printer
moves the photoreceptor drum up a notch so the laser assembly can draw the next line. A small
print-engine computer synchronizes all of this perfectly, even at dizzying speeds.

Some laser printers use a strip of light emitting diodes (LEDs) to write the page image, instead
of a single laser. Each dot position has its own dedicated light, which means the printer has one
set print resolution. These systems cost less to manufacture than true laser assemblies, but they
produce inferior results. Typically, you'll only find them in less expensive printers.

Photocopiers

Laser printers work the same basic way as photocopiers, with a few significant differences. The most
obvious difference is the source of the image: A photocopier scans an image by reflecting a bright light
off of it, while a laser printer receives the image in digital form.

Another major difference is how the electrostatic image is created. When a photocopier bounces light off
a piece of paper, the light reflects back onto the photoreceptor from the white areas but is absorbed by
the dark areas. In this process, the "background" is discharged, while the electrostatic image retains a
positive charge. This method is called "write-white."

In most laser printers, the process is reversed: The laser discharges the lines of the electrostatic
image and leaves the background positively charged. In a printer, this "write-black" system is easier to
implement than a "write-white" system, and it generally produces better results.

Toner Basics

One of the most distinctive things about a laser printer (or photocopier) is the toner. It's such a
strange concept for the paper to grab the "ink" rather than the printer applying it. And it's even
stranger that the "ink" isn't really ink at all.

So what is toner? The short answer is: It's an electrically-charged powder with two main
ingredients: pigment and plastic.

The role of the pigment is fairly obvious -- it provides the coloring (black, in a monochrome
printer) that fills in the text and images. This pigment is blended into plastic particles, so the
toner will melt when it passes through the heat of the fuser. This quality gives toner a number of
advantages over liquid ink. Chiefly, it firmly binds to the fibers in almost any type of paper,
which means the text won't smudge or bleed easily.

Photo courtesy Xerox: A developer bead coated with small toner particles

Applying Toner

So how does the printer apply this toner to the electrostatic image on the drum? The powder is
stored in the toner hopper, a small container built into a removable casing. The printer gathers
the toner from the hopper with the developer unit. The "developer" is actually a collection of
small, negatively charged magnetic beads. These beads are attached to a rotating metal roller,
which moves them through the toner in the toner hopper.

Because they are negatively charged, the developer beads collect the positive toner particles as
they pass through. The roller then brushes the beads past the drum assembly. The electrostatic
image has a stronger negative charge than the developer beads, so the drum pulls the toner
particles away.


In a lot of printers, the toner hopper, developer and drum assembly are combined in one replaceable
cartridge.

The drum then moves over the paper, which has an even stronger charge and so grabs the toner.
After collecting the toner, the paper is immediately discharged by the detac corona wire. At this
point, the only thing keeping the toner on the page is gravity -- if you were to blow on the page,
you would completely lose the image. The page must pass through the fuser to affix the toner.
The fuser rollers are heated by internal quartz tube lamps, so the plastic in the toner melts as it
passes through.

But what keeps the toner from collecting on the fuser rolls, rather than sticking to the page? To
keep this from happening, the fuser rolls must be coated with Teflon, the same non-stick
material that keeps your breakfast from sticking to the bottom of the frying pan.

Color Printers

Initially, most commercial laser printers were limited to monochrome printing (black writing on
white paper). But now, there are lots of color laser printers on the market.

Essentially, color printers work the same way as monochrome printers, except they go through
the entire printing process four times -- one pass each for cyan (blue), magenta (red), yellow and
black. By combining these four colors of toner in varying proportions, you can generate the full
spectrum of color.


Inside a color laser printer

There are several different ways of doing this. Some models have four toner and developer units
on a rotating wheel. The printer lays down the electrostatic image for one color and puts that
toner unit into position. It then applies this color to the paper and goes through the process again
for the next color. Some printers add all four colors to a plate before placing the image on paper.

Some more expensive printers actually have a complete printer unit -- a laser assembly, a drum
and a toner system -- for each color. The paper simply moves past the different drum heads,
collecting all the colors in a sort of assembly line.

Advantages of a Laser

So why get a laser printer rather than a cheaper inkjet printer? The main advantages of laser
printers are speed, precision and economy. A laser can move very quickly, so it can "write" with
much greater speed than an ink jet. And because the laser beam has an unvarying diameter, it can
draw more precisely, without spilling any excess ink.

Laser printers tend to be more expensive than inkjet printers, but it doesn't cost as much to keep
them running -- toner powder is cheap and lasts a long time, while you can use up expensive ink
cartridges very quickly. This is why offices typically use a laser printer as their "work horse,"
their machine for printing long text documents. In most models, this mechanical efficiency is
complemented by advanced processing efficiency. A typical laser-printer controller can serve
everybody in a small office.
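As a rough illustration of the running-cost argument, the short sketch below compares cost per page. The
prices and page yields are made-up assumptions (only the ~5,000-page toner yield mentioned earlier is
echoed from the text), so treat the numbers as an example of the calculation, not as real market data.

def cost_per_page(cartridge_price, page_yield):
    """Consumable cost divided by the number of pages it prints."""
    return cartridge_price / page_yield

laser_toner = cost_per_page(cartridge_price=80.0, page_yield=5000)   # hypothetical toner cartridge
inkjet_ink  = cost_per_page(cartridge_price=30.0, page_yield=400)    # hypothetical ink cartridge

print(f"Laser toner: ~${laser_toner:.3f} per page")
print(f"Inkjet ink:  ~${inkjet_ink:.3f} per page")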

When they were first introduced, laser printers were too expensive to use as a personal printer.
Since that time, however, laser printers have gotten much more affordable. Now you can pick up
a basic model for just a little bit more than a nice inkjet printer.

As technology advances, laser-printer prices should continue to drop, while performance


improves. We'll also see a number of innovative design variations, and possibly brand-new
applications of electrostatic printing. Many inventors believe we've only scratched the surface of
what we can do with simple static electricity!


COMPUTER POWER SUPPLIES AND SYSTEM PROTECTION

What is a power supply and what does it do?

 The power supply unit (PSU) in a PC regulates and delivers the power to the components in the
case.
 Standard power supplies turn the incoming 110V or 220V AC (Alternating Current) into various
DC (Direct Current) voltages suitable for powering the computer's components.
 Power supplies are quoted as having a certain power output specified in Watts, a standard power
supply would typically be able to deliver around 350 Watts.
 The more components (hard drives, CD/DVD drives, tape drives, ventilation fans, etc.) you have
in your PC, the greater the power required from the power supply (a rough sizing sketch follows this list).
 Using a PSU that delivers more power than required means it won't be running at full capacity,
which can prolong its life by reducing heat damage to the PSU's internal components during long
periods of use.
 Always replace a power supply with an equivalent or superior power output (Wattage).
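The sizing sketch below illustrates the rule of thumb described in the list above: add up the expected
draw of the components and leave headroom so the PSU never runs at full capacity. The per-component
wattages and the 30% headroom factor are illustrative assumptions, not manufacturer figures.

TYPICAL_DRAW_W = {                  # assumed, rounded figures for illustration only
    "motherboard_cpu_ram": 150,
    "hard_drive": 10,
    "dvd_drive": 25,
    "case_fan": 5,
    "graphics_card": 75,
}

def recommend_psu_watts(components, headroom=1.3):
    """Sum the expected draw and add ~30% headroom."""
    total = sum(TYPICAL_DRAW_W[name] * qty for name, qty in components.items())
    return total, round(total * headroom)

build = {"motherboard_cpu_ram": 1, "hard_drive": 2, "dvd_drive": 1,
         "case_fan": 2, "graphics_card": 1}
draw, recommended = recommend_psu_watts(build)
print(f"Estimated draw: {draw} W, recommended PSU rating: about {recommended} W")

For this example build, the estimate comes out at roughly 280 W of draw, suggesting a PSU rated around
350-400 W rather than one that would run at full capacity.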


There are 3 types of power supply form factors in common use:

 AT Power Supply - used in very old PCs.


 ATX Power Supply - still used in some PCs.
 ATX-2 Power Supply - commonly in use today.

The voltages produced by AT/ATX/ATX-2 power supplies are:

 +3.3 Volts DC (ATX/ATX-2)


 +5 Volts DC (AT/ATX/ATX-2)
 -5 Volts DC (AT/ATX/ATX-2)
 +5 Volts DC Standby (ATX/ATX-2)
 +12 Volts DC (AT/ATX/ATX-2)
 -12 Volts DC (AT/ATX/ATX-2)

What is the Difference between AT and ATX power supply.
Main Power Connector

 The main power connector on AT and ATX power supplies are very different, and require
different motherboards because of this. The main power connector on an AT power supply is
actually two separate six-pin connectors that plug into the motherboard side by side in a single
row. The ATX main power connector is a single 20 or 24-pin connector that places the pins on
two rows.

Power Switch

 The power switch of AT style power supplies is integrated directly into the power supply itself.
This is a physical switch that turns the power supply on and off. ATX style power supplies use a
"soft switch" that is controlled by the motherboard. This enables a computer with an ATX power
supply to power off via software.

Wattage

 Older power supplies provide a lower wattage rating than newer ones. Newer ATX style power
supplies typically provide 300 or more watts, whereas AT style power supplies typically provide
wattage of less than 250.

Connectors

 Though AT and ATX power supplies share many connectors, ATX power supplies may have
connectors, such as SATA and 4-pin ATX12V, that never appeared on AT power supplies due to
the technology post-dating the AT power supply. Additionally, an AT power supply has more
mini-Molex connectors for devices such as floppy drives.

 AT is an old standard that has been totally replaced by ATX


 AT boards are wider compared to ATX by almost 4 inches
 ATX allows board makers to customize the ports in the back with back plates which is not
possible with AT
 AT computers had their power switches connected directly to the power supply while in ATX
systems, the switch is connected to the motherboard


Common PSU Connectors


Type                  Description

P1                    A 20-pin or 24-pin connector that provides power to the motherboard. On some PSUs,
                      the P1 is split into one 20-pin connector and one 4-pin connector which can be
                      combined if required to form a 24-pin connector.

ATX12V (or P4)        A 4-pin power connector that goes to the motherboard in addition to a 20-pin P1 to
                      supply power to the processor.

Molex                 A 4-pin peripheral power connector that supplies power to IDE disk drives and
                      CD-ROM/DVD drives.

Berg (or Mini-Molex)  A 4-pin power connector that supplies power to the floppy disk drive (it can also be
                      used as an auxiliary connector for AGP video cards).

Serial ATA            A 15-pin power connector mainly used for SATA hard drives.

PCI Express           A 6-pin or (more recently) 8-pin power connector used for PCI Express graphics cards.
                      Some 8-pin connections allow for either a 6-pin or an 8-pin card to be connected by
                      using two separate connectors on the same cable (one with 6 pins and another with 2
                      pins).

P1 (PC Main / ATX connector)

The primary task of the Power Supply Unit (PSU) is to provide your motherboard with power. This is done
via the 20-pin or 24-pin connector.

A 24-pin cable is backwards compatible with a 20-pin motherboard; often this cable can be split into
20- and 4-pin sections (as in the image above).

P4 (EPS Connector)

At some point the motherboard's main connector pins were no longer sufficient to provide the processor
(CPU) with power. With overclocked CPUs drawing as much as 200 W, a need arose to provide power directly
to the CPU. Nowadays it is the P4, or EPS, connector that provides the CPU with power.


Cheap motherboards are equipped with a 4-pin connector. More expensive "overclocking" motherboards have
8-pin connectors. The extra 4 pins ensure that enough power can be provided to the CPU when overclocking.
For regular usage there is absolutely no need for the additional pins.
Most PSUs provide two cables, one with 4 pins and one with 8 pins. Obviously you only need to use one of
these cables. It is also possible that your 8-pin cable can be split into two segments to provide
backwards compatibility with cheaper motherboards.

PCI-E Connector (6-pin en 6+2 pin)

The motherboard can provide a maximum of 75W through its PCI-E interface slot. Faster
dedicated graphics cards require much more power. To solve that issue the PCI-E connector was
introduced.


The left 2 pins of the 6+2 pin connector on the right are detached to provide backwards compatibility
with 6-pin graphics cards.

The PCI-E 6-pin connector can supply an additional 75W per cable. So if your graphics card has a single
6-pin connector, it can draw up to 150W (75W from the motherboard + 75W from the cable).
More expensive graphics cards require the 6+2 pin PCI-E connector. With its 8 pins, this connector can
provide up to 150W per cable. A graphics card with a single 6+2-pin connector can therefore draw up to
225W (75W from the motherboard + 150W from the cable).
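The arithmetic above can be captured in a few lines. The 75 W slot limit and the 75 W / 150 W cable
limits come from the text; the function name and example cards are illustrative only.

SLOT_W = 75                                 # power available from the PCI-E slot itself
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}   # extra power per supplementary cable

def max_card_power(connectors):
    """Maximum draw of a graphics card: slot power plus each supplementary cable."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(max_card_power(["6-pin"]))            # 150 W
print(max_card_power(["8-pin"]))            # 225 W
print(max_card_power(["8-pin", "6-pin"]))   # 300 W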

Molex (4 Pin Peripheral Connector)

Molex connectors have been around for a very long time and can deliver 5V (red) or 12V
(Yellow) to hardware peripherals. In the past these guys were often used to connect Hard drives,
CD-ROM players, etc. Even some graphics cards like the Geforce 7800 GS were equipped with
Molex.
However their power draw is limited so nowadays most of their purpose has been replaced by
PCI-E cables and SATA cables. All that is left is powering case fans.


Thanks to its angular side you cannot go wrong when connecting a Molex cable. Keep in mind
that they can be extremely difficult to detach.

SATA Connector

The SATA connector is the guy that made the Molex obsolete. All modern DVD-players, hard
disk drives and SSD’s are powered by SATA power.

Thanks to their L-shape, SATA power connectors can only be connected the right way.

Mini-Molex / Floppy connector

Completely obsolete, but some PSU’s still come with a mini-molex connector. These guys were
used to power floppy disk drives. For those of you who do not remember; these were square
magnetic disks that could contain up to 1.44 MB of data. Basically they were superseded by the
USB stick.


Adapter: Molex to SATA Power cable

Old power supply unit or simply lacking the required number of SATA power connectors? Use a
Molex to SATA connector to power your latest hard disk drive.

Adapter: Molex to PCI-E 6-pins

Need another PCI-E 6-pin cable to power your graphics card? Use the "2x Molex to 1x PCI-E 6-pin"
adapter. Please make sure you connect the two Molex plugs to different cable strands; this reduces the
risk of overloading your power supply. If you don't, the full 75W will flow through a single Molex
cable.


Adapter: ATX adapter

With the introduction of ATX12V 2.0, a change was made to a system with a 24-pin connector. The older
ATX12V versions (1.0, 1.1, 1.2 and 1.3) used a 20-pin connector. In total there are over 12 versions of
the ATX standard, but they are so similar that you do not need to worry about compatibility.
To create backwards compatibility, most modern power supplies allow you to disconnect the last 4 pins of
the main connector. It is also possible to create forward compatibility by using an adapter.


An ATX 20pins to ATX v2 24pins adapter. This cable also demonstrates the ability to detach the last 4
pins.


The main PSU connectors and their pin outputs are illustrated in the diagram below.

Common PSU connectors and their pin outputs

For an ATX power supply, state the voltage levels for the following color
codes

 Red
 Blue
 Yellow
 Orange
 Black

Identify the color codes for the ATX power supply cables


Complete the table below which shows the color codes of ATX power supply.

PIN Signal voltage Color


1 3.3v
2 3.3v
3 Black
4 Red
5 GND
6 5V
7 GND
8 POWER GOOD
9 5V/SB(standby)
10 Yellow

Identify the color codes for the ATX power supply cables shown below

PIN SIGNAL COLOUR


1 3.3V
2 3.3V
3 GROUND
4 5V
5 GROUND
6 5V
7 GROUND
8 POWER OK
9 5V
10 12V

24-pin ATX 12V 2.x Power Supply Connector

Color    Signal          Pin   Pin   Signal          Color
Orange   +3.3 V           1    13    +3.3 V          Orange
                                     +3.3 V sense    Brown
Orange   +3.3 V           2    14    -12 V           Blue
Black    Ground           3    15    Ground          Black
Red      +5 V             4    16    Power on        Green
Black    Ground           5    17    Ground          Black
Red      +5 V             6    18    Ground          Black
Black    Ground           7    19    Ground          Black
Grey     Power good       8    20    Reserved        None
Purple   +5 V standby     9    21    +5 V            Red
Yellow   +12 V           10    22    +5 V            Red
Yellow   +12 V           11    23    +5 V            Red
Orange   +3.3 V          12    24    Ground          Black
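A small lookup table, derived from the pinout above, can be handy when answering the colour-code
exercises that follow. The dictionary simply restates the table; the Python presentation itself is only
an illustration.

ATX_WIRE_VOLTAGE = {
    "orange": "+3.3 V",
    "brown":  "+3.3 V sense",
    "red":    "+5 V",
    "yellow": "+12 V",
    "blue":   "-12 V",
    "purple": "+5 V standby",
    "grey":   "Power good signal",
    "green":  "Power on",
    "black":  "Ground (0 V)",
}

for colour, rail in ATX_WIRE_VOLTAGE.items():
    print(f"{colour:>7}: {rail}")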

Identify the voltage levels for the following ATX power supply color cables

PIN SIGNAL COLOR


1 Orange
2 Orange
3 Black
4 Red
5 Black
6 Red
7 Black
8 Green
9 Purple
10 yellow

With reference to power supplies explain the following


 Power good signal
 Power good delay

Power good signal


The Power Good signal (power-good) prevents a computer from attempting to operate on improper
voltages and damaging itself by alerting it to improper power supply.

 When we first turn on the power supply, voltages are not immediately available on the power
supply outputs: they increase until reaching their correct values. This increase happens in a
fraction of a second (a maximum of 20 ms, or 0.02 s, to be more exact).
 In order to prevent these lower-than-normal voltages from being supplied to the computer, the power
supply has a signal called "power good" (also called "PWR_OK" or simply "PG"), which tells the
computer that the +12 V, +5 V and +3.3 V outputs are at their correct values and can be used, and
that the power supply is ready to work in a continuous fashion. This signal is available on pin
eight (gray wire) of the main power supply connector.
 There is also another reason for this signal to exist: under voltage protection (UVP).

 When the power supply first starts up, it takes some time for the components to get "up to speed"
and start generating the proper DC voltages that the computer needs to operate. Before this time,
if the computer were allowed to try to boot up, strange results could occur since the power might
not be at the right voltage. It can take a half-second or longer for the power to stabilize, and this is
an eternity to a processor that can run half a billion instructions per second! To prevent the
computer from starting up prematurely, the power supply puts out a signal to the motherboard
called "Power Good" (or "PowerGood", or "Power OK", or "PWR OK" and so on) after it
completes its internal tests and determines that the power is ready for use. Until this signal is sent,
the motherboard will refuse to start up the computer.
 In addition, the power supply will turn off the Power Good signal if a power surge or glitch
causes it to malfunction. It will then turn the signal back on when the power is OK again, which
will reset the computer. If you've ever had a brownout where the lights flicker off for a split-
second and the computer seems to keep running but resets itself, that's probably what happened.
Sometimes a power supply may shut down and seem "blown" after a power problem but will
reset itself if the power is turned off for 15 seconds and then turned back on.
 The nominal voltage of the Power Good signal is +5 V, but in practice the allowable range is
usually up to a full volt above or below that value. All power supplies will generate the Power
Good signal, and most will specify the typical time until it is asserted. Some extremely el-cheapo
power supplies may "fake" the Power Good signal by just tying it to another +5 V line. Such a
system essentially has no Power Good functionality and will cause the motherboard to try to start
the system before the power has fully stabilized. Needless to say, this type of power supply is to
be avoided. Unfortunately, you cannot tell if your power supply is "faking" things unless you
have test equipment. Fortunately, if you buy anything but the lowest-quality supplies you don't
really need to worry about this.

Power good delay

 Power Good Delay (PG Delay) is the amount of time it takes a power supply to start up
completely and begin delivering the proper voltages to the connected devices.
 According to the Power Supply Design Guide for Desktop Platform Form Factors, Power Good
Delay, should be 100 ms to 500 ms.
 Power Good Delay is also sometimes called PG Delay or PWR_OK Delay.
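A minimal check of the 100 ms to 500 ms window quoted above might look like the sketch below; the sample
measurements are hypothetical values chosen only to show the comparison.

PG_DELAY_MIN_MS = 100      # window from the desktop form-factor design guide cited above
PG_DELAY_MAX_MS = 500

def pg_delay_in_spec(measured_ms):
    """True if a measured PWR_OK delay falls inside the specified window."""
    return PG_DELAY_MIN_MS <= measured_ms <= PG_DELAY_MAX_MS

for sample_ms in (50, 250, 700):            # hypothetical measurements
    verdict = "OK" if pg_delay_in_spec(sample_ms) else "out of spec"
    print(f"PWR_OK delay {sample_ms} ms: {verdict}")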

State all DC levels required in PC systems in terms of voltages and currents.

VOLTAGE   PURPOSE

-12V      Used on some older types of serial port amplifier circuits. Generally unused on newer systems.
          Current is usually limited to 1A.

-5V       Used on some early personal computers for floppy disk controllers and some ISA add-on cards.
          Generally unused on newer systems. Current is usually limited to 1A.

0V        The zero volt ground (also called common or earth) and reference point for other system
          voltages.

+3.3V     Used to supply power for the processor, some types of memory, some AGP video cards, and other
          digital circuits (most of these components required a +5V supply in older systems).

+5V       Still used to supply the motherboard and some of the components on the motherboard. Note that
          there is also a 5V standby voltage present when the system is powered down which can be
          grounded (e.g. by the user pressing the power switch on the front of the case) to restore
          power to the system.

+12V      Primarily used for devices such as disk drives and cooling fans which have motors of one sort
          or another. These devices have their own power connectors that come directly from the power
          supply unit.

Voltage Use
+12V Disk drive, fans, cooling devices
-12V Serial ports
+3.3V Newer CPUs, video cards
+5V Motherboard, Motherboard components
0V Ground, used to complete circuits with other voltages

With aid of clearly labelled diagrams, describe the operation of an


 On-line Uninterruptable Power Supply (UPS).
 Off -line Uninterruptable Power Supply (UPS).

On line UPS

PRINCIPLE OPERATION

 During normal or even abnormal line conditions, the inverter supplies energy from
the mains through the rectifier, which charges the batteries continuously. In addition
to that it can also provide power factor correction.
 When the line fails, the inverter still supplies energy to the loads from the batteries.
 As a consequence, no transfer time exists during the transition from normal to stored
energy modes.
 Online UPS system is the most reliable UPS configuration due to its simplicity (only
three elements), and the continuous charge of the batteries, which means that they are
always ready for the next power outage.
 This kind of UPS provides total independence between input and output voltage
amplitude and frequency.
 When an overload occurs, the bypass switch connects the load directly to the utility
mains, in order to guarantee the continuous supply of the load, thereby avoiding the
damage to the UPS module (bypass operation).
 In this situation, the output voltage must be synchronized with the utility phase,
otherwise the bypass operation will not be allowed.
 Typical efficiency of online UPS systems is up to 94%, which is limited by the double
conversion effect.

Off line UPS


PRINCIPLE OPERATION
 The inverter is off when the mains power is on, and the output voltage is derived directly
from the mains. The inverter turns on only when the mains supply fails. Its switching
time is less than 5 ms. These UPSs are generally used with PCs or other appliances where a
short interruption (5 ms or less) in the power supply can be tolerated. Usually, sealed
batteries or lead-acid batteries are used. The running time of these supplies is also low
(about 10 to 30 minutes).
 Offline UPSs have high efficiency, since the charger is not continuously on.
 The power handling capacity of the charger is reduced.
 Offline UPSs are not very costly.
 Internal control is simpler in an offline Uninterruptible Power Supply.
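A rough runtime estimate for such a UPS can be made with a simple energy balance, as sketched below. The
battery capacity, inverter efficiency and load are illustrative assumptions; only the resulting
10-30 minute order of magnitude matches the text.

def runtime_minutes(battery_wh, load_w, inverter_efficiency=0.85):
    """Approximate time on battery: usable stored energy divided by load power."""
    return battery_wh * inverter_efficiency / load_w * 60

# e.g. a 12 V, 9 Ah battery (about 108 Wh) feeding a 300 W PC
print(round(runtime_minutes(battery_wh=108, load_w=300)), "minutes")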


The Main Power Problems


Something as simple as a power surge may not seem detrimental; in fact it may go unnoticed until
equipment fails. At the other end of the spectrum, blackouts can cause entire systems to immediately go
dark. While power anomalies are inevitable, they need not harm your systems if the proper steps are
taken to protect them.

With aid of diagrams, define the following terms:


 Surge
 Spike
 Noise
 Black out
 Brown out

a) Surge

 Dramatic increase in voltage above the normal flow of electrical current.


 A power surge lasts for a few nanoseconds, or one-billionth of a second.
 Surge is an unexpected increase in voltage in an electrical current that causes damage to
electrical equipment.
 A power surge takes place when the voltage is 110% or more above normal.

Surges/Spikes (voltage increase from lightning, etc.) can damage equipment incrementally or
catastrophically

Surges and spikes are short-term voltage increases. They are typically caused by lightning strikes, power
outages, short circuits or malfunctions caused by power utility companies. They cause data corruption,
catastrophic and costly equipment damage and incremental damage that degrades equipment performance
and shortens its useful lifespan.

Common causes of surges/spikes:


 Utility company load shifting
 Miswired electrical systems
 Lightning strikes
Problems caused by surges/spikes:
 System lockups
 Incremental or catastrophic equipment damage
 Lost productivity
b) Spike

 Sudden increase in voltage that lasts for a short period and exceeds 100 percent of the normal
voltage on a line.
 Spikes can be caused by lightning strikes, but can also occur when the electrical system comes
back on after a blackout.
 High-voltage spikes occur when there is a sudden voltage peak of up to 6,000 volts. These spikes
are usually the result of nearby lightning strikes,

c) Black out (power outage)

 Complete loss of AC power. A blown fuse, damaged transformer, or downed power line can cause a
blackout.
 A power failure or blackout is a zero-voltage condition that lasts for more than two cycles. It may be
caused by tripping a circuit breaker, power distribution failure or utility power failure. A blackout can
cause data loss or corruption and equipment damage.
 Blackout refers to the total loss of power to an area and is the most severe form of power outage that can
occur
 A blackout, or power outage, is a complete loss of utility power, whether short- or long-term. Blackouts
cause reduced productivity, lost revenue, system crashes and data loss. Unplanned outages may occur as
aging electrical grids and building circuits are overwhelmed by high demand. Blackouts are particularly
dangerous at sites where safety or life support rely on power, such as hospitals, treatment centers and power
plants.

Blackouts, a complete loss of power, result in lost productivity, time and money.

Common causes of blackouts/power outages:

 Utility company failure


 Accidental AC line disconnection
 Tripped circuit breakers
 Severe weather

Problems caused by blackouts/power outages:

 Data loss
 System downtime
 Lost productivity
 Lost revenue

d) Brown out

 Brownout / under voltage / Sag


 A brownout is a voltage deficiency that occurs when the need for power exceeds power
availability. Brownouts typically last for a few minutes, but can last up to several hours,
as opposed to short-term fluctuations like surges or spikes. They are caused by the
disruption of an electrical grid and may be imposed by utility companies when there is an
overwhelming demand for power. Brownouts, more common than blackouts, cause
equipment failures, incremental damage, decreased equipment stability and data loss.

 Reduced voltage level of AC power that lasts for a period of time.


 Brownouts occur when the power line voltage drops below 80 percent of the normal
voltage level.
 Overloading electrical circuits can cause a brownout
 A brownout is a steady lower voltage state. An example of a brownout is what happens
during peak electrical demand in the summer, when utilities can’t always meet the
requirements and must lower the voltage to limit maximum power. When this happens,
systems can experience glitches, data loss and equipment failure.
 A brownout is a drop in voltage in an electrical power supply.
 The term brownout comes from the dimming experienced by lighting when the voltage
sags.
 Brownouts can cause poor performance of equipment or even incorrect operation

87% of power problems are caused by brownouts, not blackouts

Common causes of brownouts/under voltages/sags:

 Inadequate utility service


 Heavy power draw in area/facility
 Poor electrical circuit design

Problems caused by brownouts/under voltages/sags:

 Active data loss


 System lockups
 Lost productivity
 Slow electronic degradation

e) Swell / Overvoltage

Swells are basically the opposite of a brownout: instead of a voltage deficiency, or sag, a swell is
a voltage increase for a long duration (seconds to a minute), as opposed to a brief increase, like a
surge/spike. A swell is caused when the power being provided outweighs the power accepted by
connected equipment, resulting in an increase in voltage. Much like sags, deterioration may not
be apparent until it's too late, resulting in lost data and damaged equipment.

A swell is the opposite of a sag; an increase in voltage instead of a deficiency.

Common causes of swells/overvoltage’s:

 Sudden/large load reductions


 Oversupply of power from utility source
 Fault on a 3-phase system

Problems caused by swells/overvoltage:

 Slow electronic degradation


 Flickering lights
 Overheating and stress on equipment
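The thresholds quoted in sections a) to e) above can be summarised in a small classifier sketch. The
nominal voltage, the near-zero blackout threshold and the one-second boundary between a momentary
spike/surge and a sustained swell are simplifying assumptions made only for illustration.

NOMINAL_V = 220.0                    # assumed nominal mains voltage

def classify(voltage, duration_s):
    """Very rough labelling of a voltage reading using the definitions above."""
    if voltage < 0.05 * NOMINAL_V:
        return "blackout"
    if voltage < 0.80 * NOMINAL_V:                       # below 80% of normal
        return "brownout / sag"
    if voltage >= 1.10 * NOMINAL_V:                      # 110% of normal or more
        return "surge/spike" if duration_s < 1 else "swell"
    return "normal"

for v, t in [(230, 60), (150, 120), (0, 5), (400, 0.001), (260, 30)]:
    print(f"{v:>5.0f} V for {t} s -> {classify(v, t)}")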

f) Line Noise

Line noise refers to distortion on AC, telephone/DSL, network or coaxial lines caused by
Electromagnetic Interference (EMI) and Radio Frequency Interference (RFI). Line noise is
unavoidable and will appear on every signal at some point, though it is not always detrimental, or
even noticeable. It causes incremental electronic circuit damage, data corruption, audio/video
quality problems and confusion between system components. Line noise generated by electronic
devices varies greatly and can be produced by energy disturbances from a variety of sources,
both natural and man-made.

Electrical noise can confuse system logic and damage electronic components, resulting in random server
lockups and premature board failure.

Common causes of line noise:

 Radio transmissions
 High voltage power lines
 Severe weather
 Fluorescent lights

Problems caused by line noise:

 System lockups
 Audio static
 Video "snow"
 Slow electronic degradation

g) Surge protector

 A surge protector is an appliance or device designed to protect electrical devices from voltage
spikes.
 A surge protector attempts to limit the voltage supplied to an electric device by either blocking or
shorting to ground any unwanted voltages above a safe threshold.
 A surge protector is an electrical device that is used to protect equipment against power
surges and voltage spikes while blocking voltage over a safe threshold (approximately
120 V).
 When a threshold is over 120V, a surge protector shorts to ground voltage or blocks the
voltage. Without a surge protector, anything higher than 120V can create component
issues, such as permanent damage, reduced lifespan of internal devices, burned wires and
data loss.
 A surge protector is usually installed in communications structures, process control
systems, power distribution panels or other substantial industrialized systems. Smaller
versions are typically installed in electrical service entrances located office buildings and
residences

The Solution
Affordable solutions protect equipment, data and productivity against the hazards of power
problems. Solutions are available for any size application, from home to enterprise business, and
offer varying levels of protection, ranging from protection against common hazards like surges
and line noise, to the most complete protection available against all hazards.

The chart below illustrates which solutions fit certain needs:

Surge/Spike Line Noise Brownout Swell Blackout

Surge Protector Good Good — — —

Standby UPS Good Good Good Good Good

Line-Interactive UPS Good Good Better Better Good

On-Line UPS Best Best Best Best Best


Surge Protectors

Protect all computers and electronics

Surge protectors provide heavy-duty surge/spike protection and line noise filtration. Premium
surge protectors incorporate more and substantially stronger protective components, as well as
isolated filter banks that eliminate interference between devices plugged into the same surge
protector. Select models include data line protection (telephone/DSL, coaxial and/or Ethernet).

Standby UPS Systems

Protect PCs and workstations

Standby UPS systems provide surge/spike/line noise protection like surge protectors, and they
add battery backup to keep connected equipment operating without interruption during
blackouts. They also provide limited brownout protection by switching to battery power to
correct undervoltages. Select models include data line protection and communication ports that
enable automatic shutdown of connected computers during extended blackouts.


Line-Interactive UPS Systems

Protect workstations, servers, data centers and network equipment

In addition to the protection features offered by standby UPS systems, line-interactive UPS
systems add automatic voltage regulation (AVR). AVR allows the UPS system to adjust voltage
to safe levels during brownouts without switching to battery power, reducing battery wear and
preserving charge levels for blackout protection.

On-Line UPS Systems

Protect servers, VoIP systems and other mission-critical equipment

On-Line UPS systems offer the best protection available against all power problems. True on-
line operation with continuous AC-to-DC-to-AC double conversion completely isolates
electronics from power problems. Precision-regulated output power with pure sine waveform
guarantees maximum stability for connected equipment.

Differentiate
 Surge protector and a surge suppressor
 A suppressor regulates the voltage and makes the power constant in a case of a spike or surge.
 A protector simply detects the surge and turns the unit off.
 A suppressor is good for things like computers that you don't want to keep turning on and off.
 A suppressor suppresses the surge, whereas a protector gets its fuse blown by the surge.

Surge Protector
 Surge protectors are devices used to protect against electrical surges.
 When a surge is detected, a surge protector diverts the surge to the ground, preventing it from
reaching the connected device.
 Surge protectors are normally used with expensive electronic devices, such as computers or
televisions. Since electric surges can happen almost anywhere under the right conditions, surge
protectors are usually considered an easy and cheap investment.
 A protector simply detects the surge and turns the unit off.

Surge Suppressor
 Surge suppressors are devices used to provide a constant voltage to any connected electrical devices.
 If the voltage given to an electrical device is too high or too low, it could cause damage. Surge
suppressors help prevent this, adjusting the provided voltage up or down to keep it at the correct
levels.
 Surge suppressors are not used as often as surge protectors, but can be useful in certain situations.
Some households or businesses may receive so-called "dirty power," where the power fluctuates
frequently. Surge suppressors can be used in such situations to even out the power supply and help
protect electrical devices.
 A suppressor regulates the voltage and makes the power constant in the case of a spike or surge.

Differentiate

 Power conditioner and a Surge protector/suppressor


The difference between a power conditioner and a surge protector mostly lies in their intended
function. A power conditioner takes in power and modifies it based on the requirements of the
machinery to which it is connected. A surge protector doesn't alter the power flowing through it
at all, unless that power is over a certain amount. When the power exceeds the set amount, it
blocks it from passing through. It is not uncommon for the two devices to be in the same unit.

Both power conditioners and surge suppressors are important parts of modern electronics. They
protect the inner workings of devices, often without users even realizing it. Many people go the
extra step of placing additional protective devices between the wall outlets and the electronic
products.

A power conditioner modifies voltage as it passes through. Some systems require very tight or
nonstandard power tolerances, and they use power conditioners to alter the power to meet their
requirements. They are also a common method of prolonging the lifespan of electric devices, as
the properly formed electricity creates less wear on the internal parts of the device.

Most electric systems have power conditioners built into them, usually as very small devices that
are integrated right into an internal circuit board. They monitor the voltage moving across the
board and keep it within a specific tolerance. There are larger power conditioners available,
ranging from small ones in high-end surge protectors all the way to car-sized industrial units
connected to factory machines.

Surge protectors prevent power overloads. When power exceeds a certain amount, they stop it
from passing through. Different surge protectors do this in different ways, but the most common
method is creating a shunt to a ground wire.

This connection to the ground only happens when the power is prevented from passing through
the unit; otherwise, the unit would constantly waste electricity. If a surge protector is improperly
plugged in, such as through a two- or three-pronged adapter, it cannot send power to the ground.
In this case, the surge protector may overload and catch on fire or even send the surge through to
the connected device.

It isn't unusual for a power conditioner and a surge protector to be placed in the same unit. Since
these systems both work on passing voltage, it makes sense to put them together. Some systems
have a very advanced power conditioner that works as a surge protector when needed; this is
common in battery backup systems.

Describe the operation of the following and give advantages and
disadvantages of each

 Switched mode power supply (SMPS)


 Linear Mode Power Supply (LMPS)

Switched Mode Power Supply (SMPS)

 A switched-mode power supply (switching-mode power supply, SMPS, or switcher) is an


electronic power supply that incorporates a switching regulator to convert electrical power
efficiently.
 The pass transistor of a switching-mode supply continually switches between low-dissipation,
full-on and full-off states, and spends very little time in the high dissipation transitions, which
minimizes wasted energy.
 Voltage regulation is achieved by varying the ratio of on-to-off time.
 A switched-mode power supply regulates either output voltage or current by switching ideal storage
elements, like inductors and capacitors, into and out of different electrical configurations. Ideal
switching elements have no resistance when "on" and carry no current when "off", so very little
power is wasted in the conversion.

 A Switched Mode Power Supply uses a switching regulator to convert electric power efficiently.
 An SMPS transfers electric power from a source (the AC mains) to the load by converting the
characteristics of current and voltage.
 An SMPS always provides well regulated power to the load irrespective of input variations.
 An SMPS incorporates a pass transistor that switches very fast, typically between 50 kHz and 1 MHz,
between the on and off states to minimize the energy wasted.
 An SMPS regulates the output power by varying the on-to-off time, so that efficiency is very high
compared to a linear power supply.

Input Rectifier and Filter Stage
 The process of converting AC to DC is called Rectification. SMPS converts AC to DC
 The rectifier produces an unregulated DC voltage which is then sent to a large filter capacitor.

Inverter Chopper Stage


 The inverter "chopper" stage converts DC (whether directly from the input or from the rectifier
and filter stage described above) to AC by running it through a power oscillator.
 The power oscillator drives a very small output transformer with only a few windings, switching at a
frequency in the kilohertz (kHz) range.

Output transformer
 The transformer converts the voltage up or down to the required output level.

Output Rectifier and Filter


 The AC output from the transformer is rectified and converted to DC

Chopper Controller
 A feedback circuit monitors the output voltage and compares it with a reference voltage.
 If there is an error in the output voltage the feedback circuit compensates.
 This part of power supply is called switching regulator.
 Chopper Controller performs the function of switching regulator
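The feedback idea behind the chopper controller can be illustrated with a crude numerical sketch:
compare the output with a reference and nudge the duty cycle until they agree. The averaged "plant"
model and the gains below are arbitrary assumptions, not a real converter design.

V_REF = 5.0          # desired, regulated output voltage
V_IN = 12.0          # unregulated input voltage

duty = 0.3           # initial on/off ratio of the chopper
v_out = 0.0

for _ in range(30):
    error = V_REF - v_out
    duty = min(max(duty + 0.05 * error, 0.0), 1.0)   # controller adjusts on-time
    v_out += 0.5 * (duty * V_IN - v_out)             # crude averaged output response

print(f"duty cycle settles near {duty:.2f}, output near {v_out:.2f} V")

In a real SMPS the same comparison is carried out continuously in hardware, switching the pass
transistor many thousands of times per second.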

ADVANTAGES
1. The switch mode power supply has a smaller in size.
2. The SMPS has light weight.
3. It has a better power efficiency typically 60 to 70 percent.
4. It has a strong anti-interference.
5. SMPS has wide output range.
6. Low heat generation in SMPS.

1. Greater efficiency because the switching transistor dissipates little power when acting as a
switch
2. Smaller size and lighter weight from the elimination of heavy line-frequency transformers
3. High efficiency: The switching action means the series regulator element is either on or off
and therefore little energy is dissipated as heat and very high efficiency levels can be
achieved.
4. Compact: As a result of the high efficiency and low levels of heat dissipation, the switch
mode power supplies can be made more compact.
5. Flexible technology: Switch mode power supply technology can be sued to provide high
efficiency voltage conversions in voltage step up or "Boost" applications or step down
"Buck" applications

DISADVANTAGES
1. The switch mode power supply (SMPS) is more complex.
2. The SMPS has higher output ripple and its regulation is worse.
3. It can be used only as a step down regulator.
4. It has only one output voltage.
5. It has high frequency electrical noise.
6. SMPS also cause harmonic distortion.

1. Greater complexity, due to the generation of high-amplitude, high-frequency energy that the low-
pass filter must block to avoid EMI

2. Noise: The transient spikes that occur from the switching action on switch mode power supplies
are one of the largest problems. spikes or transients can cause electromagnetic or RF interference
which can affect other nearby items of electronic equipment,

3. External components: These components all require space, and add to the cost.

4. Expert design required: It is often possible to put together a switch mode power supply that
works, but ensuring that it performs to the required specification can be more difficult. Keeping
ripple and interference levels within limits can be particularly tricky.

Linear Mode Power Supply (LMPS)


 A linear regulator provides the desired output voltage by dissipating excess power in ohmic losses
 A linear regulator regulates either output voltage or current by dissipating the excess electric
power in the form of heat, and hence its maximum power efficiency is voltage-out/voltage-in,
since the voltage difference is dropped across the regulator and wasted (see the worked example after this list).
 Uses a transformer to convert the voltage from the wall outlet (mains) to a different, usually a
lower voltage.
 The voltage produced by an unregulated power supply will vary depending on the load and on
variations in the AC supply voltage
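
As a worked example of the efficiency point above (maximum efficiency = Vout/Vin), the short sketch below uses illustrative figures only to compute the headroom loss of a linear regulator dropping an unregulated input down to a fixed output.

```python
# Illustrative headroom-loss calculation for a linear regulator.
# Every watt not delivered to the load is dissipated as heat in the pass transistor.

v_in = 12.0    # unregulated DC input (volts) - example value
v_out = 5.0    # regulated output (volts)     - example value
i_load = 1.0   # load current (amps)          - example value

p_load = v_out * i_load             # power delivered to the load: 5 W
p_loss = (v_in - v_out) * i_load    # headroom loss in the pass transistor: 7 W
efficiency = p_load / (p_load + p_loss)

print(f"Load power:    {p_load:.1f} W")
print(f"Headroom loss: {p_loss:.1f} W")
print(f"Efficiency:    {efficiency:.0%}")   # about 42%, i.e. Vout / Vin
```

The wider the gap between input and output voltage, the more power the regulator must burn off as heat, which is why linear supplies need heat sinks and why their efficiency figures are modest.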


ADVANTAGES

1. Simplicity: One can purchase an entire linear regulator in a package and simply add two filter
capacitors for storage and stability. Even a design engineer building a linear regulator from
scratch can achieve it with the help of design books and a little effort.
2. Quiet Operation & Load-handling Capability:
The linear regulator generates a negligible amount of electrical noise on its output. Its dynamic
load response time (the time the power supply takes to respond to changes in the load current) is
very short.
3. Low Cost:
For output power of less than 10W, linear power supply’s component costs and manufacturing
costs are less than the comparable switching power supply’s cost.
4. Low noise: The use of linear technology without any switching element means that noise is
kept to a minimum and the annoying spikes found in switching power supplies are not present.
5. Established technology: Linear power supplies have been in widespread use for many years and
their technology is well established and understood.

DISADVANTAGES

1. Range of application: It can be used only as a step down regulator. In case of AC-DC power
supplies, a transformer with rectification and filtering must be placed before the linear power
supply.
2. Number of Outputs: It has only one output voltage. To get additional output voltage, an entire
separate linear regulator must be added.
3. Average Efficiency: Normally linear regulators have 30% to 60% efficiency. This means that for
every watt delivered to the load, roughly another watt or more can be lost within the supply. This
loss is called headroom loss and it occurs in the pass transistor, which therefore needs a heat sink
for heat dissipation.
4. Efficiency: In view of the fact that a linear power supply uses linear technology, it is not
particularly efficient. Efficiencies of around 50% are not uncommon, and under some conditions
they may offer much lower levels.
5. Size: The use of linear technology means that the size of a linear power supply tends to be larger
than other forms of power supply.
6. Heat dissipation: The use of a series or parallel (less common) regulating element means that
significant amounts of heat are dissipated and this needs to be removed.

Compare and contrast Switched Mode Power Supply (SMPS) & Linear Mode Power Supply (LMPS)

 The traditional linear power supplies are typically heavy, durable, and have low noise
across low and high frequencies.
 For this reason they are mostly suitable for lower power applications where the weight
does not pose a problem.
 The switching power supplies are much lighter, more efficient and durable, but they generate
some high frequency noise due to their design.
 For this reason, the switching power supplies are not suitable for high frequency audio
applications but are great for high power applications.
 Other than that, these two types are pretty much swappable for various applications, and
they cost about the same to make.
 Switching power supplies are used more broadly nowadays than linear power supplies.

BASIS: SWITCHED MODE POWER SUPPLY (SMPS) vs LINEAR MODE POWER SUPPLY (LMPS)

Circuit Design
 SMPS: SMPSs are more complicated and difficult to design.
 LMPS: Simple to moderately complex.

Applications
 SMPS: Higher power applications; can handle a large output current.
 LMPS: Lower power applications; handles a lower output current.

Cost factor
 SMPS: More expensive for lower powers than linear regulators.
 LMPS: Linear supplies are less expensive for lower currents/powers.

Part Count
 SMPS: Has a lot of parts.
 LMPS: Has few parts.

Output voltage
 SMPS: Any voltages available, limited only by transistor breakdown voltages in many circuits. Voltage varies little with load.
 LMPS: With a transformer used, any voltages are available; if transformerless, limited to what can be achieved with a voltage doubler. If unregulated, voltage varies significantly with load.

Noise
 SMPS: Noisier due to the switching frequency of the SMPS. An unfiltered output may cause glitches in digital circuits or noise in audio circuits.
 LMPS: Low noise.

Reliability
 SMPS: Less reliable.
 LMPS: Highly reliable.

Efficiency
 SMPS: High efficiency. Efficiencies in the region of 80% are common. The transistors are switched fully on or fully off, so there are very few resistive losses between the input and the load.
 LMPS: Low efficiency. Output voltage is regulated by dissipating excess power as heat, resulting in a typical efficiency of 30-40%.

EMI (electromagnetic interference)
 SMPS: Mild high-frequency interference may be generated by AC rectifier diodes under heavy current loading.
 LMPS: Very low; EMI filters reduce the disruptive interference.

Leakage
 SMPS: High.
 LMPS: Low.

Risk of equipment damage
 SMPS: Failure of a component in the SMPS itself can cause further damage to other PSU components; can be difficult to troubleshoot.
 LMPS: Very low, unless a short occurs between the primary and secondary windings or the regulator fails by shorting internally.

Size (power density)
 SMPS: Small size (high power density).
 LMPS: Large size (low power density).

Weight (power to weight ratio)
 SMPS: Light (high).
 LMPS: Heaviest (low).

Heat
 SMPS: Dissipates very little power (low heat loss).
 LMPS: Dissipates a lot of power (high heat loss).

Frequency & capacitor size
 SMPS: Due to the higher frequency, SMPSs also need much smaller smoothing capacitors on the final output.
 LMPS: The low frequency AC needs quite large capacitors to smooth the rectified output.

SMPS vs Linear Power Supply

Operation
 SMPS: The SMPS directly rectifies the mains AC without reducing the voltage. The converted DC is then switched at high frequency through a smaller transformer to reduce it to the desired voltage level. Finally, the high-frequency AC signal is rectified to the DC output voltage.
 LMPS: A linear power supply reduces the voltage to the desired value at the beginning using a bigger transformer. After that, the AC is rectified and filtered to make the output DC voltage.

Voltage Regulation
 SMPS: Voltage regulation is done by controlling the switching. The output voltage is monitored by the feedback circuit and the variation of the voltage is used to control the switching.
 LMPS: The rectified and filtered DC voltage is passed through a controllable resistance (effectively a voltage divider) to make the output voltage. This resistance is controlled by a feedback circuit that monitors the output voltage variation.

Efficiency
 SMPS: The heat generation in an SMPS is comparatively low since the switching transistor operates in the cut-off and saturation regions. The small size of the output transformer also keeps the heat loss small. Therefore, the efficiency is higher (85-90%).
 LMPS: The excess power is dissipated as heat to keep the voltage constant in a linear power supply. Moreover, the input transformer is much bulkier; thus, transformer losses are higher. Therefore, the efficiency of a linear power supply is as low as 60%.

Build
 SMPS: The transformer of an SMPS does not need to be large as it operates at high frequency, so the weight of the transformer is also less. As a result, the size as well as the weight of an SMPS is much lower than that of a linear power supply.
 LMPS: Linear power supplies are much bulkier since the input transformer has to be large due to the low frequency it operates on. As more heat is generated in the voltage regulator, heat sinks must be used as well.

Noise and Voltage Distortions
 SMPS: The SMPS generates high-frequency noise due to switching. This passes into the output voltage, and sometimes also into the input mains. Harmonic distortion of the mains power is also possible with SMPSs.
 LMPS: Linear power supplies do not produce noise in the output voltage. Harmonic distortion is much less than that of SMPSs.

Applications
 SMPS: An SMPS can be used in portable devices due to its small build. But as it generates high-frequency noise, SMPSs cannot be used for noise-sensitive applications such as RF and audio applications.
 LMPS: Linear power supplies are much larger and cannot be used for portable devices. Since they do not generate noise and the output voltage is clean, they are used for most electrical and electronic tests in laboratories.

COMPARISON: SMPS vs LMPS

Size and weight
 SMPS: Smaller, due to the higher operating frequency (typically 50 kHz - 1 MHz).
 LMPS: If a transformer is used, large due to the low operating frequency (mains power frequency is 50 or 60 Hz); small if transformerless.

Efficiency, heat, and power dissipation
 SMPS: Regulated using duty cycle control, which draws only the power required by the load. In all SMPS topologies, the transistors are always switched fully on or fully off.
 LMPS: If regulated, the output voltage is regulated by dissipating excess power as heat, resulting in a typical efficiency of 30-40%; if unregulated, transformer iron and copper losses are significant.

Complexity
 SMPS: Consists of a controller IC, one or several power transistors and diodes as well as a power transformer, inductors, and filter capacitors.
 LMPS: An unregulated supply may be just a diode and capacitor; a regulated supply has a voltage regulating IC or discrete circuit and a noise filtering capacitor.

Radio frequency interference
 SMPS: EMI/RFI is produced due to the current being switched on and off sharply. Therefore, EMI filters and RF shielding are needed to reduce the disruptive interference.
 LMPS: Mild high-frequency interference may be generated by AC rectifier diodes under heavy current loading, while most other supply types produce no high-frequency interference. Some mains hum induction into unshielded cables, problematical for low-signal audio.

Power factor
 SMPS: Ranging from low to medium, since a simple SMPS without PFC draws current spikes at the peaks of the AC sinusoid.
 LMPS: Low for a regulated supply, because current is drawn from the mains at the peaks of the voltage sinusoid.

Working principle of dot matrix, inkjet and laser printers
Dot matrix printer
Definition:
 Dot matrix printers are known as impact printers.
 They create an image on paper by striking pins against an inked ribbon.
 The ink is transferred to the paper as closely spaced dots that form each character (see the sketch after this list).
 The more pins, the better the print quality. 24-pin dot matrix printers can print at near letter-quality.
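
To illustrate how characters are built up from a grid of pin strikes, the sketch below renders one character as a dot grid. The 5x7 pattern is made up purely for demonstration and is not taken from any real printer font.

```python
# Illustrative sketch: a character represented as a dot matrix of pin strikes.
# Each '1' marks a position where a pin fires against the ribbon; '0' is blank paper.

LETTER_H = [          # made-up 5x7 pattern for the letter 'H'
    "10001",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def render(pattern):
    """Print the dot grid. A real serial printer fires a vertical column of pins
    and sweeps the head horizontally, producing one column of this grid at a time."""
    for row in pattern:
        print("".join("*" if dot == "1" else " " for dot in row))

render(LETTER_H)
```

A 9-pin head can only place a coarse column of dots per pass, giving draft quality; a 24-pin head packs the dots far more densely, which is why its output approaches letter quality.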
Introduction & Overview of Dot matrix
printer
Dot-Matrix Printer
 The dot-matrix printer (sometimes called a matrix
printer or an impact printer) allows you to print
documents on paper thanks to the "back and forth"
motion of a carriage housing a print head.

 The head is made up of tiny metal pins, driven by


electromagnets, which strike a carbon ribbon called an
"inked ribbon", located between the head and the
paper.
Continue…

 The carbon ribbon scrolls by so that there is always ink


on it. At the end of each line, a roller makes the sheet
advance.

 The most recent dot-matrix printers are equipped with 24-


needle printer heads, which allows them to print with a
resolution of 216 dpi (dots per inch).
Working principle of Dot matrix printers

 It is widely used to print multipart forms and address


labels. Also known as a "serial dot matrix printer," the
tractor and sprocket mechanism in these devices
handles thicker media better than laser and inkjet
printers.
Continue…
Hammers Hit the Ribbon
 The dot matrix printer uses one or two columns of dot
hammers that are moved across the paper. The hammers
hit the ribbon into the paper, which causes the ink to be
deposited. The more hammers, the higher the resolution.
For example, 9-pin heads produce draft quality text, while
24-pin heads produce typewriter quality output. Speeds
range from 200 to 400 cps, which is about 90 to 180 lpm.

Dot Matrix Mechanism


Continue…
 Dot matrix printers print columns of dots in a serial fashion.
The more dot hammers (pins), the better looking the printed
results.

Dot Matrix Printer


Line Dot Matrix Printers Technology
 There are several printer technologies used in today's
home, office and banking printers.

 Dot matrix printers are divided on two main groups:


 serial dot matrix printers

 line printers (or line dot matrix printers).

 Line printers as well as serial dot matrix printers use pins


to strike against the inked ribbon, making dots on the
paper and forming the desired characters.

 The differences are that line printers use hammer bank


(or print-shuttle) instead of print head, this print-shuttle
has hammers instead of print wires, and these hammers
are arranged in a horizontal row instead in vertical
column.
Continue…
 The hammer bank uses the same technology as the
permanent magnet print head with the small difference
that instead of print wires the print-shuttle has hammers.
Printing mechanism

 The printing mechanism works as follows. The
permanent magnetic field holds the hammer spring in a
stressed, ready-to-strike position. The driver sends an
electrical current to the hammer coil, which then creates an
electromagnetic field opposite to the permanent
magnetic field. When both fields equalize, the energy
stored in the spring is released to strike the hammer
against the ribbon and print a dot on the paper. The
hammer printing mechanism is shown in action in the
picture below.
Continue…
 The line printer mechanism in action:
How the line dot matrix printers
work?
 During the printing process the print-shuttle vibrates in the horizontal
direction at high speed while the print hammers are fired
selectively. So each hammer may print a series of dots in the
horizontal direction for one pass of the shuttle; then the paper
advances one step and the shuttle prints the following row
of dots.
The line printing process

 Line matrix printers are the right solutions for high-volume


impact printing and are superior in speed, reliability and quality.
As price-performance leaders, line printers cost less to service
and less to use. The fastest line matrix printers available on the
market are Tally T6218 and Printronix P5220, with a claimed print
speed between 1800 and 2000 lines per minute (lpm).
Continue…

Line dot matrix printer Specifications


Features:
Print Technology: Line impact dot matrix
Print Speed: 500 - 2000 lpm (lines per minute, draft)
Graphics Resolution: 60 - 240 DPI
Copies (Original +): 5 - 9
Workload (Duty cycle): 60,000 - 600,000 pages per month
Price [US$]: 3,000 - 13,000 $
Cost Per Page: 0.1 - 0.15 ¢
Characteristics of Dot- matrix
printers
Dot-matrix printers vary in two important characteristics:
 speed: Given in characters per second (cps), the speed
can vary from about 50 to over 500 cps. Most dot-matrix
printers offer different speeds depending on the quality
of print desired.

 print quality: Determined by the number of pins (the


mechanisms that print the dots), it can vary from 9 to 24.
The best dot-matrix printers (24 pins) can produce near
letter-quality type, although you can still see a difference
if you look closely.

 In addition to these characteristics, you should also


consider the noise factor. Compared to laser and ink-jet
printers, dot-matrix printers are notorious for making a
racket.
Advantages of Dot-matrix printers
The advantages are:
 low purchase cost.

 can handle multipart forms.

 cheap to operate, just new ribbons.

 rugged and low repair cost.


Disadvantages of Dot-matrix
printers
The disadvantages are:
 noisy.

 low resolution. You can see the dots making up each


character.

 Not all can do colour.

 Colour looks faded and streaky.


Future of dot-matrix printers

 The main use of dot-matrix printers is in areas of


intensive transaction-processing systems that churn out
quite a lot of printing.

 Many companies who might have started off with dot-


matrix printers are not easily convinced to go for printers
based on other technologies because of the speed
advantage of dot-matrix printers.
Inkjet Printing
Definition:
Typically, inkjet printing forms images by spraying tiny
droplets of liquid ink onto paper. Small size and precision
placement of the dots of ink produce very near photo-
quality images.
Some inkjet printers employ a hybrid dye-sublimation
process. The color is contained in cartridges, heated,
vaporized, and laid down a strip at a time rather than a page
at a time, creating an effect closer to continuous tone
than traditional inkjet technology.
Also Known As: bubble jet | thermal ink
Continue…
Examples:
 Currently, inkjet printing is the primary printer technology
for home, home offices, and many small businesses.

 Inkjet printers are inexpensive and produce good color


output but can be slow.

 Best results are normally achieved when printing to


specially coated inkjet or photo papers.
Introduction & Overview of Inkjet
printer
 Inkjet Printer and Bubble Jet Printer
 The inkjet printer technology was originally invented by
Canon. It is based on the principle that a heated fluid
produces bubbles.
 The researcher who discovered this had accidentally
brought a syringe filled with ink into contact with a
soldering iron. This created a bubble in the syringe that
made the ink in the syringe shoot out.
 Today's printer heads are made up of several nozzles
(up to 256), equivalent to several syringes, which are
heated up to between 300 and 400°C several times per
second.
 Each nozzle produces a tiny bubble that ejects an
extremely fine droplet. The vacuum caused by the
decrease in pressure then draws fresh ink into the nozzle, ready for the next bubble.
Continue…

 Generally, we make a distinction between the two


different technologies:
 Bubble jet printers use nozzles that have their own built-in
heating element; thermal technology is used here.
 Piezoelectric inkjet printers use nozzles based on piezoelectric
technology. Each nozzle works with a piezoelectric
crystal that changes shape when an electrical pulse excites it
and ejects an ink droplet.
Inkjet and Bubble-jet Printers Technology
 Inkjet printers technology development starts in the early
1960s. The first inkjet printing device was patented by
Siemens in 1951, which led to the introduction of one of the
first inkjet chart recorders.

 The continuous inkjet printer technology was developed


later by IBM in the 1970s. The continuous inkjet technology
basis is to deflect and control a continuous inkjet droplet
stream direction onto the printed media or into a gutter for
recirculation by applying an electric field to previously charged
inkjet droplets.

 The drop-on-demand inkjet printer technology was brought to
the market in 1977 when Siemens introduced the PT-80 serial
character printer. The drop-on-demand printer ejects ink
droplets only when they are needed to print on the media. This
method eliminates the complexity of the hardware required for
the continuous inkjet printing technology. In these first inkjet
printers ink drops are ejected by a pressure wave created by
the mechanical motion of the piezoelectric ceramic.
Continue…

Inkjet printer drop-on-demand technology with


piezoelectric actuator

At the same time Canon developed the bubble jet printer


technology, a drop-on-demand inkjet printing method
where ink drops were ejected from the nozzle by the fast
growth of an ink vapor bubble on the top surface of a small
heater. Shortly thereafter, Hewlett-Packard independently
developed a similar inkjet printing technology and named it
thermal inkjet.
Continue…

Bubble jet printer drop-on-demand technology

The most popular inkjet and bubble-jet printers use serial


printing process. Similarly to dot matrix printers, serial
inkjet printers use print heads with a number of nozzles
arranged in vertical columns. The printing process is the
same as in dot matrix printers.
Continue…

Serial Inkjet printer in action

There are also available inkjet and bubble-jet printers


analogous to line dot matrix printers for high speed
printing applications. The image printing process is
similar to that in LED printers.
Continue…
Line inkjet printer printing process

The greatest advantages of inkjet printers are, quiet


operation, capability to produce color images even with
photographic quality and the low printer prices. The
down side is that although inkjet printers are generally
cheaper to buy than lasers, they are far more expensive
to maintain. When it comes to comparing the cost per
page, ink jet printers work out many times more
expensive than laser printers. There are some
exceptions of course for some heavy-duty industrial
printers. Tally, for example, claims the T3016 SprintJet prints at
only 1/3 of a cent per page.
Continue…
Printer Features: Specifications
Print Technology: Inkjet or Bubble-jet
Print Speed: 1 - 20 ppm (pages per minute)
Graphics Resolution: 300 - 1200 DPI
Copies (Original +): 0
Workload (Duty cycle): 6,000 - 60,000 pages per month
Price [US$]: 30 - 3,000 $ (up to 19,000 $ for large-format printers)
Cost Per Page: 3.0 - 30.0 ¢
Working principle of Ink-jet printers
 Inkjet printers – let us spray

 Inkjet printers literally spray liquid ink through a miniature


nozzle
similar to your garden hose nozzle. These printers are very
quiet and are moderately priced, and the print quality rivals
that of a laser printer.

 The printhead contains 4 cartridges of different colored ink:


cyan (blue), magenta, yellow and black (CMYK). It moves
along a bar from one side of the paper to the other, writing
as it goes. The formatting information and data sent to it
activates the chambers of the ink cartridges.

 When the designated nozzle is selected, an electrical pulse


flows through thin resistors in the ink chambers that form
the character to be printed.
Continue…
 The resistor is heated and used to heat a thin layer of ink
in each selected chamber, causing the ink to boil or
expand to form a bubble of vapor.

 This expansion causes pressure on the ink, which


pushes it through the nozzle onto the paper. Your page
is printed.
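
To get a rough feel for why precise droplet placement matters, the sketch below estimates how many addressable dot positions a page offers at a given resolution. The resolution and printable area are example values, not the specification of any particular printer.

```python
# Illustrative estimate of the addressable dot positions on one printed page.

dpi = 1200              # printer resolution in dots per inch (example value)
printable_w_in = 8.0    # printable width in inches (example value)
printable_h_in = 10.5   # printable height in inches (example value)

dots_across = int(dpi * printable_w_in)
dots_down = int(dpi * printable_h_in)
positions = dots_across * dots_down

print(f"{dots_across} x {dots_down} = {positions:,} dot positions per page")
# At 1200 dpi this comes to over 120 million positions, any of which may
# receive a droplet from one or more of the CMYK nozzles.
```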
Advantages of Inkjet printers

Advantages:
 Colour output quality is very good.
 It is faster than dot matrix and daisy wheel printers.
 Most models are relatively lightweight and compact, so
they don't take up too much space on the desk.
Disadvantages of Inkjet printers

 Due to the cost of ink, running an inkjet printer over time
is more expensive than running a laser printer.

 Prints emerge from the printer slightly wet and may need
time to dry.

 Printing is slower and therefore inkjets aren't designed


for high volume printing.
Introduction & Overview of Laser
printers
 In 1975, IBM introduced the first laser printer, the model
3800. Later, Siemens came out with the ND 2 and Xerox
with the 9700. These self-contained printing presses
were online to a mainframe or offline, accepting print
image data on tape or disk.

In 1984, HP introduced the LaserJet, the first desktop


laser printer, which rapidly became a huge success and
a major part of the company's business. Desktop lasers
made the clackety daisy wheel printers obsolete, but not
dot matrix printers, which are still widely used for labels
and multipart forms.
Continue…

The Laser Mechanism

The Laser Mechanism


The laser printer uses electrostatic charges to (1) create an image
on the drum, (2) adhere toner to the image, (3) transfer the toned
image to the paper, and (4) fuse the toner to the paper. The laser
creates the image by "painting" a negative of the page to be
printed on the charged drum. Where light falls, the charge is
dissipated, leaving a positive image to be printed.
Working principle of Laser printers
How Laser Printers Work
The Basics: Static Electricity
The primary principle at work in a laser printer is static
electricity, the same energy that makes clothes in the dryer
stick together or a lightning bolt travel from a thundercloud to
the ground. Static electricity is simply an electrical charge built
up on an insulated object, such as a balloon or your body.
Since oppositely charged atoms are attracted to each other,
objects with opposite static electricity fields cling together.
Continue…
The path of a piece of paper through a laser printer

A laser printer uses this phenomenon as a sort of "temporary


glue." The core component of this system is the photoreceptor,
typically a revolving drum or cylinder. This drum assembly is
made out of highly photoconductive material that is discharged
by light photons.
Continue…

The basic components of a laser printer


Continue…
 The Basics: Drum
Initially, the drum is given a total positive charge by the
charge corona wire, a wire with an electrical current running
through it. (Some printers use a charged roller instead of a
corona wire, but the principle is the same.) As the drum
revolves, the printer shines a tiny laser beam across the
surface to discharge certain points. In this way, the laser
"draws" the letters and images to be printed as a pattern of
electrical charges -- an electrostatic image. The system can
also work with the charges reversed -- that is, a positive
electrostatic image on a negative background.

The laser "writes" on a


photoconductive revolving drum.
Continue…
 After the pattern is set, the printer coats the drum with positively
charged toner -- a fine, black powder. Since it has a positive
charge, the toner clings to the negative discharged areas of the
drum, but not to the positively charged "background." This is
something like writing on a soda can with glue and then rolling it
over some flour: The flour only sticks to the glue-coated part of
the can, so you end up with a message written in powder.
 With the powder pattern affixed, the drum rolls over a sheet of
paper, which is moving along a belt below. Before the paper
rolls under the drum, it is given a negative charge by the
transfer corona wire (charged roller). This charge is stronger
than the negative charge of the electrostatic image, so the
paper can pull the toner powder away. Since it is moving at the
same speed as the drum, the paper picks up the image pattern
exactly. To keep the paper from clinging to the drum, it is
discharged by the detac corona wire immediately after picking up the toner.
Continue…
 The Basics: Fuser
Finally, the printer passes the paper through the fuser, a pair of
heated rollers. As the paper passes through these rollers, the
loose toner powder melts, fusing with the fibers in the paper.
The fuser rolls the paper to the output tray, and you have your
finished page. The fuser also heats up the paper itself, of
course, which is why pages are always hot when they come out
of a laser printer or photocopier.
Continue…
 So what keeps the paper from burning up? Mainly, speed -- the
paper passes through the rollers so quickly that it doesn't get
very hot.

 After depositing toner on the paper, the drum surface passes


the discharge lamp. This bright light exposes the entire
photoreceptor surface, erasing the electrical image. The drum
surface then passes the charge corona wire, which reapplies
the positive charge.

 Conceptually, this is all there is to it. Of course, actually


bringing everything together is a lot more complex. In the
following sections, we'll examine the different components in
greater detail to see how they produce text and images so
quickly and precisely.
Continue…
The Controller: The Conversation
Before a laser printer can do anything else, it needs to receive the
Page data and figure out how it's going to put everything on the
paper. This is the job of the printer controller.
The printer controller is the laser
printer's main onboard computer. It talks to the host computer (for
example, your PC) through a communications port, such as a
parallel port or USB port. At the start of the printing job, the laser
printer establishes with the host computer how they will exchange
data. The controller may have to start and stop the host computer
periodically to process the information it has received.

A typical laser printer has a few different types of


communications ports.
Continue…
 In an office, a laser printer will probably be connected to
several separate host computers, so multiple users can print
documents from their machine. The controller handles each
one separately, but may be carrying on many
"conversations" concurrently. This ability to handle several
jobs at once is one of the reasons why laser printers are so
popular.

The Controller: The Language


 For the printer controller and the host computer to

communicate, they need to speak the same page


description language. In earlier printers, the computer sent
a special sort of text file and a simple code giving the printer
some basic formatting information. Since these early
printers had only a few fonts, this was a very straightforward
process.
Continue…
 These days, you might have hundreds of different fonts to
choose from, and you wouldn't think twice about printing a
complex graphic. To handle all of this diverse information,
the printer needs to speak a more advanced language.

 The primary printer languages these days are Hewlett


Packard's Printer Command Language (PCL) and
Adobe's Postscript. Both of these languages describe the
page in vector form -- that is, as mathematical values of
geometric shapes, rather than as a series of dots (a bitmap
image). The printer itself takes the vector images and
converts them into a bitmap page. With this system, the
printer can receive elaborate, complex pages, featuring any
sort of font or image. Also, since the printer creates the
bitmap image itself, it can use its maximum printer
resolution.
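
The sketch below is a toy illustration of the raster image processor idea: a page described as geometry (here just one straight line segment) is converted into the array of dots the laser will actually write. It is not PCL or PostScript, just a minimal stand-in for the vector-to-bitmap step.

```python
# Toy raster image processor: turn a vector description (one line segment)
# into a bitmap of dots, which is the form the laser scanning assembly needs.

WIDTH, HEIGHT = 40, 10
bitmap = [[0] * WIDTH for _ in range(HEIGHT)]

def draw_line(x0, y0, x1, y1):
    """Sample points along the segment and set the nearest dot positions."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(steps + 1):
        t = i / steps
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        bitmap[y][x] = 1

draw_line(2, 8, 37, 1)    # the "page description": one diagonal stroke

for row in bitmap:        # each 1 becomes a laser pulse, each 0 no pulse
    print("".join("#" if dot else "." for dot in row))
```

Because the printer performs this conversion itself, it can rasterise the page at the full resolution of its own engine, regardless of how the host computer described the page.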
Continue…
 Some printers use a graphical device interface (GDI) format
instead of a standard PCL. In this system, the host computer
creates the dot array itself, so the controller doesn't have to
process anything -- it just sends the dot instructions on to the
laser.
 But in most laser printers, the controller must organize all of the
data it receives from the host computer. This includes all of the
commands that tell the printer what to do -- what paper to use,
how to format the page, how to handle the font, etc. For the
controller to work with this data, it has to get it in the right order.

The Controller: Setting up the Page


 Once the data is structured, the controller begins putting the
page together. It sets the text margins, arranges the words and
places any graphics. When the page is arranged, the raster
image processor (RIP) takes the page data, either as a whole
or piece by piece, and breaks it down into an array of tiny dots.
As we'll see in the next section, the printer needs the page in
this form so the laser can write it out on the photoreceptor
drum.
Continue…
 In most laser printers, the controller saves all print-job data in
its own memory. This lets the controller put different printing
jobs into a queue so it can work through them one at a time.
It also saves time when printing multiple copies of a
document, since the host
computer only has to send the data once.
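
The queueing behaviour described above can be pictured with a very small sketch. The job names and the simple first-in, first-out policy are assumptions for illustration, not a description of any particular printer's firmware.

```python
# Minimal sketch of a printer controller's job queue (first in, first out).
from collections import deque

job_queue = deque()

def receive_job(name, pages):
    """Store the whole print job in the controller's memory before printing."""
    job_queue.append({"name": name, "pages": pages})

def process_queue():
    """Work through the queued jobs one at a time."""
    while job_queue:
        job = job_queue.popleft()
        print(f"printing '{job['name']}' ({job['pages']} pages)")

receive_job("report.doc", 12)    # jobs arriving from several host computers
receive_job("invoice.pdf", 2)
receive_job("labels.txt", 1)
process_queue()
```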

The Laser Assembly


 Since it actually draws the page, the printer's laser system --

or laser scanning assembly -- must be incredibly precise.


The traditional laser scanning assembly includes:
 A laser

 A movable mirror

 A lens
Continue…
 The laser receives the page data -- the tiny dots that make
up the text and images -- one horizontal line at a time. As the
beam moves across the drum, the laser emits a pulse of light
for every dot to be printed, and no pulse for every dot of
empty space.

The laser doesn't actually move the beam itself. It bounces the
beam off a movable mirror instead. As the mirror moves, it
shines the beam through a series of lenses. This system
compensates for the image distortion caused by the varying
distance between the mirror and points along the drum.
Continue…
Writing the Page
 The laser assembly moves in only one plane, horizontally.
After each horizontal scan, the printer moves the
photoreceptor drum up a notch so the laser assembly can
draw the next line. A small print-engine computer
synchronizes all of this perfectly, even at dizzying speeds.
 Some laser printers use a strip of light emitting diodes
(LEDs) to write the page image, instead of a single laser.
Each dot position has its own dedicated light, which means
the printer has one set print resolution. These systems cost
less to manufacture than true laser assemblies, but they
produce inferior results. Typically, you'll only find them in
less expensive printers.

Toner Basics
 One of the most distinctive things about a laser printer (or
photocopier) is the toner. It's such a strange concept for the
paper to grab the "ink" rather than the printer applying it. And
it's even stranger that the "ink" isn't really ink at all.
Continue…

 So what is toner? The short answer is: It's an electrically-


charged powder with two main ingredients: pigment and
plastic.

 The role of the pigment is fairly obvious -- it provides the


coloring (black, in a monochrome printer) that fills in the text
and images. This pigment is blended into plastic particles,
so the toner will melt when it passes through the heat of the
fuser. This quality gives toner a number of advantages over
liquid ink. Chiefly, it firmly binds to the fibers in almost any
type of paper, which means the text won't smudge or bleed
easily.
Continue…

A developer bead coated with small toner


particles

Applying Toner
 So how does the printer apply this toner to the electrostatic
image on the drum? The powder is stored in the toner
hopper, a small container built into a removable casing. The
printer gathers the toner from the hopper with the developer
unit. The "developer" is actually a collection of small,
negatively charged magnetic beads. These beads are
attached to a rotating metal roller, which moves them through
the toner in the toner hopper.
Continue…
 Because they are negatively charged, the developer beads
collect the positive toner particles as they pass through. The
roller then brushes the beads past the drum assembly. The
electrostatic image has a stronger negative charge than the
developer beads, so the drum pulls the toner particles away.

In a lot of printers, the toner hopper, developer and


drum assembly are combined in one replaceable
cartridge.
Continue…
 The drum then moves over the paper, which has an
even stronger charge and so grabs the toner. After
collecting the toner, the paper is immediately discharged
by the detac corona wire. At this point, the only thing
keeping the toner on the page is gravity -- if you were to
blow on the page, you would completely lose the image.
The page must pass through the fuser to affix the toner.
The fuser rollers are heated by internal quartz tube
lamps, so the plastic in the toner melts as it passes
through.
 But what keeps the toner from collecting on the fuser
rolls, rather than sticking to the page? To keep this from
happening, the fuser rolls must be coated with Teflon,
the same non-stick material that keeps your breakfast
from sticking to the bottom of the frying pan.

Color Printers
 Initially, most commercial laser printers were limited to
monochrome printing (black writing on white paper). But color laser printers are now widely available as well.
Continue…
 Essentially, color printers work the same way as monochrome
printers, except they go through the entire printing process
four times -- one pass each for cyan (blue), magenta (red),
yellow and black. By combining these four colors of toner in
varying proportions, you can generate the full spectrum of
color.
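
To show how four toners can combine to approximate a full spectrum, here is the common textbook RGB-to-CMYK conversion as a short sketch. Real printers use far more sophisticated color management, so treat the formula and the sample colors as illustrative only.

```python
# Naive textbook conversion from an RGB color to CMYK toner proportions.

def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..255; returns proportions of cyan, magenta, yellow, black in 0..1."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0           # pure black uses only the black toner
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)                 # black component
    c = (1 - r_ - k) / (1 - k)              # remaining color supplied by cyan,
    m = (1 - g_ - k) / (1 - k)              # magenta and
    y = (1 - b_ - k) / (1 - k)              # yellow toner
    return c, m, y, k

for name, rgb in [("orange", (255, 165, 0)), ("teal", (0, 128, 128))]:
    c, m, y, k = rgb_to_cmyk(*rgb)
    print(f"{name}: C={c:.2f} M={m:.2f} Y={y:.2f} K={k:.2f}")
```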

Inside a color laser printer


Continue…
 There are several different ways of doing this. Some
models have four toner and developer units on a rotating
wheel. The printer lays down the electrostatic image for
one color and puts that toner unit into position. It then
applies this color to the paper and goes through the
process again for the next color. Some printers add all
four colors to a plate before placing the image on paper.

 Some more expensive printers actually have a complete


printer unit -- a laser assembly, a drum and a toner
system -- for each color. The paper simply moves past
the different drum heads, collecting all the colors in a
sort of assembly line.
Advantages of Laser printers

Advantages:
 Colour printing is possible

 Print quality is good

 Noiseless

 Printing speed is high

 Most models are relatively light weight and compact so

they don't take up too much space on the desk


Disadvantages of Laser printers

Disadvantages:
 May not be the printer of choice for everyone: the initial purchase cost is higher than that of an inkjet or dot matrix printer.
 Toner cartridges are expensive to replace, although they last for many pages.
 Cannot print on multipart (carbon-copy) forms, unlike impact printers.
 Colour laser printers are bulkier and more expensive than colour inkjets.
