Information Technology Fundamentals
Contents
1. Introduction
2. Random access memory (RAM)
3. Processors
4. The diagram of system board elements interaction
4.1. USB (Universal Serial Bus)
4.2. IEEE 1394 (FireWire, i-Link)
4.3. ATA (Advanced Technology Attachment)
4.4. SATA (Serial ATA)
4.5. PCI (Peripheral Component Interconnect)
4.6. PCI Express
4.7. System resources
4.8. Audio Codec Bus — Audio Codec (AC) Link
5. Form factors of motherboards
5.1. AT
5.2. LPX
5.3. ATX
5.4. BTX
5.5. NLX
5.6. WTX
5.7. FlexATX
STEP Computer Academy
1. Introduction
Today we are beginning to study the course «Information Technology Fundamentals», which will allow us to lay the foundation for further training in the field of IT.
The course is studied within the framework of cooperation between STEP Computer Academy and the Cisco educational project (Cisco Networking Academy). Cisco is the absolute leader in the market of network equipment and network solutions. The company actively supports educational establishments by offering them participation in the Cisco Academy project, which is aimed at the effective training of students getting an education in the field of network technology and IT. The training programs at Cisco Academy are designed to provide students with the skills required in the design, construction and maintenance of networks. Program materials are regularly updated, which allows the students to obtain relevant knowledge that meets today's requirements.
Under the project, Cisco Academy offers a number of courses, mainly related to network technology. You will study these courses if you choose the specialization «Network technology and system administration». However, Cisco Academy also offers a course that is not directly related to network technology and is aimed at forming students' general knowledge of IT: this course is called IT Essentials. We are now beginning to explore this course.
Lesson 1
Purpose of RAM.
Using the materials of Cisco, you have studied what memory is, why it is needed, and how the performance of the computer depends on its capacity. Let us consider general information about RAM.
Random access memory is the workspace of the computer's CPU; it stores programs and data while the computer is powered on. RAM is often seen as temporary storage because it holds data and programs only while the computer is on, until it is shut down or the reset button is pressed.
Before shutting down or pressing the reset button, all data changed during the session must be saved to a storage device that can store information permanently (usually a hard disk). When the power is on again, the stored information can be loaded back into memory.
Sometimes people confuse RAM with disk memory, since the capacity of both types of memory devices is expressed in the same units: mega- or gigabytes. Let us try to explain the connection between random access memory and disk memory using the following simple analogy.
Imagine a small office where a staff member handles information stored in a card file. In our example, the card file cabinet will serve as the system hard drive, where programs and data can be stored for a long period of time. The work table will represent the main memory of the system, with which the employee currently works; his actions are similar to the operation of the processor. He has direct access to any document on the table. However, before a specific document can appear on the table, it must be found in the cabinet. The more cabinets there are in the office, the more documents can be stored.
Classifications of memory.
ROM (Read Only Memory). The name of this memory type indicates that it is used for reading data only. ROM is also often called non-volatile memory because any data written to it is retained when the computer is powered off. Therefore, the commands that start the computer, i.e. the software that boots the system, are stored in ROM.
RAM (Random Access Memory). The peculiarity of this type of memory is that the data stored in any memory cell can be accessed at any time, as needed.
DRAM (Dynamic Random Access Memory). The memory cells in a DRAM chip are tiny capacitors that hold a charge. The bits are coded by the presence or absence of that charge.
SRAM (Static RAM). It is named so because, unlike dynamic RAM (DRAM), it does not require periodic refresh to retain its contents. However, this is not its only advantage: SRAM provides a higher speed of operation than DRAM and can operate at the same frequency as modern processors.
Now let us try to classify types of electronic memory. The
following classifications can be selected based on the structure
and principles of operation:
■■ Dynamic or static.
■■ Asynchronous or synchronous.
■■ Volatile or non-volatile.
Asynchronous memory.
Such memory exchanges data at a lower frequency than that of the bus on which it operates. A typical example is conventional DRAM, where each read operation required waiting about 5 bus cycles. As you can imagine, such memory has high latency; however, at the time when it was popular, the processors it was used with did not offer high performance either. Modified forms of asynchronous DRAM appeared later; they achieved lower latency, but this belongs to the history of memory development, and you will learn about it later in the course.
Synchronous memory.
This type of memory exchanges data with the controller at the same frequency at which the memory bus operates. A typical example is SDRAM, or Synchronous DRAM. The access time of such memory is determined by the frequency at which it operates. SDRAM efficiency is much higher than that of its predecessors, first of all because the read scheme of SDRAM is much more effective than that of older memory types. This allows a higher exchange speed in the pair «CPU — RAM». Finally, the third classification:
Volatile.
This type of memory requires a constant power supply to retain information, i.e. it works only while the computer is powered on. All the memory types described above are volatile.
Non-volatile.
Unlike the previous type, non-volatile memory can retain information even when there is no power supply. There are several types of non-volatile memory. A conventional representative is ROM, Read Only Memory. As its name suggests, this memory is designed for read operations only; the information in such chips was written at the production stage. Previously it was used to store the BIOS, but due to the impossibility of rewriting it was later replaced with programmable types of permanent memory. The latest of these types is EEPROM, an electrically erasable programmable memory, which we know as flash memory. Flash memory can retain information without refreshing for about 10 years and is now used not only as a chip storing the motherboard BIOS, but also in portable storage devices such as USB flash drives.
Today, volatile synchronous dynamic memory, called SDRAM (Synchronous Dynamic RAM), is used as the PC main memory. There have been several generations of such memory: SDRAM, DDR (Double Data Rate) SDRAM, DDR2 SDRAM, DDR3 SDRAM and DDR4 SDRAM. DDR2 (rarely), DDR3 and DDR4 SDRAM are commonly used these days.
Memory modules
Now let us consider how RAM is installed in computers. At first, PC memory came in chips mounted directly on the motherboard; that method satisfied both users and motherboard manufacturers up to a certain point, but later there was a need for a more flexible way of installing memory
3. Processors
First, let us define what a processor is and why we need it. The main task of the processor is executing commands and processing data; the processor receives both commands and data from the main memory, and it sends the results of its operation back to the main memory as well.
Clock rate
Processor performance depends largely on the clock rate, which today is usually measured in gigahertz (GHz). The clock rate is determined by the parameters of a quartz resonator, which is a quartz crystal mounted in a tin case. Under the influence of voltage, oscillations of electric current occur in the quartz crystal, at a rate defined by the shape and size of the crystal. The frequency of this current is called the clock rate. The smallest unit of time for the processor as a logic device is the period of the clock rate, or simply a clock cycle. The processor spends a certain number of cycles on each operation (command execution).
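The relationship between clock rate, cycles per operation, and work done per second can be sketched as a small calculation (the 4.77 MHz clock of the original IBM PC and the ~12-cycle average for the first x86 processors are illustrative figures; the function name is ours):

```python
# Sketch: average number of commands executed per second, given the
# clock rate and the average number of cycles per command.
def commands_per_second(clock_hz: float, cycles_per_command: float) -> float:
    return clock_hz / cycles_per_command

# e.g. a 4.77 MHz first-generation x86 processor at ~12 cycles per command:
print(commands_per_second(4.77e6, 12))  # 397500.0
```

Doubling the clock rate or halving the cycles per command has the same effect on this figure, which is why both are attacked by processor designers.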
Naturally, the higher the clock rate of the processor, the more efficiently it operates: more cycles take place and more commands are performed per unit of time. It is perfectly natural that newer processors run at ever higher clock rates (achieved, in particular, by improvements in manufacturing methods), showing better performance. But the clock rate is not the only factor determining the performance of the processor. After all, the number of cycles spent on command execution can also change. The first x86 processors spent about 12 cycles on average to perform a command; this figure is about 4.5 cycles for the 286 and 386, and about 2 cycles for the 486, while the average Pentium processor executes one command per clock cycle. Modern processors can execute multiple commands at the same time (due to the parallel execution of commands). The varying numbers of cycles that processors spend to execute commands make it difficult to compare them by clock rate alone. It is much easier to use the average number of operations performed per
CPU registers
Although the processor receives data from the main memory via a bus of a certain width, this does not mean that it can process data of the same size. Let us see how it happens.
Address bus.
The address bus is a set of conductors by means of which the address of the memory cell, to which or from which data is sent, is transferred to the memory controller. Each conductor transmits one bit of the address, corresponding to one digit of the address. Increasing the number of conductors (the bus width) used to form the address extends the number of addressable cells. The address bus width determines the maximum memory capacity addressable by the processor. As you may know, the binary system is used in computers. For example, if the address bus width were only one bit (one line for data transmission), only two values could be transferred (logic zero — no voltage; logic one — voltage present); thus, two memory locations could be addressed. Using two bits to specify the address would
allow for addressing four memory cells (00, 01, 10, 11 on the bus — four different addresses can be specified). In general, the number of different values taken by an n-bit binary number is equal to 2 to the power of n. Accordingly, with an address bus width of n bits, the number of different memory cells that can be addressed is 2 to the power of n; therefore, it is said that the processor supports 2 to the power of n bytes of RAM, or that the address space of the processor is 2 to the power of n bytes. For example, the 8086 processor has a 20-bit address bus; it can address 2 to the power of 20 = 1,048,576 bytes of main memory, i.e. 1 MB. Thus, the maximum memory capacity supported by the 8086 processor is 1 MB. The 286 processor has a 24-bit address bus, thus addressing 16 MB (note: each new bit in the address bus doubles the capacity of the addressable memory, which is natural if we remember the formula «memory capacity = 2 to the power of the bus width»). Modern processors have an address bus of at least 36 bits, which corresponds to 64 GB of supported main memory. However, there are processors that support a wider address bus.
Data buses and address buses are independent, so chip developers select their sizes at their own discretion. The sizes of these buses indicate the processor's capabilities: the width of the data bus determines how quickly the processor can exchange information, while the width of the address bus determines the memory capacity supported by the processor.
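The formula «memory capacity = 2 to the power of the bus width» can be checked with a short calculation; this sketch simply repeats the examples from the text:

```python
# Sketch: addressable memory capacity as a function of address bus width,
# following the formula "capacity = 2 ** bus_width" bytes.
def addressable_bytes(bus_width_bits: int) -> int:
    return 2 ** bus_width_bits

print(addressable_bytes(20))           # 1048576 bytes = 1 MB (8086, 20-bit bus)
print(addressable_bytes(24) // 2**20)  # 16 (MB, 286, 24-bit bus)
print(addressable_bytes(36) // 2**30)  # 64 (GB, 36-bit bus)
```

Note how each extra bit doubles the result, exactly as the text observes.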
Decoder
In fact, the executive units in all modern desktop x86 processors do not operate with standard x86 code. Each processor has its own «internal» command system, which has nothing in common with the commands (the «code») that come from outside. In general, the commands executed by the core are much simpler, more «primitive» than the commands of the x86 standard. To make the processor «look like» an x86 CPU, a unit called the decoder exists: it is responsible for transforming the «external» x86 code into the «internal» commands executed by the core (quite often, one x86 command is converted into several simpler «internal» commands). The decoder is a very important part of the modern processor: whether the stream of commands coming to the executive units will be constant depends on its performance. After all, the executive units are unable to work with x86 code, so their behavior (whether they will do something or stand idle) depends largely on the speed of the decoder.
Superscalar architecture
Superscalar architecture is the ability to run multiple machine
instructions per clock cycle. The advent of this technology has
led to a substantial increase in performance.
The main feature of all modern processors is that they are able to dispatch for execution not only the command which (according to the program code) should be performed at the current time, but also other commands following it. Consider a simple example. Suppose we need to execute the following commands:
(1) A = B + C
(2) Z = X + Y
(3) K = A + Z
It is easy to notice that commands (1) and (2) are completely independent of each other: they intersect neither in their initial data (variables B and C in the first case, X and Y in the second), nor in the placement of the result (variable A in the first case and Z in the second). So if we have more than one executive unit at the moment, these commands can be distributed among them and executed simultaneously rather than sequentially. Thus, if we take the execution time of each command as N clock cycles, executing the whole sequence would take N * 3 cycles, but only N * 2 cycles under parallel execution (command (3) cannot be executed without waiting for the results of the previous two commands).
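The dependency reasoning above can be sketched as a tiny scheduler that assigns each command to the earliest cycle group in which its source variables are ready. This is a simplified model with enough executive units; the function name is illustrative, not a real CPU algorithm:

```python
# Sketch: commands (1) and (2) share no variables, so they fall into the
# same cycle group; command (3) reads A and Z and must wait for both.
def schedule(commands):
    """commands: list of (destination, sources). Returns a cycle-group
    index per command; commands in the same group can run in parallel."""
    ready_at = {}   # variable -> group after which its value is ready
    groups = []
    for dest, sources in commands:
        group = max((ready_at.get(s, 0) for s in sources), default=0)
        groups.append(group)
        ready_at[dest] = group + 1
    return groups

prog = [("A", ("B", "C")), ("Z", ("X", "Y")), ("K", ("A", "Z"))]
print(schedule(prog))  # [0, 0, 1]: (1) and (2) in parallel, then (3)
```

Two cycle groups instead of three sequential steps: with each command taking N cycles, this is exactly the N * 2 versus N * 3 comparison from the text.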
Pipelined architecture.
Pipelined architecture (pipelining) was introduced into CPUs in order to enhance performance. Typically, executing each command requires a number of uniform operations, such as: fetching the instruction from RAM, decoding the command, addressing the operand in RAM, fetching the operand from RAM, executing the command, and writing the result to RAM. Each of these operations corresponds to a pipeline stage. Let us see how it happens.
(Figure: the pipeline — decoding of the command and of the data addresses takes place at the 2nd and 3rd stages.)
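An idealized model shows why pipelining helps: with S stages and no stalls, a new command enters the pipeline every cycle, so I commands finish in S + I - 1 cycles instead of the S * I cycles of strictly sequential execution. A sketch of this model (the numbers are illustrative):

```python
# Idealized pipeline timing: the first command takes S cycles to pass
# through all stages, and every following command completes one cycle later.
def pipeline_cycles(stages: int, instructions: int) -> int:
    return stages + instructions - 1

S = 6    # the six operations listed above, one pipeline stage each
I = 100
print(pipeline_cycles(S, I), S * I)  # 105 vs. 600 cycles
```

Real pipelines stall on dependencies and mispredicted branches, so the gain in practice is smaller, but the principle is the same.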
scheme works just great: for example, if the counter is set to 100 and the «operation threshold» of the branch prediction unit (N) is equal to two transitions in a row to the same address, it is easy to see that 97 of the 98 transitions will be predicted correctly!
Of course, despite the relatively high efficiency of simple algorithms, branch prediction mechanisms in modern CPUs are still constantly being improved and made more complex. However, the fight here is over single percentage points: for example, improving the efficiency of the branch prediction unit from 95 to 97 percent, or even from 97 percent to 99...
Branch prediction allows the execution of a program to be significantly accelerated, but there is a problem: if a prediction is wrong, the entire contents of the pipeline become invalid and have to be flushed. The longer the pipeline, the greater the time lost on flushing it; therefore, processor manufacturers constantly enhance the branch prediction mechanism.
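A minimal sketch of the counting scheme described above: predict «taken» only after N identical outcomes in a row. The function name and the exact counting are illustrative, not the real hardware algorithm:

```python
# Predict "taken" after n taken branches in a row; count correct guesses.
def predict_run(outcomes, n=2):
    """Return (correct predictions, total branches) for a run of outcomes."""
    correct = 0
    streak = 0                        # current run of taken branches
    for taken in outcomes:
        prediction = streak >= n      # predict taken after n in a row
        correct += prediction == taken
        streak = streak + 1 if taken else 0
    return correct, len(outcomes)

# A loop whose branch is taken 99 times and then falls through once:
history = [True] * 99 + [False]
print(predict_run(history))  # (97, 100): only warm-up and loop exit mispredict
```

Even this trivial scheme mispredicts only at the start of the run and at the final loop exit, which is why simple predictors already reach the high-90s accuracy mentioned in the text.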
Lookahead execution
Another important tool for optimizing command execution by the processor is lookahead execution. This technology involves beginning the execution of an instruction before all of its operands are available. All possible actions are performed, and the decoded instruction with one operand is placed in the execution unit, where it waits for the second operand coming from the other pipeline. Using this method, the processor is able to look through the commands in the waiting list and execute those commands which it will
Prefetch unit
Another technology that allows the performance of the processor to be increased is data prefetch. The objective of this unit is to pre-load the data that the processor will probably need soon. In its principle of operation, this tool is very similar to the branch prediction unit, with one difference: here we are talking about data rather than code. The general principle is the same: if the built-in RAM-access analysis circuit decides that a certain area of memory will soon be accessed, it issues a command to load this memory area into a special, very fast memory called the cache (we will talk about this memory later), before the executing program needs it.
A prefetch unit operating in a «smart» (effective) way can significantly reduce the access time to the necessary data and, therefore, increase the speed of program execution. An efficient prefetch unit also compensates for the high latency of the memory subsystem by loading the required data into the cache, thus hiding the delays of accessing data that turns out to be in the main memory rather than in the cache.
Of course, negative consequences are inevitable when the prefetch unit errs: by loading de facto «unnecessary» data into the cache, prefetch displaces other data from it (data which may be needed very soon). In addition, the «anticipated» read operations place an additional load on the memory controller (a load which, in the case of an error, is completely useless).
Cache memory
Another important component of the modern processor, or rather of the «processor — RAM» subsystem, is the cache memory.
Let us look at how information is exchanged between the CPU and the main memory. In most cases, RAM does not satisfy the bandwidth requirements of modern processors, since it operates at significantly lower frequencies. A modern processor operates at frequencies of about 3 GHz and, during an exchange with the memory, would wait quite a long time for new portions of data to arrive, and would therefore be idle. To avoid this, an additional small amount of very fast memory, which operates without delays at the frequency of the processor, is placed between the memory and the processor. This memory
Multi-processor systems
The idea of using multiple processors instead of one arose a long time ago. Guided by the principle «one head is good, two are better», processor manufacturers decided to find
Real mode
The original IBM PC used the 8088 processor, which could perform 16-bit instructions, using 16-bit internal registers, and address only 1 MB of memory, using a 20-bit address bus. All PC software was originally designed for this processor, on the basis of the 16-bit instruction set and a memory model with a capacity of 1 MB. For example, DOS: all DOS software is written on the basis of 16-bit instructions. Later processors, such as the 286, could also perform the same 16-bit instructions as the original 8088, but much faster.
Protected mode
The first 32-bit processor designed for the PC was the 386. This chip could perform an entirely new 32-bit instruction set. To take full advantage of the new command set, a 32-bit operating system and 32-bit applications were required. The new mode was called protected, because programs running in it are protected against other programs overwriting the memory areas they use.
This protection makes the system more reliable, because a program with errors will not affect other programs or the operating system. Considering that the development of new operating systems and applications taking advantage of the 32-bit protected mode would take some time, Intel provided a backward-compatible real mode in the 386. Due to this, the 386 processor could run conventional 16-bit applications and operating systems, and they were performed much faster than on any
Note that any program running in virtual real mode can access up to 1 MB of memory, and for each such program this appears to be the first and only megabyte of memory in the system. The virtual real window completely mimics the 8088 processor environment and, performance aside, software in virtual real mode runs as it would have run in real mode on the very first PCs. At the start of each 16-bit application, Windows 95/98 created a so-called DOS virtual machine and provided it with 1 MB of memory, and the 16-bit application ran on that machine. Note that all processors start operating in real mode at power-on, and switching to 32-bit mode occurs only when a 32-bit operating system is loaded.
(Figure: motherboard layout — CPU socket, ATX power socket, HDD connector, northbridge, southbridge.)
As long ago as 25 years back, the system boards (and the vast majority of other components) of personal computers were based on digital chips of small- and medium-scale integration (gates, flip-flops, registers, etc.). If you had dealt with XT/AT computers, you would have seen a system board with a hundred and fifty or two hundred integrated circuit (IC) packages.
pretty quickly that all the system controllers required for the operation of a modern computer could be «packaged» into a few chips with a high degree of integration. Since those chips were designed as sets intended for building motherboards, they became known as chipsets (sets of integrated circuits). Currently, chipsets play a determining role in the design and production of personal computers. If hundreds of chips were installed on the first system boards, as mentioned earlier, today their number rarely exceeds two dozen. The role of the system board chipset is so great that, as a rule, new varieties of chipsets must be designed whenever a new technology or processor appears.
Ultimately, the overall capabilities of a personal computer are largely determined by the chipset of the motherboard. The chipset forms the backbone of any computer system. The motherboard chipset consists of two components (which typically are independent chips connected to each other). These components are called the northbridge and the southbridge. The names north and south are historical: they indicate the location of the bridge relative to the PCI bus on the board diagram, with north above and south below. Why «bridge»? This name was given to the chips in accordance with the functions they perform: they serve to connect different buses and interfaces. The northbridge is of particular complexity for the designer, because it works with the fastest devices and so must operate very quickly, providing a fast and reliable connection between the processor, the memory, the PCI-E bus, and the southbridge.
The southbridge works with slower devices, such as hard drives, the USB bus, the PCI bus, etc.
one. You may ask: why is this empty space needed? Some manufacturers embed graphics cores in the northbridge precisely because of the unused space, and it also leaves room for the developers' further flights of fancy.
However, there are chipsets (called integrated chipsets) in which the north and south bridges are combined into a single chip; chipsets produced by nVidia may serve as an example.
(Figure: the southbridge (ICH) and the interfaces connected to it — USB, ATA, FireWire, SATA, audio, LAN, PCI, PCI-E 1x.)
Bus
A bus is a data transmission channel used jointly by the various units of the system. Information is transmitted over the bus in the form of groups of bits. The bus may provide a separate line for each bit of a word (a parallel bus), or all bits of the word may use a single line sequentially in time (a serial bus). The figure shows a typical connection of devices to the data bus.
Many receivers can be connected to the bus. The combination of control and address signals determines for whom the data on the bus is meant. The control logic issues special gating signals to indicate to the recipient when it should receive the data.
(Figure: the processor, RAM, ROM, and input/output devices connected to the data bus through input and output buffers.)
USB 1.1
Specifications:
■■ two rates of exchange:
■■ high rate of exchange — 12 Mbit/s
■■ low rate of exchange — 1.5 Mbit/s
■■ the maximum cable length for the high rate of exchange — 5 m
■■ the maximum cable length for the low rate of exchange — 3 m
■■ the maximum number of connected devices (including replicators) — 127
■■ the ability to connect devices with different rates of exchange
■■ power supply voltage for peripherals — 5 V
■■ the maximum current consumption per device — 500 mA
USB 2.0
USB 2.0 differs from USB 1.1 only in higher speed and small modifications in the data transmission protocol for the Hi-speed mode (480 Mbit/s).
There are three speed modes in the operation of USB 2.0:
Low-speed, 10–1500 Kbit/s (used for interactive devices: keyboards, mice, joysticks)
Full-speed, 0.5–12 Mbit/s (audio/video devices)
Hi-speed, 25–480 Mbit/s (video devices, storage media)
In fact, although the speed of USB 2.0 can theoretically reach 480 Mbit/s, devices such as hard disks, and storage media in general, never reach such speed on the bus in reality, although they could sustain it. This is explained by the fairly large latency of the USB bus between a request for data transmission and the actual start of the transmission.
USB 3.0
USB 3.0 is the third version of the USB bus. It is backward-compatible with USB 2.0 and USB 1.1 and provides a new data transmission mode, SuperSpeed, as well.
USB 3.0 connectors can be distinguished from older versions by their blue color-marking and the initials SS. The companies engaged in the creation of USB 3.0 were Intel, Microsoft, Hewlett-Packard, Texas Instruments, NEC and NXP Semiconductors.
USB 3.0 can transmit data at a peak throughput of up to 5 Gbit/s.
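The peak rates quoted above translate into ideal transfer times as follows. This is a sketch that deliberately ignores the protocol overhead and bus latency mentioned for USB 2.0, so real transfers are slower:

```python
# Sketch: ideal time to move a 1 GB file at the peak rates quoted above.
SPEEDS_MBIT = {
    "USB 1.1 (12 Mbit/s)": 12,
    "USB 2.0 Hi-speed (480 Mbit/s)": 480,
    "USB 3.0 SuperSpeed (5 Gbit/s)": 5000,
}

def transfer_seconds(size_bytes: int, mbit_per_s: float) -> float:
    return size_bytes * 8 / (mbit_per_s * 1_000_000)

size = 1_000_000_000  # 1 GB
for name, speed in SPEEDS_MBIT.items():
    print(f"{name}: {transfer_seconds(size, speed):.1f} s")
```

Roughly 11 minutes, 17 seconds, and under 2 seconds respectively, which makes the generational jumps tangible.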
IEEE 1394
In late 1995, IEEE adopted a standard under serial number 1394. In Sony digital cameras, the IEEE 1394 interface appeared before the adoption of the standard and was called iLink.
The interface was originally positioned for the transmission of video streams. It also proved well suited for external storage manufacturers, providing high throughput for modern high-speed drives. Today, many motherboards, as well as almost all modern laptop models, support this interface.
The data transmission rates are 100, 200 and 400 Mbit/s, with a cable length of up to 4.5 m.
IEEE 1394a
In 2000, the IEEE 1394a standard was approved. A series of enhancements was made in order to improve the compatibility of devices.
A waiting time of 1/3 of a second before a bus reset was introduced, lasting until the end of the transient process of establishing a reliable connection or disconnection of devices.
IEEE 1394b
In 2002, the IEEE 1394b standard emerged with new speeds: S800 (800 Mbit/s) and S1600 (1600 Mbit/s). The maximum cable length was also increased: up to 50, 70, and even 100 meters when using high-quality fiber-optic cables.
The corresponding devices are denoted FireWire 800 and FireWire 1600, depending on the maximum speed.
The cables and connectors used also changed. To achieve maximum rates at maximum distances, the use of optics was introduced: plastic optics for lengths up to 50 meters, glass optics for lengths up to 100 meters.
Despite the change in connectors, the standards remained compatible, which could be achieved by using adapters.
On December 12, 2007, the S3200 specification, with a maximum speed of up to 3.2 Gbit/s, was released.
IEEE 1394.1
In 2004, the IEEE 1394.1 standard came out. That standard was adopted to enable building large-scale networks, and it dramatically increased the number of connectable devices to a giant number: 64,449.
IEEE 1394c
Introduced in 2006, 1394c can be used with Cat 5e cable from Ethernet. It can run in parallel with Gigabit Ethernet, i.e. two logical networks, independent from each other, can operate on a single cable. The maximum declared length is 100 m; the maximum speed corresponds to S800 — 800 Mbit/s.
A setup called cable select was described as optional in
the ATA-1 specification and became widespread starting with
ATA-5, because it eliminated the need to rearrange jumpers
on the drives at every reconnection. If a drive is set to cable
select mode, it is automatically configured as a master or
a slave depending on its position on the cable. For this to
work, the cable must support cable select: in such a cable,
contact 28 (CSEL) is not connected at one of the connectors
(the gray one, usually in the middle). The controller grounds
that contact. If the drive sees that the contact is grounded
(i.e. reads logical 0 on it), it configures itself as the master;
otherwise (high-impedance state), it configures itself as the slave.
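The role-selection logic described above can be sketched as a small model (the function name is hypothetical; real drives implement this in firmware by sampling the CSEL line):

```python
def drive_role(csel_grounded: bool) -> str:
    """Cable-select mode: a drive samples contact 28 (CSEL).

    The controller grounds CSEL, but on a cable-select cable the line
    is broken before one of the connectors, so only one drive sees it
    grounded (logical 0); the other sees a high-impedance line.
    """
    return "master" if csel_grounded else "slave"

# Connector whose CSEL line is intact: drive reads logical 0 -> master.
print(drive_role(csel_grounded=True))   # master
# Connector where CSEL is disconnected -> slave.
print(drive_role(csel_grounded=False))  # slave
```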
80-conductor cables, designed for UDMA4, are free of these
shortcomings. The master is now always at the end of the cable,
so if only one device is connected, no unnecessary stub of
cable is left hanging. They have «factory» cable select: it is
implemented inside the connector simply by omitting the contact.
Since 80-conductor cables required their own connectors in any
case, their widespread adoption posed no great problem. The
standard also requires connectors of different colors for easy
identification by both manufacturers and assemblers: the blue
connector is intended for connection to the controller, the
black one to the master, and the gray one to the slave.
The terms «master» and «slave» were borrowed from
industrial electronics (where this principle is widely used
for interaction between nodes and devices), but they are not
accurate in this case and are therefore not used in the current
version of the standard.
SATA/150
Initially, the SATA standard provided bus operation at a
frequency of 1.5 GHz, giving a throughput of about 1.2 Gb/s
(150 MB/s). (The 20% loss is due to the 8B/10B coding
system, in which 2 service bits accompany every 8 bits of
useful information.) The throughput of SATA/150 is slightly
higher than that of the Ultra ATA bus (UDMA/133). The main
advantage of SATA over PATA is the use of a serial bus instead
of a parallel one. Although the serial transfer method is
inherently slower than the parallel one, here that is compensated
by the ability to operate at higher frequencies, thanks to the
greater noise immunity of the cable. This is achieved by
(1) using fewer conductors and (2) combining the information
conductors into two twisted pairs, screened by grounded
shielding conductors.
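The 20% figure is easy to verify: the 1.5 GHz line carries 10 line bits for every 8 bits of payload, so only 8/10 of the raw rate is useful data.

```python
line_rate_bits = 1.5e9                   # SATA/150 line rate, bits per second
payload_bits = line_rate_bits * 8 / 10   # 8B/10B: 8 data bits per 10 line bits

print(payload_bits / 1e9)       # 1.2   (Gb/s of useful data)
print(payload_bits / 8 / 1e6)   # 150.0 (MB/s)
```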
SATA/300
The SATA/300 standard operates at a frequency of 3 GHz
and provides a bandwidth of up to 2.4 Gb/s (300 MB/s). It was
first implemented in the controller of the nForce 4 chipset
produced by NVIDIA. Quite often, the SATA/300 standard is
called SATA II or SATA 3.0. [1] In theory, SATA/150 and
SATA/300 devices should be compatible (both a SATA/300
controller with a SATA/150 device, and a SATA/150 controller
with a SATA/300 device) through downward rate matching.
However, some devices and controllers require the mode to be
set manually (for example, Seagate HDDs that support SATA/300
provide a special jumper for forced activation of the
SATA/150 mode).
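The downward rate matching mentioned above amounts to the link settling on the fastest generation both ends support. A minimal sketch (the function is hypothetical; real negotiation happens in the PHY during out-of-band signaling):

```python
def negotiate_link_rate(controller_max_mb: int, device_max_mb: int) -> int:
    """Pick the fastest rate common to both ends, in MB/s."""
    return min(controller_max_mb, device_max_mb)

print(negotiate_link_rate(300, 150))  # 150: SATA/300 controller, SATA/150 device
print(negotiate_link_rate(150, 300))  # 150: SATA/150 controller, SATA/300 device
print(negotiate_link_rate(300, 300))  # 300: both ends support SATA/300
```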
eSATA
eSATA (External SATA) is an interface for connecting external
devices that supports «hot swapping» (hot-plug). It was created
later than SATA (in mid-2004).
Key features of eSATA:
■■ Connectors are less fragile and are designed for a larger
number of connection cycles. Two leads are needed for
connection: a data cable and a power cable.
■■ The length of the data cable is limited (to about 2
meters).
■■ The average operational data rate is higher than that of
IEEE 1394 or USB.
■■ The CPU is loaded significantly less.
Figures: the SATA power connector, the SATA data bus connector, and an example of a SATA controller.
PCI Express slots x4, x16, x1, another x16, and a standard
32-bit PCI slot at the bottom, on the DFI LanParty
nForce4 SLI-DR motherboard.
5.1. AT
The AT form factor comes in two different-sized
modifications: AT and Baby AT. A full-size AT motherboard
is up to 12" wide, which means such a board is unlikely to fit
in most of today's cases. Assembling such a board is further
complicated by the drive bay (hard drive included) and the
power supply. In addition, placing board components at large
distances from each other can cause problems when operating
at high frequencies. For these reasons, this size became
uncommon after motherboards for the 386 CPU were released.
Thus, the only AT form factor motherboards still available
are those of the Baby AT format. A Baby AT board is 8.5" wide
and 13" long. In principle, some manufacturers may reduce the
length of the board to save material or for other reasons. For
mounting, the board is provided with three rows of holes.
5.2. LPX
Even before ATX, the first attempts to reduce the cost of
the PC resulted in the creation of the LPX form factor. It was
intended for use in Slimline or Low-profile cases. The problem
was solved by a rather innovative approach: the introduction
of a riser card. Instead of being inserted directly into the
motherboard, expansion cards are plugged into an upright riser
connected to the board and sit parallel to the motherboard.
This made it possible to significantly reduce the height of
the case, since the height of the expansion cards usually
determines this parameter. The price of such compactness was
the maximum number of connectable cards: 2–3. Another
innovation that LPX boards began to use widely was chips
integrated directly on the motherboard. The board size for
LPX is 9 x 13'', for Mini LPX 8 x 10''.
After NLX appeared, it gradually began to replace LPX.
5.3. ATX
It is no surprise that the ATX form factor has become so
popular in all its versions, and no one can say that this
popularity is unjustified. The ATX specification, proposed
by Intel in 1995, was aimed precisely at correcting the
shortcomings that had emerged in the AT form factor over
time. The solution was, in fact, very simple: turn the Baby
AT board by 90 degrees and make the necessary enhancements
to the design. By that time, Intel already had experience in
this area with the LPX form factor. ATX embodies the best
features of both Baby AT and LPX: extensibility is taken from
Baby AT, and high integration of components from LPX. Here
is what resulted:
microATX
The ATX form factor was designed in the heyday of
Socket 7 systems, which explains why much of it does not
correspond to modern tendencies. For example, the typical
combination of slots on which the specification was based
looked like 3 ISA / 3 PCI / 1 shared. It sounds quite
irrelevant today, doesn't it? ISA, the absence of AGP, AMR,
etc. In any case, all 7 slots are not used in 99% of cases,
5.4. BTX
Under the abbreviation BTX (Balanced Technology
Extended) hides a new standard providing numerous important
enhancements to cases and components. The main objective
The air pipe directs air through the CPU cooler (thermal
module).
5.5. NLX
With time, the LPX specification, like Baby AT, ceased to
meet the requirements of the day. New processors and new
technologies appeared, and LPX could no longer provide
acceptable spatial and thermal conditions for new low-profile
systems. As a result, just as ATX came to replace Baby AT,
in 1997 the NLX specification appeared as a development of
the LPX ideas that took the emergence of new technologies
into account. The format is aimed at use in low-profile cases.
During its development, both technical factors (for example,
the appearance of AGP and DIMM modules, and the integration
of audio/video components on the motherboard) and the need
for greater ease of maintenance were taken into account. Thus,
no screwdriver at all is required to assemble or disassemble
many systems based on this form factor.
The only thing to add is that, according to the specification,
some places on the board must be left free, providing
opportunities to expand functionality in future versions of the
specification: for example, to create motherboards for servers
and workstations based on the NLX form factor.
5.6. WTX
On the other hand, the AT and ATX specifications do not
fully satisfy powerful workstations and servers either. There,
the problems are of a different kind, and price does not play
the leading role. Issues such as ensuring proper cooling,
accommodating large amounts of memory, convenient support
for multiprocessor configurations, large power supply capacity,
and room for a greater number of storage controller ports and
I/O ports came to the forefront. In 1998, the WTX specification
was presented. It was oriented toward supporting dual-processor
motherboards of any configuration, as well as current and
future video card and memory technologies.
Particular attention should probably be paid to two new
components: the Board Adapter Plate (BAP) and the Flex Slot.
In this specification, the developers attempted to move
away from the conventional model in which the motherboard
is mounted to the case by means of mounting holes placed in
specific locations. Instead, the board is attached to the BAP,
with the manner of fastening left to the motherboard
manufacturer, and the standardized BAP is attached to the case.
Besides the usual things, such as the board size (14 x
16.75'') and the power supply characteristics (up to 850 W),
the WTX specification describes the Flex Slot architecture,
in a sense an AMR for workstations. Flex Slot is designed to
improve ease of maintenance, give developers more flexibility,
and reduce a motherboard's time to market. A Flex Slot card
looks like this:
Any PCI, SCSI, or IEEE 1394 controllers, sound and network
interfaces, parallel and serial ports, USB, and means for
monitoring the state of the system can be placed on such cards.
5.7. FlexATX
Finally, the FlexATX form factor appeared, much as the
ideas embodied in Baby AT and LPX gave rise to ATX: as a
development of the microATX and NLX specifications. It is not
even a separate specification, but a supplement to the microATX
specification. Looking at the success of the iMac, which in
fact offered nothing new except its appearance, PC producers
decided to go the same route. In February at the Intel
Developer Forum, Intel was the first to present FlexATX, a
motherboard whose area is 25–30% less than that of microATX.
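The 25–30% claim can be checked against the commonly cited maximum board sizes (an assumption on my part, not stated in the text: microATX up to 9.6 x 9.6 inches, FlexATX up to 9.0 x 7.5 inches):

```python
microatx_area = 9.6 * 9.6   # square inches, assumed maximum microATX size
flexatx_area = 9.0 * 7.5    # square inches, assumed maximum FlexATX size

reduction = 1 - flexatx_area / microatx_area
print(round(reduction * 100))  # 27, i.e. within the quoted 25-30% range
```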
In theory, a FlexATX motherboard can be used in cases
that comply with ATX 2.03 or microATX 1.0, with some
modifications. However, there are enough boards for today's
conventional cases; the point is rather those fanciful plastic
designs that require such compactness. At the Intel Developer
Forum, Intel also demonstrated several case options. The
designers' fantasy ran quite wild: vases, pyramids, trees,
spirals, a wide range of different designs was proposed. Here
are some phrases from the specification to deepen the
impression: «aesthetic value», «greater satisfaction from
system possession». Quite a description for a PC motherboard
form factor, isn't it?