
COMPUTER

A device that receives, processes, and presents information. The two basic types of
computers are analog and digital. Although generally not regarded as such, the most
prevalent computer is the simple mechanical analog computer, in which gears, levers,
ratchets, and pawls perform mathematical operations—for example, the speedometer and
the watt-hour meter (used to measure accumulated electrical usage). The general public
has become much more aware of the digital computer with the rapid proliferation of the
hand-held calculator and a large variety of intelligent devices and especially with
exposure to the Internet and the World Wide Web. See also Calculators; Internet; World
Wide Web.

An analog computer uses inputs that are proportional to the instantaneous value of
variable quantities, combines these inputs in a predetermined way, and produces outputs
that are a continuously varying function of the inputs and the processing. These outputs
are then displayed or connected to another device to cause action, as in the case of a
speed governor or other control device. Small electronic analog computers are frequently
used as components in control systems. If the analog computer is built solely for one
purpose, it is termed a special-purpose electronic analog computer. In any analog
computer the key concepts involve special versus general-purpose computer designs, and
the technology utilized to construct the computer itself, mechanical or electronic. See
also Analog computer.
In contrast, a digital computer uses symbolic representations of its variables. The
arithmetic unit is constructed to follow the rules of one (or more) number systems.
Further, the digital computer uses individual discrete states to represent the digits of the
number system chosen. A digital computer can easily store and manipulate numbers,
letters, images, sounds, or graphical information represented by a symbolic code.
Through the use of the stored program, the digital computer achieves a degree of
flexibility unequaled by any other computing or data-processing device.

The advent of the relatively inexpensive and readily available personal computer, and the
combination of the computer and communications, such as by the use of networks, have
dramatically expanded computer applications. The most common application now is
probably text and word processing, followed by electronic mail. See also Electronic mail;
Local-area networks; Microcomputer; Word processing.

Computers have begun to encounter the barrier that the speed of light imposes on achieving
higher speeds. This has led to research and development in the areas of parallel
computers (in order to accomplish more in parallel rather than by serial computation) and
distributed computers (taking advantage of network connections to spread the work
around, thus achieving more parallelism). Continuing demand for more processing power
has led to significant changes in computer hardware and software architectures, both to
increase the speed of basic operations and to reduce the overall processing time. See also
Computer systems architecture; Concurrent processing; Distributed systems (computers);
Multiprocessing; Supercomputer.

Machine capable of executing instructions to perform operations on data. The
distinguishing feature of a computer is its ability to store its own instructions. This ability
makes it possible for a computer to perform many operations without the need for a
person to enter new instructions each time. Modern computers are made of high-speed
electronic components that enable the computer to perform thousands of operations each
second.

Programmable machine that can store, retrieve, and process data. Today's
computers have at least one CPU that performs most calculations and includes a main
memory, a control unit, and an arithmetic logic unit. Increasingly, personal computers
contain specialized graphic processors, with dedicated memory, for handling the
computations needed to display complex graphics, such as for three-dimensional
simulations and games. Auxiliary data storage is usually provided by an internal hard disk
and may be supplemented by other media such as floppy disks or CD-ROMs. Peripheral
equipment includes input devices (e.g., keyboard, mouse) and output devices (e.g.,
monitor, printer), as well as the circuitry and cabling that connect all the components.
Generations of computers are characterized by their technology. First-generation digital
computers, developed mostly in the U.S. after World War II, used vacuum tubes and were
enormous. Second-generation computers, introduced c. 1960, used transistors and were the first
successful commercial computers. Third-generation computers (late 1960s and 1970s)
were characterized by miniaturization of components and use of integrated circuits. The
microprocessor chip, introduced in 1974, defines fourth-generation computers.

Any device capable of carrying out a sequence of operations in a defined manner. The
definition of the operations is called the program. An analog computer performs
computations by manipulating continuous physical variables, such as voltage and time. A
digital computer operates on discrete quantities, most often represented as ‘on-off’,
indicating whether the value of a binary variable is 0 or 1. Numbers and information are
then represented by the binary system. Philosophically the excitement generated by
computers has been in exploring the extent to which mental operations are well-
represented as computations. See also artificial intelligence, Chinese room,
connectionism, Turing machine, von Neumann machine.

A device capable of performing a
series of arithmetic or logical operations. A computer is distinguished from a calculating
machine, such as an electronic calculator, by being able to store a computer program (so
that it can repeat its operations and make logical decisions), by the number and
complexity of the operations it can perform, and by its ability to process, store, and
retrieve data without human intervention. Computers developed along two separate
engineering paths, producing two distinct types of computer—analog and digital. An
analog computer operates on continuously varying data; a digital computer performs
operations on discrete data.

Computers are categorized by both size and the number of people who can use them
concurrently. Supercomputers are sophisticated machines designed to perform complex
calculations at maximum speed; they are used to model very large dynamic systems, such
as weather patterns. Mainframes, the largest and most powerful general-purpose systems,
are designed to meet the computing needs of a large organization by serving hundreds of
computer terminals at the same time. Minicomputers, though somewhat smaller, also are
multiuser computers, intended to meet the needs of a small company by serving up to a
hundred terminals. Microcomputers, computers powered by a microprocessor, are
subdivided into personal computers and workstations, the latter typically incorporating
RISC processors. Although microcomputers were originally single-user computers, the
distinction between them and minicomputers has blurred as microprocessors have
become more powerful. Linking multiple microcomputers together through a local area
network or by joining multiple microprocessors together in a parallel-processing system
has enabled smaller systems to perform tasks once reserved for mainframes, and the
techniques of grid computing have enabled computer scientists to utilize the unemployed
processing power of connected computers.

Advances in the technology of integrated circuits have spurred the development of
smaller and more powerful general-purpose digital computers. Not only has this reduced
the size of the large, multi-user mainframe computers—which in their early years were
large enough to walk through—to that of large pieces of furniture, but it has also made
possible powerful, single-user personal computers and workstations that can sit on a
desktop. These, because of their relatively low cost and versatility, have largely replaced
typewriters in the workplace and rendered the analog computer largely obsolete.

Analog Computers

An analog computer represents data as physical quantities and operates on the data by
manipulating the quantities. It is designed to process data in which the variable quantities
vary continuously (see analog circuit); it translates the relationships between the variables
of a problem into analogous relationships between electrical quantities, such as current
and voltage, and solves the original problem by solving the equivalent problem, or
analog, that is set up in its electrical circuits. Because of this feature, analog computers
were especially useful in the simulation and evaluation of dynamic situations, such as the
flight of a space capsule or the changing weather patterns over a certain area. The key
component of the analog computer is the operational amplifier, and the computer's
capacity is determined by the number of amplifiers it contains (often over 100). Although
analog computers are commonly found in such forms as speedometers and watt-hour
meters, they largely have been made obsolete for general-purpose mathematical
computations and data storage by digital computers.

Digital Computers

A digital computer is designed to process data in numerical form (see digital circuit); its
circuits perform directly the mathematical operations of addition, subtraction,
multiplication, and division. The numbers operated on by a digital computer are
expressed in the binary system; binary digits, or bits, are 0 and 1, so that 0, 1, 10, 11, 100,
101, etc., correspond to 0, 1, 2, 3, 4, 5, etc. Binary digits are easily expressed in the
computer circuitry by the presence (1) or absence (0) of a current or voltage. A series of
eight consecutive bits is called a “byte”; the eight-bit byte permits 256 different “on-off”
combinations. Each byte can thus represent one of up to 256 alphanumeric characters,
and such an arrangement is called a “single-byte character set” (SBCS); the de facto
standard for this representation is the extended ASCII character set. Some languages,
such as Japanese, Chinese, and Korean, require more than 256 unique symbols. The use
of two bytes, or 16 bits, for each symbol, however, permits the representation of up to
65,536 characters or ideographs. Such an arrangement is called a “double-byte character
set” (DBCS); Unicode is the international standard for such a character set. One or more
bytes, depending on the computer's architecture, is sometimes called a digital word; it
may specify not only the magnitude of the number in question, but also its sign (positive
or negative), and may also contain redundant bits that allow automatic detection, and in
some cases correction, of certain errors (see code; information theory). A digital computer
can store the results of its calculations for later use, can compare results with other data,
and on the basis of such comparisons can change the series of operations it performs.
Digital computers are used for reservations systems, scientific investigation, data-
processing and word-processing applications, desktop publishing, electronic games, and
many other purposes.
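The relationship between bits, bytes, and character sets can be made concrete with a short sketch in Python (the language and the sample characters are chosen here purely for illustration; they are not part of the encyclopedia text):

# A byte of 8 bits has 2**8 = 256 possible values, enough for a single-byte
# character set (SBCS) such as extended ASCII.
print(2 ** 8)                      # 256
print(ord('A'))                    # 65 -- 'A' fits comfortably in one byte
print('A'.encode('latin-1'))       # b'A' -- stored as a single byte

# An ideograph needs a code point above 255, so a double-byte character set
# (DBCS) or a Unicode encoding is required.
print(ord('中'))                    # 20013 -- too large for one byte
print('中'.encode('utf-16-be'))     # b'N-' -- two bytes, 0x4E and 0x2D
print(2 ** 16)                     # 65536 combinations available with two bytes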

Processing of Data

The operations of a digital computer are carried out by logic circuits, which are digital
circuits whose single output is determined by the conditions of the inputs, usually two or
more. The various circuits processing data in the computer's interior must operate in a
highly synchronized manner; this is accomplished by controlling them with a very stable
oscillator, which acts as the computer's “clock.” Typical computer clock rates range from
several million cycles per second to several hundred million, with some of the fastest
computers having clock rates of about a billion cycles per second. Operating at these
speeds, digital computer circuits are capable of performing thousands to trillions of
arithmetic or logic operations per second, thus permitting the rapid solution of problems
that would be impossible for a human to solve by hand. In addition to the arithmetic and
logic circuitry and a small number of registers (storage locations that can be accessed
faster than main storage and are used to hold the intermediate results of calculations), the
heart of the computer—called the central processing unit, or CPU—contains the circuitry
that decodes the set of instructions, or program, and causes it to be executed.

Storage and Retrieval of Data

Associated with the central processing unit is the storage unit, or memory, where results
or other data are stored for periods of time ranging from a small fraction of a second to
days or weeks before being retrieved for further processing. Once made up of vacuum
tubes and later of small doughnut-shaped ferromagnetic cores strung on a wire matrix,
main storage now consists of integrated circuits, each of which contains thousands of
semiconductor devices. Where each vacuum tube or core represented one bit and the total
memory of the computer was measured in thousands of bytes (or kilobytes, KB), each
integrated circuit now holds millions of bytes (or megabytes, MB) and the total
memory of mainframe computers is measured in billions of bytes (or gigabytes, GB).
Random-access memory (RAM), which both can be read from and written to, is lost each
time the computer is turned off. Read-only memory (ROM), which cannot be written to,
maintains its content at all times and is used to store the computer's control information.

Programs and data that are not currently being used in main storage can be saved on
auxiliary storage, or external storage. Although punched paper tape and punched cards
once served this purpose, the major materials used today are magnetic tape and magnetic
disks, which can be read from and written to, and two types of optical disks, the compact
disc (CD) and its successor the digital versatile disc (DVD). DVD is an improved optical
storage technology capable of storing vastly greater amounts of data than the CD
technology. CD–Read-Only Memory (CD-ROM) and DVD–Read-Only Memory (DVD-
ROM) disks can only be read—the disks are impressed with data at the factory but once
written cannot be erased and rewritten with new data. The latter part of the 1990s saw the
introduction of new optical storage technologies: CD-Recordable (CD-R) and DVD-
Recordable (DVD-R), optical disks that can be written to by the computer to create a CD-
ROM or DVD-ROM, but can be written to only once; and CD-ReWritable (CD-RW),
DVD-ReWritable (DVD-RW and DVD+RW), and DVD–Random Access Memory
(DVD-RAM), disks that can be written to multiple times.

When compared to semiconductor memory, magnetic and optical storage is less
expensive, is not volatile (i.e., data is not lost when the power to the computer is shut
off), and provides a convenient way to transfer data from one computer to another. Thus
operating instructions or data output from one computer can be stored away from the
computer and then retrieved either by the same computer or another. In a system using
magnetic tape the information is stored by a specially designed tape recorder somewhat
similar to one used for recording sound. In magnetic and optical disk systems the
principle is the same except that the magnetic or optical medium lies in a path, or track,
on the surface of a disk. The disk drive also contains a motor to spin the disk and a
magnetic or optical head or heads to read and write the data to the disk. Drives take
several forms, the most significant difference being whether the disk can be removed
from the drive assembly.

Removable magnetic disks are most commonly made of mylar enclosed in a paper or
plastic holder. These floppy disks have varying capacities, with very high density disks
holding 250 MB—more than enough to contain a dozen books the size of Tolstoy's Anna
Karenina. Compact discs can hold many hundreds of megabytes, and are used, for
example, to store the information contained in an entire multivolume encyclopedia or set
of reference works, and DVD disks can hold ten times as much as that. Nonremovable
disks are made of metal and arranged in spaced layers. They can hold more data and can
read and write data much faster than floppies.

Data are entered into the computer and the processed data made available via
input/output devices. All auxiliary storage devices are used as input/output devices. For
many years, the most popular input/output medium was the punched card. Although this
is still used, the most popular input device is now the computer terminal and the most
popular output device is the high-speed printer. Human beings can directly communicate
with the computer through computer terminals, entering instructions and data by means
of keyboards much like the ones on typewriters, by using a pointing device such as a
mouse, trackball, or touchpad, or by speaking into a microphone that is connected to
a computer running voice-recognition software. Responses may be displayed on a cathode-
ray tube, liquid-crystal display, or printer. The CPU, main storage, auxiliary storage, and
input/output devices collectively make up a system.

Sharing the Computer's Resources

Generally, the slowest operations that a computer must perform are those of transferring
data, particularly when data is received from or delivered to a human being. The
computer's central processor is idle for much of this period, and so two similar techniques
are employed to make fuller use of its power.

Time sharing, used on large computers, allows several users at different terminals to use a
single computer at the same time. The computer performs part of a task for one user, then
suspends that task to do part of another for another user, and so on. Each user only has
the computer's use for a fraction of the time, but the task switching is so rapid that most
users are not aware of it. Most of the tens of millions of computers in the world are stand-
alone, single-user devices known variously as personal computers or workstations. For
them, multitasking involves the same type of switching, but for a single user. This
permits a user, for example, to have one file printed and another sorted while editing a
third in a word-processing session. Such personal computers can also be linked together
in a network, where each computer is connected to others, usually by wires or coaxial
cables, permitting all to share resources such as printers, modems, and hard-disk storage
devices.
Computer Programs and Programming Languages

Before a computer can be used to solve a given problem, it must first be programmed,
that is, prepared for solving the problem by being given a set of instructions, or program.
The various programs by which a computer controls aspects of its operations, such as
those for translating data from one form to another, are known as software, as contrasted
with hardware, which is the physical equipment comprising the installation. In most
computers the moment-to-moment control of the machine resides in a special software
program called an operating system, or supervisor. Other forms of software include
assemblers and compilers for programming languages and applications for business and
home use (see computer program). Software is of great importance; the usefulness of a
highly sophisticated array of hardware can be severely compromised by the lack of
adequate software.

Each instruction in the program may be a simple, single step, telling the computer to
perform some arithmetic operation, to read the data from some given location in the
memory, to compare two numbers, or to take some other action. The program is entered
into the computer's memory exactly as if it were data, and on activation, the machine is
directed to treat this material in the memory as instructions. Other data may then be read
in and the computer can carry out the program to solve the particular problem.

Since computers are designed to operate with binary numbers, all data and instructions
must be represented in this form; the machine language, in which the computer operates
internally, consists of the various binary codes that define instructions together with the
formats in which the instructions are written. Since it is time-consuming and tedious for a
programmer to work in actual machine language, a programming language, or high-level
language, designed for the programmer's convenience, is used for the writing of most
programs. The computer is programmed to translate this high-level language into
machine language and then solve the original problem for which the program was
written. Certain high-level programming languages are universal, varying little from
machine to machine.

Development of Computers

Although the development of digital computers is rooted in the abacus and early
mechanical calculating devices, Charles Babbage is credited with the design of the first
modern computer, the “analytical engine,” during the 1830s. American scientist Vannevar
Bush built a mechanically operated device, called a differential analyzer, in 1930; it was
the first general-purpose analog computer. John Atanasoff constructed the first
semielectronic digital computing device in 1939.

The first fully automatic calculator was the Mark I, or Automatic Sequence Controlled
Calculator, begun in 1939 at Harvard by Howard Aiken, while the first all-purpose
electronic digital computer, ENIAC (Electronic Numerical Integrator And Calculator),
which used thousands of vacuum tubes, was completed in 1946 at the Univ. of
Pennsylvania. UNIVAC (UNIVersal Automatic Computer) became (1951) the first
computer to handle both numeric and alphabetic data with equal facility; this was the first
commercially available computer.

First-generation computers were supplanted by the transistorized computers (see
transistor) of the late 1950s and early 60s, second-generation machines that were smaller,
used less power, and could perform a million operations per second. They, in turn, were
replaced by the third-generation integrated-circuit machines of the mid-1960s and 1970s
that were even smaller and were far more reliable. The 1980s and 90s were characterized
by the development of the microprocessor and the evolution of increasingly smaller but
powerful computers, such as the personal computer and personal digital assistant, which
ushered in a period of rapid growth in the computer industry.

A computer is a machine that manipulates data according to a list of instructions.

The first devices that resemble modern computers date to the mid-20th century (1940–
1945), although the computer concept and various machines similar to computers existed
earlier. Early electronic computers were the size of a large room, consuming as much
power as several hundred modern personal computers (PCs).[1] Modern computers are
based on tiny integrated circuits and are millions to billions of times more capable while
occupying a fraction of the space.[2] Today, simple computers may be made small enough
to fit into a wristwatch and be powered from a watch battery. Personal computers, in
various forms, are icons of the Information Age and are what most people think of as "a
computer"; however, the most common form of computer in use today is the embedded
computer. Embedded computers are small, simple devices that are used to control other
devices — for example, they may be found in machines ranging from fighter aircraft to
industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers
extremely versatile and distinguishes them from calculators. The Church–Turing thesis is
a mathematical statement of this versatility: any computer with a certain minimum
capability is, in principle, capable of performing the same tasks that any other computer
can perform. Therefore, computers with capability and complexity ranging from that of a
personal digital assistant to a supercomputer are all able to perform the same
computational tasks given enough time and storage capacity.

It is difficult to identify any
one device as the earliest computer, partly because the term "computer" has been subject
to varying interpretations over time. Originally, the term "computer" referred to a person
who performed numerical calculations (a human computer), often with the aid of a
mechanical calculating device.

The history of the modern computer begins with two separate technologies - that of
automated calculation and that of programmability.

Examples of early mechanical calculating devices included the abacus, the slide rule and
arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100
BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a
play lasting 10 minutes and was operated by a complex system of ropes and drums that
might be considered to be a means of deciding which parts of the mechanism performed
which actions and when.[3] This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to


be the earliest programmable analog computer.[4] It displayed the zodiac, the solar and
lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing
automatic doors to open every hour,[5][6] and five robotic musicians who played music when
struck by levers operated by a camshaft attached to a water wheel. The length of day and
night could be re-programmed every day in order to account for the changing lengths of
day and night throughout the year.[4]

The end of the Middle Ages saw a re-invigoration of European mathematics and
engineering, and Wilhelm Schickard's 1623 device was the first of a number of
mechanical calculators constructed by European engineers. However, none of those
devices fit the modern definition of a computer because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a
series of punched paper cards as a template to allow his loom to weave intricate patterns
automatically. The resulting Jacquard loom was an important step in the development of
computers because the use of punched cards to define woven patterns can be viewed as
an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first
recognizable computers. In 1837, Charles Babbage was the first to conceptualize and
design a fully programmable mechanical computer that he called "The Analytical
Engine".[7] Due to limited finances, and an inability to resist tinkering with the design,
Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S.
Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured
by the Computing Tabulating Recording Corporation, which later became IBM. By the
end of the 19th century a number of technologies that would later prove useful in the
realization of practical computers had begun to appear: the punched card, Boolean
algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by
increasingly sophisticated analog computers, which used a direct mechanical or electrical
model of the problem as a basis for computation. However, these were not programmable
and generally lacked the versatility and accuracy of modern digital computers.

A succession of steadily more powerful and flexible computing devices were constructed
in the 1930s and 1940s, gradually adding the key features that are seen in modern
computers. The use of digital electronics (largely invented by Claude Shannon in 1937)
and more flexible programmability were vitally important steps, but defining one point
along this road as "the first digital electronic computer" is difficult (Shannon 1940).
Notable achievements include:

• Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first
working machine featuring binary arithmetic, including floating point arithmetic
and a measure of programmability. In 1998 the Z3 was proved to be Turing
complete, therefore being the world's first operational computer.
• The non-programmable Atanasoff–Berry Computer (1941) which used vacuum
tube based computation, binary numbers, and regenerative capacitor memory.
• The secret British Colossus computers (1943)[8], which had limited
programmability but demonstrated that a device using thousands of tubes could be
reasonably reliable and electronically reprogrammable. It was used for breaking
German wartime codes.
• The Harvard Mark I (1944), a large-scale electromechanical computer with
limited programmability.
• The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used
decimal arithmetic and is sometimes called the first general purpose electronic
computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of
electronics). Initially, however, ENIAC had an inflexible architecture which
essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible
and elegant design, which came to be known as the "stored program architecture" or von
Neumann architecture. This design was first formally described by John von Neumann in
the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of
projects to develop computers based on the stored-program architecture commenced
around this time, the first of these being completed in Great Britain. The first to be
demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or
"Baby"), while the EDSAC, completed a year after SSEM, was the first practical
implementation of the stored program design. Shortly thereafter, the machine originally
described by von Neumann's paper—EDVAC—was completed but did not see full-time
use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture,
making it the single trait by which the word "computer" is now defined. While the
technologies used in computers have changed dramatically since the first electronic,
general-purpose computers of the 1940s, most still use the von Neumann architecture.

Computers that used vacuum tubes as their electronic elements were in use throughout
the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based
electronics, which are smaller, faster, cheaper to produce, require less power, and are
more reliable. In the 1970s, integrated circuit technology and the subsequent creation of
microprocessors, such as the Intel 4004, further decreased size and cost and further
increased speed and reliability of computers. By the 1980s, computers became
sufficiently small and cheap to replace simple mechanical controls in domestic appliances
such as washing machines. The 1980s also witnessed home computers and the now
ubiquitous personal computer. With the evolution of the Internet, personal computers are
becoming as common as the television and the telephone in the household.

The defining
feature of modern computers which distinguishes them from all other machines is that
they can be programmed. That is to say that a list of instructions (the program) can be
given to the computer and it will store them and carry them out at some time in the
future.

In most cases, computer instructions are simple: add one number to another, move some
data from one location to another, send a message to some external device, etc. These
instructions are read from the computer's memory and are generally carried out
(executed) in the order they were given. However, there are usually specialized
instructions to tell the computer to jump ahead or backwards to some other place in the
program and to carry on executing from there. These are called "jump" instructions (or
branches). Furthermore, jump instructions may be made to happen conditionally so that
different sequences of instructions may be used depending on the result of some previous
calculation or some external event. Many computers directly support subroutines by
providing a type of jump that "remembers" the location it jumped from and another
instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read
each word and line in sequence, they may at times jump back to an earlier place in the
text or skip sections that are not of interest. Similarly, a computer may sometimes go
back and repeat the instructions in some section of the program over and over again until
some internal condition is met. This is called the flow of control within the program and
it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic
operation such as adding two numbers with just a few button presses. But to add together
all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of
time—with a near certainty of making a mistake. On the other hand, a computer may be
programmed to do this with just a few simple instructions. For example:

mov #0,sum ; set sum to 0
mov #1,num ; set num to 1
loop: add num,sum ; add num to sum
add #1,num ; add 1 to num
cmp num,#1000 ; compare num to 1000
ble loop ; if num <= 1000, go back to 'loop'
halt ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task
without further human intervention. It will almost never make a mistake and a modern
PC can complete the task in about a millionth of a second.[9]

However, computers cannot "think" for themselves in the sense that they only solve
problems in exactly the way they are programmed to. An intelligent human faced with the
above addition task might soon realize that instead of actually adding up all the numbers
one can simply use the equation 1 + 2 + 3 + ... + 1000 = (1000 × 1001) / 2
and arrive at the correct answer (500,500) with little work.[10] In other words, a computer
programmed to add up the numbers one by one as in the example above would do exactly
that without regard to efficiency or alternative solutions.
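The contrast can be sketched in Python (an illustrative rendering only, not part of the original example): the literal loop mirrors the machine-language program above, while the closed-form expression captures the shortcut a person might notice.

# Literal approach: add the numbers 1 to 1,000 one by one, as the program above does.
total = 0
num = 1
while num <= 1000:
    total += num
    num += 1
print(total)                 # 500500

# Insightful approach: apply the formula n * (n + 1) / 2 in a single step.
n = 1000
print(n * (n + 1) // 2)      # 500500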

Programs

In practical terms, a computer program may run from just a few instructions to many
millions of instructions, as in a program for a word processor or a web browser. A typical
modern computer can execute billions of instructions per second (gigahertz or GHz) and
rarely make a mistake over many years of operation. Large computer programs
comprising several million instructions may take teams of programmers years to write,
so it is highly unlikely that the entire program has been written without error.

Errors in computer programs are called "bugs". Bugs may be benign and not affect the
usefulness of the program, or have only subtle effects. But in some cases they may cause
the program to "hang" - become unresponsive to input such as mouse clicks or
keystrokes, or to completely fail or "crash". Otherwise benign bugs may sometimes
be harnessed for malicious intent by an unscrupulous user writing an "exploit" - code
designed to take advantage of a bug and disrupt a program's proper execution. Bugs are
usually not the fault of the computer. Since computers merely execute the instructions
they are given, bugs are nearly always the result of programmer error or an oversight
made in the program's design.[11]

In most computers, individual instructions are stored as machine code with each
instruction being given a unique number (its operation code or opcode for short). The
command to add two numbers together would have one opcode, the command to multiply
them would have a different opcode and so on. The simplest computers are able to
perform any of a handful of different instructions; the more complex computers have
several hundred to choose from—each with a unique numerical code. Since the
computer's memory is able to store numbers, it can also store the instruction codes. This
leads to the important fact that entire programs (which are just lists of instructions) can be
represented as lists of numbers and can themselves be manipulated inside the computer
just as if they were numeric data. The fundamental concept of storing programs in the
computer's memory alongside the data they operate on is the crux of the von Neumann,
or stored program, architecture. In some cases, a computer might store some or all of its
program in memory that is kept separate from the data it operates on. This is called the
Harvard architecture after the Harvard Mark I computer. Modern von Neumann
computers display some traits of the Harvard architecture in their designs, such as in CPU
caches.
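A minimal sketch in Python can make the stored-program idea concrete. The opcode numbers, memory layout, and three-field instruction format below are invented for illustration and do not correspond to any real machine.

# A toy von Neumann machine: instructions and data live in the same memory.
LOAD, ADD, STORE, HALT = 1, 2, 3, 4           # hypothetical numeric opcodes

memory = [
    [LOAD, 100, 0],        # address 0: copy the value at address 100 into the accumulator
    [ADD, 101, 0],         # address 1: add the value at address 101 to the accumulator
    [STORE, 102, 0],       # address 2: write the accumulator to address 102
    [HALT, 0, 0],          # address 3: stop
] + [0] * 96 + [7, 35, 0]                     # addresses 100-102 hold the data

accumulator = 0
pc = 0                                        # program counter
while True:
    opcode, operand, _ = memory[pc]           # fetch and decode the next instruction
    pc += 1
    if opcode == LOAD:
        accumulator = memory[operand]
    elif opcode == ADD:
        accumulator += memory[operand]
    elif opcode == STORE:
        memory[operand] = accumulator
    elif opcode == HALT:
        break

print(memory[102])                            # 42 -- the program changed its own memory

Because the instructions are themselves just lists of numbers held in memory, the program could in principle read or overwrite them exactly as it reads and overwrites its data, which is the essence of the stored-program architecture.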

While it is possible to write computer programs as long lists of numbers (machine
language) and this technique was used with many early computers,[12] it is extremely
tedious to do so in practice, especially for complicated programs. Instead, each basic
instruction can be given a short name that is indicative of its function and easy to
remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are
collectively known as a computer's assembly language. Converting programs written in
assembly language into something the computer can actually understand (machine
language) is usually done by a computer program called an assembler. Machine
languages and the assembly languages that represent them (collectively termed low-level
programming languages) tend to be unique to a particular type of computer. For instance,
an ARM architecture computer (such as may be found in a PDA or a hand-held
videogame) cannot understand the machine language of an Intel Pentium or the AMD
Athlon 64 computer that might be in a PC.[13]
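As a rough sketch of what an assembler does, the following Python fragment translates made-up mnemonics into made-up numeric opcodes; the instruction set is hypothetical and far simpler than any real assembly language.

# Hypothetical opcode table: mnemonic -> numeric operation code.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 4}

def assemble(source):
    # Turn lines such as "ADD 101" into numeric [opcode, operand] pairs.
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code.append([OPCODES[mnemonic], operand])
    return machine_code

program = "LOAD 100\nADD 101\nSTORE 102\nHALT"
print(assemble(program))     # [[1, 100], [2, 101], [3, 102], [4, 0]]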
Though considerably easier than in machine language, writing long programs in assembly
language is often difficult and error prone. Therefore, most complicated programs are
written in more abstract high-level programming languages that are able to express the
needs of the computer programmer more conveniently (and thereby help reduce
programmer error). High level languages are usually "compiled" into machine language
(or sometimes into assembly language and then into machine language) using another
computer program called a compiler.[14] Since high level languages are more abstract than
assembly language, it is possible to use different compilers to translate the same high
level language program into the machine language of many different types of computer.
This is part of the means by which software like video games may be made available for
different computer architectures such as personal computers and various video game
consoles.

The task of developing large software systems is an immense intellectual effort.
Producing software with an acceptably high reliability on a predictable schedule and
budget has proved historically to be a great challenge; the academic and professional
discipline of software engineering concentrates specifically on this problem.

Example

Suppose a computer is being employed to drive a traffic light. A simple stored program
might say:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. Jump to instruction number (2)

With this set of instructions, the computer would cycle the light continually through red,
green, yellow and back to red again until told to stop running the program.

However, suppose there is a simple on/off switch connected to the computer that is
intended to be used to make the light flash red while some maintenance operation is
being performed. The program might then instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11)
over and over, or it is running the instructions from (11) down to (16) over and over,
depending on the position of the switch.[15]
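A rough Python rendering of the second program is sketched below. The functions read_maintenance_switch and set_light are stand-ins for whatever hardware interface a real traffic-light controller would provide; they are assumptions made only so that the sketch is self-contained.

import time

def read_maintenance_switch():
    # Stand-in for sampling the real on/off switch; always "off" in this sketch.
    return False

def set_light(colour, on):
    # Stand-in for driving the real lamps; here it just reports what would happen.
    print(colour, "on" if on else "off")

for colour in ("red", "green", "yellow"):
    set_light(colour, False)              # turn off all of the lights
while True:
    if not read_maintenance_switch():     # normal cycle while the switch is off
        set_light("red", True)
        time.sleep(60)
        set_light("red", False)
        set_light("green", True)
        time.sleep(60)
        set_light("green", False)
        set_light("yellow", True)
        time.sleep(2)
        set_light("yellow", False)
    else:                                 # flash red while the switch is on
        set_light("red", True)
        time.sleep(1)
        set_light("red", False)
        time.sleep(1)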

How computers work

A general purpose computer has four main sections: the arithmetic and logic unit (ALU),
the control unit, the memory, and the input and output devices (collectively termed I/O).
These parts are interconnected by buses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked
with these) are collectively known as a central processing unit (CPU). Early CPUs were
composed of many separate components but since the mid-1970s CPUs have typically
been constructed on a single integrated circuit called a microprocessor.

Control unit

The control unit (often called a control system or central controller) directs the various
components of a computer. It reads and interprets (decodes) instructions in the program
one by one. The control system decodes each instruction and turns it into a series of
control signals that operate the other parts of the computer.[16] Control systems in
advanced computers may change the order of some instructions so as to improve
performance.

A key component common to all CPUs is the program counter, a special memory cell (a
register) that keeps track of which location in memory the next instruction is to be read
from.[17]
The control system's function is as follows—note that this is a simplified description, and
some of these steps may be performed concurrently or in a different order depending on
the type of CPU:

1. Read the code for the next instruction from the cell indicated by the program
counter.
2. Decode the numerical code for the instruction into a set of commands or signals
for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps
from an input device). The location of this required data is typically stored within
the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct
the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or
perhaps an output device.
8. Jump back to step (1).
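The steps above can be condensed into a short Python sketch of a fetch-decode-execute loop. The four-instruction machine it simulates is invented for illustration; real CPUs wire this cycle into hardware rather than software.

# Invented instructions: ("SET", register, value), ("DEC", register),
# ("JNZ", register, address) and ("HALT",).
program = [
    ("SET", "r0", 3),       # 0: r0 = 3
    ("DEC", "r0"),          # 1: r0 = r0 - 1
    ("JNZ", "r0", 1),       # 2: if r0 is not zero, jump back to instruction 1
    ("HALT",),              # 3: stop
]

registers = {"r0": 0}
pc = 0                                        # program counter
while True:
    instruction = program[pc]                 # 1. read the next instruction
    opcode = instruction[0]                   # 2. decode it
    pc += 1                                   # 3. increment the program counter
    if opcode == "SET":                       # 4-7. fetch operands, execute, write back
        registers[instruction[1]] = instruction[2]
    elif opcode == "DEC":
        registers[instruction[1]] -= 1
    elif opcode == "JNZ" and registers[instruction[1]] != 0:
        pc = instruction[2]                   # a jump simply overwrites the program counter
    elif opcode == "HALT":
        break                                 # stop; otherwise the cycle repeats from step 1

print(registers["r0"])                        # 0 -- the DEC/JNZ pair ran three times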

Since the program counter is (conceptually) just another set of memory cells, it can be
changed by calculations done in the ALU. Adding 100 to the program counter would
cause the next instruction to be read from a place 100 locations further down the
program. Instructions that modify the program counter are often known as "jumps" and
allow for loops (instructions that are repeated by the computer) and often conditional
instruction execution (both examples of control flow).

It is noticeable that the sequence of operations that the control unit goes through to
process an instruction is in itself like a short computer program - and indeed, in some
more complex CPU designs, there is another yet smaller computer called a
microsequencer that runs a microcode program that causes all of these events to happen.

Arithmetic logic unit (ALU)

The set of arithmetic operations that a particular ALU supports may be limited to adding
and subtracting or might include multiplying or dividing, trigonometry functions (sine,
cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst
others use floating point to represent real numbers—albeit with limited precision.
However, any computer that is capable of performing just the simplest operations can be
programmed to break down the more complex operations into simple steps that it can
perform. Therefore, any computer can be programmed to perform any arithmetic
operation—although it will take more time to do so if its ALU does not directly support
the operation. An ALU may also compare numbers and return boolean truth values (true
or false) depending on whether one is equal to, greater than or less than the other ("is 64
greater than 65?").

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful
both for creating complicated conditional statements and processing boolean logic.
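Their behaviour can be shown with a brief Python sketch (illustrative only), first on single truth values and then bit by bit across whole binary numbers:

a, b = True, False
print(a and b)            # AND: true only when both inputs are true -> False
print(a or b)             # OR:  true when at least one input is true -> True
print(a != b)             # XOR: true when the inputs differ          -> True
print(not a)              # NOT: inverts its single input             -> False

x, y = 0b1100, 0b1010
print(bin(x & y))         # 0b1000
print(bin(x | y))         # 0b1110
print(bin(x ^ y))         # 0b110
print(bin(~x & 0b1111))   # 0b11  (NOT, truncated to four bits)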

Superscalar computers contain multiple ALUs so that they can process several
instructions at the same time. Graphics processors and computers with SIMD and MIMD
features often provide ALUs that can perform arithmetic on vectors and matrices.

Memory

A computer's memory can be viewed as a list of cells into which numbers can be placed
or read. Each cell has a numbered "address" and can store a single number. The computer
can be instructed to "put the number 123 into the cell numbered 1357" or to "add the
number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell
1595". The information stored in memory may represent practically anything. Letters,
numbers, even computer instructions can be placed into memory with equal ease. Since
the CPU does not differentiate between different types of information, it is up to the
software to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in
groups of eight bits (called a byte). Each byte is able to represent 256 different numbers;
either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes
may be used (typically, two, four or eight). When negative numbers are required, they are
usually stored in two's complement notation. Other arrangements are possible, but are
usually not seen outside of specialized applications or historical contexts. A computer can
store any kind of information in memory as long as it can be somehow represented in
numerical form. Modern computers have billions or even trillions of bytes of memory.

The CPU contains a special set of memory cells called registers that can be read and
written to much more rapidly than the main memory area. There are typically between
two and one hundred registers depending on the type of CPU. Registers are used for the
most frequently needed data items to avoid having to access main memory every time
data is needed. Since data is constantly being worked on, reducing the need to access
main memory (which is often slow compared to the ALU and control units) greatly
increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or
RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU
commands it, but ROM is pre-loaded with data and software that never changes, so the
CPU can only read from it. ROM is typically used to store the computer's initial start-up
instructions. In general, the contents of RAM are erased when the power to the computer is
turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a
specialized program called the BIOS that orchestrates loading the computer's operating
system from the hard disk drive into RAM whenever the computer is turned on or reset.
In embedded computers, which frequently do not have disk drives, all of the software
required to perform the task may be stored in ROM. Software that is stored in ROM is
often called firmware because it is notionally more like hardware than software. Flash
memory blurs the distinction between ROM and RAM by retaining data when turned off
but being rewritable like RAM. However, flash memory is typically much slower than
conventional ROM and RAM so its use is restricted to applications where high speeds are
not required.[18]

In more sophisticated computers there may be one or more RAM cache memories which
are slower than registers but faster than main memory. Generally computers with this sort
of cache are designed to move frequently needed data into the cache automatically, often
without the need for any intervention on the programmer's part.

Input/output (I/O)

I/O is the means by which a computer receives information from the outside world and
sends results back. Devices that provide input or output to the computer are called
peripherals. On a typical personal computer, peripherals include input devices like the
keyboard and mouse, and output devices such as the display and printer. Hard disk drives,
floppy disk drives and optical disc drives serve as both input and output devices.
Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and
memory. A graphics processing unit might contain fifty or more tiny computers that
perform the calculations necessary to display 3D graphics. Modern desktop
computers contain many smaller computers that assist the main CPU in performing I/O.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main
memory, in some systems it is necessary to give the appearance of running several
programs simultaneously. This is achieved by having the computer switch rapidly
between running each program in turn. One means by which this is done is with a special
signal called an interrupt which can periodically cause the computer to stop executing
instructions where it was and do something else instead. By remembering where it was
executing prior to the interrupt, the computer can return to that task later. If several
programs are running "at the same time", then the interrupt generator might be causing
several hundred interrupts per second, causing a program switch each time. Since modern
computers typically execute instructions several orders of magnitude faster than human
perception, it may appear that many programs are running at the same time even though
only one is ever executing in any given instant. This method of multitasking is sometimes
termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of cheap computers, the principal use for multitasking was to allow many
people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several
programs to run more slowly - in direct proportion to the number of programs it is
running. However, most programs spend much of their time waiting for slow input/output
devices to complete their tasks. If a program is waiting for the user to click on the mouse
or press a key on the keyboard, then it will not take a "time slice" until the event it is
waiting for has occurred. This frees up time for other programs to execute so that many
programs may be run at the same time without unacceptable speed loss.

Multiprocessing

Some computers may divide their work between two or more separate CPUs, creating a
multiprocessing configuration. Traditionally, this technique was utilized only in large and
powerful computers such as supercomputers, mainframe computers and servers.
However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit)
personal and laptop computers have become widely available and are beginning to see
increased usage in lower-end markets as a result.
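In a language such as Python, dividing work between processors might look like the sketch below, which uses the standard multiprocessing module; the squaring function is an arbitrary stand-in for real work.

from multiprocessing import Pool, cpu_count

def square(n):
    # A stand-in for a genuinely expensive unit of work.
    return n * n

if __name__ == "__main__":
    print("CPUs available:", cpu_count())
    with Pool() as pool:                        # one worker process per CPU by default
        results = pool.map(square, range(10))   # the inputs are divided among the workers
    print(results)                              # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]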

Supercomputers in particular often have architectures that differ significantly
from the basic stored-program architecture and from general-purpose
computers.[19] They often feature thousands of CPUs, customized high-speed
interconnects, and specialized computing hardware. Such designs tend to be useful only
for specialized tasks due to the large scale of program organization required to
successfully utilize most of the available resources at once. Supercomputers usually see
usage in large-scale simulation, graphics rendering, and cryptography applications, as
well as with other so-called "embarrassingly parallel" tasks.

Computers have been used to coordinate information between multiple locations since
the 1950s. The U.S. military's SAGE system was the first large-scale example of such a
system, which led to a number of special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the United States
began to link their computers together using telecommunications technology. This effort
was funded by ARPA (now DARPA), and the computer network that it produced was
called the ARPANET. The technologies that made the ARPANET possible spread and
evolved. In time, the network spread beyond academic and military institutions and
became known as the Internet. The emergence of networking involved a redefinition of
the nature and boundaries of the computer. Computer operating systems and applications
were modified to include the ability to define and access the resources of other computers
on the network, such as peripheral devices, stored information, and the like, as extensions
of the resources of an individual computer. Initially these facilities were available
primarily to people working in high-tech environments, but in the 1990s the spread of
applications like e-mail and the World Wide Web, combined with the development of
cheap, fast networking technologies like Ethernet and ADSL saw computer networking
become almost ubiquitous. In fact, the number of computers that are networked is
growing phenomenally. A very large proportion of personal computers regularly connect
to the Internet to communicate and receive information. "Wireless" networking, often
utilizing mobile phone networks, has meant networking is becoming increasingly
ubiquitous even in mobile computing environments.
