
Factors that will improve the performance of computers in the future:

The following factors, discussed below, are expected to improve the performance of computers in the future.

1. Multi-core Processor:


It’s not hard to predict that next-generation processors will have
smaller feature sizes, more transistors on each square millimeter of silicon, and do more clever
things, such as powering down under-used sections even between keystrokes. They will
also have many cores or CPUs.

1.1.Introduction:
Multi-core processor is a single computing component with two or more
independent actual central processing units (called "cores"), which are the units that read
and execute program instructions. The instructions are ordinary CPU instructions such as
add, move data, and branch, but the multiple cores can run multiple instructions at the
same time, increasing overall speed for programs amenable to parallel computing.
Manufacturers typically integrate the cores onto a single integrated circuit die (known as
a chip multiprocessor or CMP), or onto multiple dies in a single chip package. Processors
were originally developed with only one core. A dual-core processor has two cores (e.g.
AMD Phenom II X2, Intel Core Duo), a quad-core processor contains four cores, a hexa-
core processor contains six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme
Edition 980X), an octo-core processor or octa-core processor contains eight cores (e.g.
Intel Xeon E7- 2820, AMD FX-8350), a deca-core processor contains ten cores (e.g. Intel
Xeon E7-2850).A multi-core processor implements multiprocessing in a single physical
package. Designers may couple cores in a multi-core device tightly or loosely. For
example, cores may or may not share caches, and they may implement message passing
or shared memory inter-core communication methods. Common network topologies to
interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous
multi-core systems include only identical cores; heterogeneous multi-core systems have
cores that are not identical. Just as with single-processor systems, cores in multi-core
systems may implement architectures such as superscalar, VLIW, vector processing,
SIMD, or Multithreading.
Multi-core processors are widely used across many application domains including
general-purpose, embedded, network, digital signal processing (DSP), and graphics. The
improvement in performance gained by the use of a multi-core processor depends very
much on the software algorithms used and their implementation. In particular, possible
gains are limited by the fraction of the software that can be run in parallel simultaneously
on multiple cores; this effect is described by Amdahl's law. In the best case, so-called

embarrassingly parallel problems may realize speedup factors near the number of cores,
or even more if the problem is split up enough to fit within each core's cache(s), avoiding
use of much slower main system memory. Most applications, however, are not
accelerated so much unless programmers invest a prohibitive amount of effort in
refactoring the whole problem.
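Amdahl's law can be sketched numerically. The following minimal Python illustration is not from the original text; the parallel fractions chosen are illustrative assumptions:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup when only part of a program parallelizes (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even with 8 cores, a program that is only 50% parallel barely doubles in speed,
# while a 95%-parallel program approaches a 6x speedup.
print(round(amdahl_speedup(0.50, 8), 2))   # 1.78
print(round(amdahl_speedup(0.95, 8), 2))   # 5.93
```

The serial fraction dominates as core counts grow, which is why refactoring for parallelism matters more than adding cores.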

1.2.Advantages:

 The proximity of multiple CPU cores on the same die allows the cache coherency
circuitry to operate at a much higher clock-rate than is possible if the signals have to
travel off-chip. Combining equivalent CPUs on a single die significantly improves the
performance of cache snoop (bus snooping) operations. Put simply, this
means that signals between different CPUs travel shorter distances, and therefore those
signals degrade less. These higher-quality signals allow more data to be sent in a given
time period, since individual signals can be shorter and do not need to be repeated as
often.

 Multi-core chips also allow higher performance at lower energy. This can be a big factor
in mobile devices that operate on batteries. Since each core in a multi-core design is generally
more energy-efficient, the chip becomes more efficient than one with a single large
monolithic core. This allows higher performance with less energy. The challenge of
writing effective parallel code can, however, offset this benefit.

1.3.Multi-Core Challenges:
Having multiple cores on a single chip gives rise to some
problems and challenges. Power and temperature management are two concerns that can
increase exponentially with the addition of multiple cores. Memory/cache coherence is
another challenge, since all designs discussed above have distributed L1 and in some
cases L2 caches which must be coordinated. And finally, using a multi-core processor to
its full potential is another issue. If programmers do not write applications that take
advantage of multiple cores there is no gain, and in some cases there is a loss of
performance. Applications need to be written so that different parts can be run
concurrently (without any ties to another part of the application that is being run
simultaneously).

1.4.Cache Coherence:
Cache coherence is a concern in a multi-core environment because of
distributed L1 and L2 cache. Since each core has its own cache, the copy of the data in
that cache may not always be the most up-to-date version. For example, imagine a dual-
core processor where each core brought a block of memory into its private cache. One
core writes a value to a specific location; when the second core attempts to read that

value from its cache it will not have the updated copy unless its cache entry is invalidated
and a cache miss occurs. This cache miss forces the second core's cache entry to be
updated. If this coherence policy were not in place, garbage data would be read and invalid
results would be produced, possibly crashing the program or the entire computer.
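The write-invalidate behaviour described above can be sketched in plain Python. This is a toy model, not a real MSI/MESI protocol implementation; the class and variable names are invented for illustration:

```python
class Cache:
    """Toy private cache for one core in a write-invalidate scheme."""
    def __init__(self):
        self.lines = {}  # address -> cached value

    def read(self, memory, addr):
        if addr not in self.lines:       # cache miss: fetch from main memory
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def write(self, memory, addr, value, other_caches):
        self.lines[addr] = value
        memory[addr] = value             # write-through, for simplicity
        for cache in other_caches:       # invalidate stale copies in other cores
            cache.lines.pop(addr, None)

memory = {0x10: 1}
core0, core1 = Cache(), Cache()
core1.read(memory, 0x10)                 # core 1 caches the old value (1)
core0.write(memory, 0x10, 42, [core1])   # core 0's write invalidates core 1's line
print(core1.read(memory, 0x10))          # the forced miss re-fetches: prints 42
```

Without the invalidation step in `write`, core 1 would keep returning the stale value 1, which is exactly the "garbage data" scenario the text warns about.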

1.5.Multithreading:
The last, and most important, issue is using multithreading or other
parallel processing techniques to get the most performance out of the multi-core
processor. “With the possible exception of Java, there are no widely used commercial
development languages with [multithreaded] extensions.” Rebuilding applications to be
multithreaded means a complete rework by programmers in most cases.
Programmers have to write applications with subroutines able to be run in different cores,
meaning that data dependencies will have to be resolved or accounted for (e.g. latency in
communication or using a shared cache). Applications should be balanced. If one core is
being used much more than another, the programmer is not taking full advantage of the
multi-core system. Some companies have heard the call and designed new products with
multi-core capabilities; Microsoft's and Apple's newest operating systems can run on up to
4 cores.
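As a sketch of the kind of restructuring involved, Python's standard `concurrent.futures` module can run independent subproblems concurrently. The work-splitting shown here is an illustrative assumption, not a recipe from the text:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """An independent subproblem: no data dependency on any other chunk."""
    return sum(x * x for x in chunk)

data = list(range(1_000_000))
chunks = [data[i::4] for i in range(4)]   # split into 4 independent slices

# Each chunk is balanced and shares no state with the others, so the chunks can
# run concurrently. (In CPython, CPU-bound work needs ProcessPoolExecutor to
# bypass the GIL; ThreadPoolExecutor is shown here for brevity.)
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(x * x for x in data))  # True: matches the serial result
```

Note how the example also addresses balance: the striped slices give each worker roughly equal work, so no core is used much more than another.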

1.6.Starvation:
If a program is not developed correctly for use on a multi-core processor, one or
more of the cores may starve for data. This would be seen if a single-threaded application
is run in a multi-core system. The thread would simply run in one of the cores while the
other cores sat idle. This is an extreme case, but illustrates the problem.

1.7.Interconnect:
Another important feature with an impact on the performance of multi-core chips is
the communication among different on-chip components: cores, caches, and, if integrated,
memory controllers and network controllers. Initial designs used a bus, as in traditional
multi-CPU systems. The trend now is to use a crossbar or other advanced mechanisms to
reduce latency and contention. For instance, AMD CPUs employ a crossbar, and the
Tilera TILE64 implements a fast non-blocking multilink mesh. However, the
interconnect can become expensive: an 8 × 8 crossbar on-chip can consume as much area
as five cores and as much power as two cores. With only private caches on-chip, data
exchange between threads running on different cores historically necessitated using the
off-chip interconnect. Shared on-chip caches naturally support data exchange among
threads running on different cores. Thus, introducing a level of shared cache on chip
(commonly L2 or, as the more recent trend, L3), or supporting data-exchange shortcuts
such as cache-to-cache transfer, helped reduce the off-chip traffic. However, more on-chip
cache levels put additional requirements on the on-chip interconnect in terms of

complexity and bandwidth. As data processing increases with more thread-level
parallelism, demands also typically increase on the off-chip communication fabric for
memory accesses, I/O, or CPU-to-CPU communication. To address these requirements,
the new trend in off-chip communication is a move from bus-based towards packet-based,
point-to-point interconnects. This concept was first implemented in HyperTransport for AMD
CPUs and later in Intel’s QuickPath Interconnect. The off-chip interconnect and
data-coherency support also impact the scalability of multiple-CPU servers.
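The area figure quoted for an 8 × 8 crossbar reflects the quadratic growth of crossbar cost with port count, which a trivial sketch makes concrete (the function name is invented for illustration):

```python
def crossbar_crosspoints(ports):
    """A full crossbar needs ports x ports crosspoints, so cost grows quadratically."""
    return ports * ports

# Doubling the port count quadruples the switch fabric -- one reason large
# on-chip crossbars get expensive, and why meshes scale better for many cores.
print(crossbar_crosspoints(8))    # 64
print(crossbar_crosspoints(16))   # 256
```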

2. Speed:

All current computer device technologies are indeed limited by the speed of electron
motion. This limitation is rather fundamental, because the fastest possible speed for
information transmission is of course the speed of light, and the speed of an electron is
already a substantial fraction of this. Where we hope for future improvements is not so
much in the speed of computer devices as in the speed of computation. At first, these may
sound like the same thing, until you realize that the number of computer device
operations needed to perform a computation is determined by something else: namely, an
algorithm.

A very efficient algorithm can perform a computation much more quickly than can an
inefficient algorithm, even if there is no change in the computer hardware. So further
improvement in algorithms offers a possible route to continuing to make computers
faster; better exploitation of parallel operations, pre-computation of parts of a problem,
and other similar tricks are all possible ways of increasing computing efficiency.
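The point about algorithms is easy to illustrate: the same problem can cost wildly different numbers of operations on identical hardware depending on the algorithm chosen. A minimal sketch, using search as the example problem (not one from the original text):

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n): examines items one by one."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halves the search space each step -- far fewer operations."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
# Both find the same answer, but binary search makes ~20 comparisons
# where linear search makes up to 1,000,000.
print(linear_search(data, 999_999) == binary_search(data, 999_999))  # True
```

No hardware changed between the two functions; only the algorithm did, which is the sense in which "speed of computation" can improve independently of device speed.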

These ideas may sound like they have nothing to do with 'physical restrictions,' but in fact
we have found that by taking into account some of the quantum-mechanical properties of
future computer devices, we can devise new kinds of algorithms that are much, much
more efficient for certain computations. We still know very little about the ultimate
limitations of these 'quantum algorithms'.

3. Graphic system:

A computer's graphics system determines how well it can work with visual output.
Graphics systems can either be integrated into a computer's motherboard, or plugged into
the motherboard as a separate "video card". Graphics systems integrated into the
motherboard (also known as "onboard graphics") are now quite powerful, and sufficient
for handling the requirements of most software applications aside from games playing,
3D modeling, and some forms of video editing.

Any form of modern computer graphics system can now display high-resolution colour
images on a standard-sized display screen (i.e. any monitor up to about 19" in size). What
the more sophisticated graphics cards now determine is how well a computer can handle
the playback of high definition video, as well as the speed and quality at which 3D scenes
(including games!) can be rendered. Another key feature of separate graphics cards is that
most of them now allow more than one display screen to be connected to a computer.
Others also permit the recording of video.

In effect, modern graphics cards have become dedicated computers in their own right,
with their own processor chips and RAM dedicated to video decoding and 3D rendering.
Not surprisingly, when it comes to final performance, the more RAM and the faster
and more sophisticated the processor available on a graphics card, the better. This said,
top-end graphics cards can cost up to a few thousand dollars or pounds.

As a basic rule, unless a computer is going to be used to handle 3D graphics or to
undertake a significant volume of video editing or recording, today there is little point in
opting for anything other than onboard graphics (not least because separate graphics
cards consume quite a lot of electricity and create quite a lot of heat and noise). Adding a
new graphics card to a computer with onboard graphics is also a very easy upgrade if
required in the future.

Graphics cards connect to what is known as either a "PCI Express" or an "AGP" slot on a
computer's motherboard. PCI Express is the more powerful and modern standard, with
the best graphics cards requiring the use of two PCI Express slots. A PC being upgraded
from onboard graphics sometimes also requires an upgraded power supply if it is to
continue to run in a stable fashion.

4. Hard Drive Speed and Capacity:


Hard disk drives are the high capacity storage devices inside a computer from
which software and user data are loaded. Like most other modern storage devices, the
capacity of the one or more internal hard disks inside a computer is measured in
gigabytes (GB). Today 40GB is an absolute minimum hard drive size for a new computer
running Windows 7, with a far larger capacity being recommended in any situation where
more than office software is going to be installed. Where a computer will frequently be
used to edit video, a second internal hard disk dedicated only to video storage is highly
recommended for stable operation. Indeed, for professional video editing using a program
like Premiere Pro CS5, Adobe now recommend that a PC has at least three internal hard
disks (one for the operating system and programs, one for video project files, and one for
video media). This is also not advice to be lightly ignored if you want your computer to
actually work!

Most computers are configured to use a proportion of a computer's internal hard disk to
store temporary files. Such a "swap file" enables the computer to operate effectively, and
means that some free hard disk space always needs to be available for a computer to run
properly. However, providing that a hard disk is large enough to store the
required software and user data without getting beyond about 80 per cent full, hard disk
capacity will have no impact on overall system performance. What does impact
significantly on overall system performance, however, is the speed of a computer's main internal
hard disk. This is simply because the longer it takes to read software and data from the
disk, and to access temporary files, the slower the computer will run.

Two key factors determine the speed of traditional, spinning hard disks. The first is the
rotational velocity of the physical disk itself. This can currently be 4200, 5400, 7200,
10000 or 15000 rpm (revolutions per minute). The faster the disk spins, the quicker data
can be read from or written to it, hence the faster the disk the better (although faster disks
consume more power, make more noise, and generate more heat). Most desktop hard
disks run at either 5400 or 7200 rpm, whilst most laptop hard disks run at 4200 or 5400.
However, upgrading to a 10000 or 15000 rpm disk (such as a VelociRaptor from Western
Digital) can prove one of the most cost-effective upgrades for increasing the performance
and responsiveness of a desktop computer.
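The effect of spindle speed on access time can be quantified: on average the platter must rotate half a revolution before the requested sector passes under the head. A quick sketch (the function name is invented for illustration):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2.0) * 1000.0

# Faster spindles cut the wait for data to come around under the head:
for rpm in (4200, 5400, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# e.g. 7200 rpm -> 4.17 ms, 15000 rpm -> 2.00 ms
```

This is only one component of access time (seek time and transfer rate also matter), but it shows directly why a 7200 rpm disk feels snappier than a 5400 rpm one.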

The second key factor that determines performance of a traditional, internal hard disk is
the interface used to connect it to the computer's motherboard. Three types of interface
exist: SATA, which is the most modern and now pretty much the norm on new
PCs; IDE (also known as UDMA), which is a slower and older form of interface; and
finally SCSI, which happens to be the oldest but in its most modern variant is still the
fastest disk interface standard. This said, SCSI is now all but redundant in desktop
computing since the introduction of SATA, as SATA provides a fairly high-speed
interface at much lower cost and complexity than SCSI.

The above points all noted, for users seeking ultimate performance, there is now the
option of installing a computer's operating system, programs and data on a solid state
drive (SSD), rather than a traditional, spinning hard disk. SSDs are far faster and more
energy efficient than traditional, spinning hard disks, which in time they will largely
replace. This said, at present SSDs are still a lot more expensive than traditional spinning
hard disks in terms of cost-per-gigabyte.

5. Installed Memory:

The amount and type of memory you have installed in your computer determines how
much data can be processed at once. Faster memory and a larger amount of RAM will
make your computer perform more quickly. Attempting to use an application that
overloads the RAM will give you significantly slower speeds.

6. Video Processing Components:

Some computers have the video processing hardware on the motherboard, while
others have a standalone video card. Generally, computers with a standalone
graphical processing unit (GPU) are better at displaying complex images and video
than computers with a GPU on the motherboard. Your video card partially determines
which software you can run and how smoothly it works.

7. Maintenance:

Even a computer made of the latest, best components still requires maintenance. It's
important to install updates, keep the computer free of malware and perform regular
upkeep. Updates from the operating system manufacturer and any hardware or software
updates can help the computer run better or fix problems. Malware slows your computer
down and can expose you to identity theft. An anti-virus program and regular scans will
help you avoid those issues. Make sure to defragment your computer, clean up old files
and shut it down regularly. Doing these things will help keep your system working quickly.
