
What is a GPU?

It is a processor optimized for 2D/3D graphics, video, visual computing, and display: a highly
parallel, highly multithreaded multiprocessor optimized for visual computing. It provides real-
time visual interaction with computed objects via graphics, images, and video.
GPUs are used in embedded systems, mobile phones, personal computers, workstations, and
game consoles.
Its evolution.
The first GPUs were designed as graphics accelerators, supporting only specific fixed-function
pipelines. Starting in the late 1990s, the hardware became increasingly programmable,
culminating in NVIDIA's GeForce 256 in 1999, the first chip marketed as a GPU. Less than a year after NVIDIA coined the term GPU,
artists and game developers weren't the only ones doing ground-breaking work with the
technology: Researchers were tapping its excellent floating point performance. The General
Purpose GPU (GPGPU) movement had dawned.
But GPGPU was far from easy back then, even for those who knew graphics programming
languages such as OpenGL. Developers had to map scientific calculations onto problems that
could be represented by triangles and polygons. GPGPU was practically off-limits to those who
hadn't memorized the latest graphics APIs until a group of Stanford University researchers set
out to reimagine the GPU as a "streaming processor."
Stream processing (http://en.wikipedia.org/wiki/Stream_processing)
In 2003, a team of researchers led by Ian Buck unveiled Brook, the first widely adopted
programming model to extend C with data-parallel constructs. Using concepts such as streams,
kernels and reduction operators, the Brook compiler and runtime system exposed the GPU as a
general-purpose processor in a high-level language. Most importantly, Brook programs were not
only easier to write than hand-tuned GPU code, they were seven times faster than similar
existing code.
NVIDIA knew that blazingly fast hardware had to be coupled with intuitive software and
hardware tools, and invited Ian Buck to join the company and start evolving a solution to
seamlessly run C on the GPU. Putting the software and hardware together, NVIDIA unveiled
CUDA in 2006, the world's first solution for general computing on GPUs.
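The stream programming model that Brook introduced, and that CUDA later generalized, can be sketched in plain Python. This is only an illustration of the concepts (streams, kernels, reduction operators); the function names `map_kernel` and `reduce_stream` are made up for this sketch and are not Brook or CUDA API.

```python
# Streams are sequences of data; a kernel is a side-effect-free function
# applied independently to every element, which is what makes it
# trivially parallelizable on GPU hardware.

def map_kernel(kernel, *streams):
    """Apply `kernel` elementwise across input streams (data parallelism)."""
    return [kernel(*elems) for elems in zip(*streams)]

def reduce_stream(op, stream):
    """Collapse a stream to a single value with an associative operator."""
    result = stream[0]
    for x in stream[1:]:
        result = op(result, x)
    return result

# Example: y = a*x + y (SAXPY), a classic data-parallel kernel.
a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
y = map_kernel(lambda xi, yi: a * xi + yi, x, y)
total = reduce_stream(lambda p, q: p + q, y)
```

Because each kernel invocation is independent, the runtime is free to execute all of them at once across the GPU's many cores; only the reduction imposes an ordering constraint, and even that can be parallelized in a tree.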
GPU vs CPU
The difference between CPUs and GPUs is that GPUs are highly specialized in number crunching,
something that graphics processing desperately needs, as it involves millions, if not billions, of
calculations per second.
A GPU is tailored for highly parallel operation, while a CPU executes programs serially.
For this reason, GPUs have many parallel execution units and higher transistor counts, while
CPUs have fewer execution units and higher clock speeds.
GPUs also have significantly faster and more advanced memory interfaces, as they need to move
around a lot more data than CPUs.




How computing is done on a GPU, or how the GPU works.
Architecture
Architecturally, the CPU is composed of only a few cores with lots of cache memory that can handle
a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can
handle thousands of threads simultaneously. The ability of a GPU with 100+ cores to process
thousands of threads can accelerate some software by 100x over a CPU alone. What's more, the
GPU achieves this acceleration while being more power- and cost-efficient than a CPU.
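The way those hundreds of cores divide up the work can be shown with a toy, sequential simulation in plain Python. Each "thread" computes exactly one output element, identified by its block and thread indices, in the style of a CUDA kernel launch; the names `vec_add_kernel` and `launch` are invented for this sketch, and on real hardware the loop bodies below would all run concurrently rather than one after another.

```python
def vec_add_kernel(block_idx, thread_idx, block_dim, a, b, out):
    """One 'thread' of work: compute a single element of out = a + b."""
    i = block_idx * block_dim + thread_idx   # global thread index
    if i < len(out):                         # guard: grid may overshoot the data
        out[i] = a[i] + b[i]

def launch(kernel, grid_dim, block_dim, *args):
    """Sequentially emulate a grid of grid_dim blocks x block_dim threads.
    A real GPU runs these invocations in parallel across its cores."""
    for block in range(grid_dim):
        for thread in range(block_dim):
            kernel(block, thread, block_dim, *args)

n = 10
a = list(range(n))        # [0, 1, ..., 9]
b = [10] * n
out = [0] * n
# 3 blocks of 4 threads = 12 threads cover the 10 elements (the guard
# discards the 2 extra threads).
launch(vec_add_kernel, 3, 4, a, b, out)
```

This is why the same program scales across very different GPUs: the programmer expresses one thread's work plus an index calculation, and the hardware decides how many threads actually run at once.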
