
A. A. Datti, 2018
Introduction
Applications of Computer Graphics
Graphics System
OpenGL Overview
Output Primitives
2D Transformations & Viewing
3D Transformations & Viewing

Computer graphics is the creation and manipulation of geometric
objects (models) and images using a computer.

Computer graphics is concerned with all aspects of producing pictures
or images using a computer.

Computer graphics is involved in any work that uses computation to
create or modify images, whether those images are still or moving;
interactive or fixed; on film, video, screen, or print.

 The applications of computer graphics are many and varied; we can,
however, divide them into four major areas:

1. Display of information (Maps, graphs and medical images)

2. Design (Engineering etc)

3. Simulation and animation (Training, Games, Magazines and
Movies, Virtual Reality and Scientific Simulations)

4. User interfaces (Operating systems and applications)

Many applications may span two or more of these areas.

Scientific Visualization – Geographic Info. Systems


Statistics – Charts and Graphs


Scientific Visualization – Charts and Graphs



Scientific Visualization - Medical


Infographic Posters

CAD/CAM – Mechanical Engineering

CAD/CAM - Architecture

CAD/CAM - Fashion

Scientific Simulations

Entertainment - Art

Training Simulations

Entertainment - Animations

Entertainment - Movies


Entertainment – Video Games


Graphical User Interfaces



Introduction
Applications of Computer Graphics
Graphics System
Hardware
Software

• There are six major hardware elements in a computer
graphics system:

• Input devices
• Processing
• Central Processing Unit
• Graphics Processing Unit
• Memory
• Main Memory
• Frame buffer
• Output devices
• There are two ways images are represented on digital output devices:
Raster and Vector.
• Virtually all modern graphics systems are raster based. The image
we see on the output device is an array—the raster—of picture
elements, or pixels, produced by the graphics system.
• Vector systems, on the other hand, represent an image as a collection
of lines.
* Each pixel corresponds to a location, or small area, in the
image.

* Collectively, the pixels are stored in a part of memory called the
frame buffer.

* The frame buffer can be viewed as the core element of a
graphics system.

* Its resolution - the number of pixels in the frame buffer -
determines the detail that you can see in the image. E.g. 1024 * 768
(= 786,432 pixels)
* The depth or precision of the frame buffer, defined as the number of
bits that are used for each pixel, determines properties such as how
many colors can be represented on a given system.

* For example, a 1-bit-deep frame buffer allows only two colors, whereas
an 8-bit-deep frame buffer allows 256 colors.

* In full-color systems, there are 24 (or more) bits per pixel. Such
systems can display sufficient colors to represent most images
realistically. They are also called true-color systems, or RGB-color
systems, because individual groups of bits in each pixel are assigned to
each of the three primary colors—red, green, and blue—used in most
displays.

* High dynamic range (HDR) systems use 12 or more bits for each color
component.
1 bit (2 colors), 2 bits (4 colors), 4 bits (16 colors),
8 bits (256 colors), 24 bits (16,777,216 colors), 32 bits (4,294,967,296 colors)
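The arithmetic behind these figures is easy to check: a d-bit-deep frame buffer can represent 2^d colors, and total frame-buffer memory is width × height × depth bits. A small Python sketch (the helper names are ours, not from any graphics API):

```python
def num_colors(depth_bits):
    """A d-bit-deep frame buffer can represent 2**d distinct colors."""
    return 2 ** depth_bits

def framebuffer_bytes(width, height, depth_bits):
    """Total frame-buffer memory: one depth_bits-wide value per pixel."""
    return width * height * depth_bits // 8

print(num_colors(1))    # 2 colors (1-bit-deep buffer)
print(num_colors(8))    # 256 colors
print(num_colors(24))   # 16,777,216 colors (true color)
print(framebuffer_bytes(1024, 768, 24))  # 2,359,296 bytes for a true-color 1024 * 768 buffer
```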


* In a simple system, there may be only one processor, the central
processing unit (CPU) of the system, which must do both the normal
processing and the graphical processing.
* The main graphical function of the processor is to take specifications of
graphical primitives (such as lines, circles, and polygons) generated by
application programs and to assign values to the pixels in the frame
buffer that best represent these entities.
* For example, if a triangle is specified by its three vertices, the graphics
system must generate a set of pixels that appear as line segments to the
viewer. The conversion of geometric entities to pixel colors and locations
in the frame buffer is known as rasterization, or scan conversion.

* Graphics Processing Units (on modern systems), on the other hand, are
dedicated processing units specialized in graphics functions.
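Scan conversion of a line segment can be sketched with Bresenham's midpoint algorithm, a standard choice, though the slides do not prescribe a particular one:

```python
def rasterize_line(x0, y0, x1, y1):
    """Bresenham's algorithm: choose the grid pixels that best
    approximate the line segment from (x0, y0) to (x1, y1)."""
    pixels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1    # step direction along x
    sy = 1 if y0 < y1 else -1    # step direction along y
    err = dx - dy                # running decision variable
    while True:
        pixels.append((x0, y0))  # "set" this pixel in the frame buffer
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return pixels

print(rasterize_line(0, 0, 4, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
```

In a real system the returned positions would be written into the frame buffer rather than collected in a list.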
Hardcopy
• Dot Matrix Printers
• Ink-Jet Printers
• Laser Printers
• 3D Printers
Softcopy
• Cathode Ray Tube (CRT)
• Liquid Crystal Display (LCD)
Miscellaneous
• Hologram
• Dot Matrix Printers - use a head with 7 to 24 pins to strike a ribbon
(single or multiple color).

• Ink Jet Printers: a printer in which the characters are formed by
minute jets of ink (fires small balls of colored ink).

• Laser Printers: a printer linked to a computer producing printed
material by using a laser to form a pattern of electrostatically
charged dots on a light-sensitive drum, which attract toner (or
dry ink powder). The toner is transferred to a piece of paper and
fixed by a heating process.

• 3D Printers: a machine allowing the creation of a physical object
from a three-dimensional digital model, typically by laying down
many thin layers of a material in succession.
• In a raster system, the graphics system takes pixels from the frame
buffer and displays them as points on the surface of the display.

•Examples
• Cathode Ray Tube (CRT)
• Liquid Crystal Display (LCD)
• Light Emitting Diode (LED) Display
 When electrons strike the phosphor coating on the tube, light is emitted.

 The direction of the beam is controlled by two pairs of deflection plates.

 Light appears on the surface of the CRT when a sufficiently intense beam of electrons is
directed at the phosphor.
 The screen is coated with phosphor, 3 colors for a color monitor.

 For a color monitor, three guns light up red, green, or blue phosphors.
 Liquid crystal displays use small flat chips which change their transparency properties
when a voltage is applied.

 LCD elements are arranged in an n x m array called the LCD matrix.

 LCD elements do not emit light, but use backlights behind the LCD matrix.

 Color is obtained by placing filters in front of each LCD element.

 Image quality is dependent on viewing angle.


Also divided into pixels, but without an electron gun firing at a screen, LCDs have
cells that either allow light to flow through, or block it.
 There are two primary types of input devices:
 Pointing devices and Keyboard devices.
 The pointing device allows the user to indicate a position on the screen and almost
always incorporates one or more buttons to allow the user to send signals to the
computer (mouse, joystick, touch screen and spaceballs).
 The keyboard device is almost always a physical keyboard but can be generalized to
include any device that returns character codes.
Introduction
Applications of Computer Graphics
Graphics System
Hardware
Software

Special Purpose
Word, Excel etc.
AutoCAD
Animation and Simulation Packages e.g. Maya
Visualization Packages e.g. GraphViz
Painting Packages e.g. MSPaint

General Purpose
Programming API (Application Program Interface)
 OpenGL
 DirectX
 Java2D and Java3D
 Programmer sees the graphics system through an interface: the Application
Programmer Interface (API)
Application
High-Level API (Java3D)
Low-Level Application Programming Interface (OpenGL)
Hardware and software
Input and Output Devices


Functions that specify what we need to form an image

 Objects are usually defined by sets of vertices.

 For simple geometric objects—such as line segments, rectangles, and
polygons—there is a simple relationship between a list of vertices, or
positions in space, and the object.

 For more complex objects, there may be multiple ways of defining the
object from a set of vertices. A circle, for example, can be defined by three
points on its circumference, or by its centre and one point on the
circumference.
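As an illustration, a circle given by its centre and one point on the circumference can be expanded into a vertex list suitable for drawing; the function name and default vertex count below are our own choices:

```python
import math

def circle_vertices(cx, cy, px, py, n=32):
    """Approximate a circle, given its centre (cx, cy) and one point
    (px, py) on the circumference, by the n vertices of a regular polygon."""
    r = math.hypot(px - cx, py - cy)  # radius from centre to the given point
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n))
            for i in range(n)]
```

A graphics API would then connect these vertices with line segments to render the circle.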
 Viewer/Camera: can be defined using four types of necessary specifications:

 Position: The camera location usually is given by the position of the center of the
lens, which is the center of projection (COP).

 Orientation: Once we have positioned the camera, we can place a camera
coordinate system with its origin at the center of projection. We can then rotate the
camera independently around the three axes of this system.

 Focal length: The focal length of the lens determines the size of the image on the
film plane or, equivalently, the portion of the world the camera sees.
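The position and orientation specifications above correspond to the inputs of the classic gluLookAt call (eye point, look-at point, up vector). A small sketch of how a camera coordinate frame could be derived from them; the helper names are ours:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_frame(eye, at, up):
    """Build the camera's coordinate axes from its position (eye),
    the point it looks at (at), and an up hint (up)."""
    n = normalize(tuple(e - a for e, a in zip(eye, at)))  # axis pointing away from the scene
    u = normalize(cross(up, n))                           # camera "right" axis
    v = cross(n, u)                                       # true camera "up" axis
    return u, v, n
```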
 Much of the work in the pipeline is in converting object
representations from one coordinate system to another
 World coordinates
 Camera coordinates
 Screen coordinates
 Every change of coordinates is equivalent to a matrix
transformation
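As a sketch of "every change of coordinates is a matrix transformation", here is a 2D example in homogeneous coordinates; a hypothetical pipeline fragment using plain 3x3 lists, not OpenGL code:

```python
import math

def mat_mul(a, b):
    """Product of two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    """Transform the homogeneous point (x, y, 1) by matrix m."""
    p = [m[i][0] * x + m[i][1] * y + m[i][2] for i in range(3)]
    return p[0], p[1]

t = math.pi / 2
rotate = [[math.cos(t), -math.sin(t), 0],   # rotate 90 degrees about the origin
          [math.sin(t),  math.cos(t), 0],
          [0, 0, 1]]
translate = [[1, 0, 2],                     # translate by (2, 0)
             [0, 1, 0],
             [0, 0, 1]]

# One combined matrix applies both coordinate changes in a single step.
m = mat_mul(rotate, translate)              # translate first, then rotate
print(apply(m, 1, 0))                       # (1,0) -> (3,0) -> rotated, approx (0, 3)
```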
 Just as a real camera cannot “see” the whole world, the virtual camera
can only see part of the world space
 Objects that are not within this volume are said to be clipped out of the scene
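A per-vertex clipping test against an axis-aligned view volume might look like the following; the canonical [-1, 1]^3 volume is assumed purely for illustration:

```python
def inside_view_volume(p, lo=(-1, -1, -1), hi=(1, 1, 1)):
    """True if point p = (x, y, z) lies inside the axis-aligned
    view volume [lo, hi]; points outside would be clipped out."""
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

print(inside_view_volume((0.5, 0.0, -0.2)))  # True: visible
print(inside_view_volume((2.0, 0.0, 0.0)))   # False: clipped
```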
 Must carry out the process that combines the 3D viewer
with the 3D objects to produce the 2D image
 If an object is visible in the image, the appropriate pixels in
the frame buffer must be assigned colors
 Vertices assembled into objects
 Effects of lights and materials must be determined
 Polygons filled with interior colors/shades
 Must also determine which objects are in front (hidden-surface
removal)
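Hidden-surface removal is typically done with a depth (z-) buffer. A minimal sketch, assuming a smaller z value means closer to the viewer; the fragment strings stand in for computed colors:

```python
# One depth value and one color per pixel of a tiny 4x4 frame buffer.
W, H = 4, 4
depth = [[float("inf")] * W for _ in range(H)]
color = [[None] * W for _ in range(H)]

def write_fragment(x, y, z, c):
    """Store fragment color c at (x, y) only if it is closer
    than whatever surface was drawn there before (the depth test)."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

write_fragment(1, 1, 0.8, "far triangle")
write_fragment(1, 1, 0.3, "near triangle")   # closer: overwrites
write_fragment(1, 1, 0.9, "farther still")   # fails the depth test: rejected
print(color[1][1])                           # near triangle
```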