Oct 2013
Chapter 1 Graphic Systems and Models
Section 1.3 Images Physical and Synthetic
Two basic entities must be part of any image formation process
1. Object
2. Viewer
The visible spectrum of light for humans is from 350 to 780 nm
A light source is characterized by its intensity and direction
A ray is a semi-infinite line that emanates from a point and travels to infinity
Ray tracing and photon mapping are examples of image formation techniques
Section 1.5 Synthetic Camera Model
The conceptual foundation for modern three-dimensional computer graphics is the synthetic-camera
model.
A few basic principles include:
Specification of object is independent of the specification of the viewer
Compute the image using simple geometric calculations
COP - Center of Projection (the center of the camera lens)
With synthetic cameras we move the clipping to the front by placing a clipping rectangle, or clipping
window in the projection plane. This acts as a window through which we view the world.
Section 1.7 - Graphic Architectures
2 main approaches
1. Object-Oriented Pipeline - vertices travel through the pipeline, which determines the color
and pixel positions.
2. Image-Oriented Pipeline - loop over pixels. For each pixel, work backwards to determine
which geometric primitives can contribute to its color.
Object-Oriented Pipeline
History
Graphics architecture has progressed from a single central processor doing all the graphics work to a
pipeline model. Pipeline architecture reduces the total processing time for a render (think of it as
multiple specialized processors, each performing a function and then passing the result on to the
next processor).
Advantages
Each primitive can be processed independently which leads to fast performance
Memory requirements reduced because not all objects are needed in memory at the same time
Disadvantages
Cannot handle most global effects such as shadows, reflections and blending in a physically
correct manner
4 major steps in pipeline
1. Vertex Processing
a. Does coordinate transformations
b. Computes color for each vertex
2. Clipping and Primitive Assembly
a. Clipping is performed on a primitive by primitive basis
3. Rasterization
a. Convert from vertices to fragments
b. Output of rasterizer is a set of fragments
4. Fragment Processing
a. Takes fragments generated by rasterizer and updates pixels
Fragments - think of a fragment as a potential pixel that carries information including its color,
location and depth.
6 major frames that occur in OpenGL
1. Object / Model Coordinates
2. World Coordinates
3. Eye or Camera Coordinates
4. Clip Coordinates
5. Normalized Device Coordinates
6. Window or Screen Coordinates
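As a rough sketch of how a vertex moves through these frames, the following plain-Python example pushes one homogeneous vertex through made-up model, view and projection matrices, then performs perspective division and a viewport mapping. The matrices and the 640x480 window are invented for illustration (the projection is a bare-bones one that discards depth), not taken from the textbook:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a homogeneous 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Object (model) coordinates: one vertex of a model, w = 1
vertex = [1.0, 0.0, 0.0, 1.0]

# Model matrix: place the model in the world (translate +2 along x)
model = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
world = mat_vec(model, vertex)                 # world coordinates

# View matrix: camera sits at z = +5 looking down -z (translate by -5 in z)
view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]
eye = mat_vec(view, world)                     # eye (camera) coordinates

# A toy perspective projection (projection plane z = -1; depth is discarded)
proj = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -1, 0]]
clip = mat_vec(proj, eye)                      # clip coordinates

# Perspective division by w gives normalized device coordinates
ndc = [c / clip[3] for c in clip[:3]]

# Viewport transform: map NDC x, y in [-1, 1] into a 640x480 window
win_x = (ndc[0] + 1) / 2 * 640
win_y = (ndc[1] + 1) / 2 * 480
print(world, eye, ndc, (win_x, win_y))
```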
Example Questions for Chapter 1
Textbook Question 1.1
What are the main advantages and disadvantages of the preferred method to form computer-
generated images discussed in this chapter?
Textbook Question 1.5
Each image has a set of objects and each object comprises a set of graphical primitives. What does
each primitive comprise? What are the major steps in the imaging process?
Exam Jun 2011 1.a (6 marks)
Differentiate between the object oriented and image oriented pipeline implementation strategies
and discuss the advantages of each approach? What strategy does OpenGL use?
Exam Jun 2012 1.a (4 marks)
What is the main advantage and disadvantage of using the pipeline approach to form computer
generated images?
Exam Jun 2012 1.b (4 marks)
Differentiate between the object oriented and image oriented pipeline implementation strategies
Exam Jun 2012 1.c (4 marks)
Name the frames in the usual order in which they occur in the OpenGL pipeline
Exam Jun 2013 1.3 (3 marks)
Can the standard OpenGL pipeline easily handle light scattering from object to object? Explain?
Chapter 2 Graphics Programming
Key concepts that need to be understood
Typical composition of Vertices / Primitive Objects
Size & Colour
Immediate mode vs. retained mode graphics
Immediate mode
- Used to be the standard method for displaying graphics
- There is no memory of the geometric data stored
- Large overhead in time needed to transfer drawing instructions and model data for each
cycle to the GPU
Retained mode graphics
- Has the data stored in a data structure which allows it to redisplay the data with the option
of slight modifications (i.e. change color) by resending the array without regenerating the
points.
Retained mode is the opposite of immediate: most rendering data is pre-loaded onto the graphics
card and thus when a render cycle takes place, only render instructions, and not data, are sent.
Both immediate and retained mode can be used at the same time on all graphics cards, though the
moral of the story is that if possible, use retained mode to improve performance.
Coordinate Systems
Device Dependent Graphics - Originally graphic systems required the user to specify all information
directly in units of the display device (i.e. pixels).
Device Independent Graphics - Allows users to work in any coordinate system that they desire.
World coordinate system - the coordinate system that the user decides to work in
Vertex coordinates - the units that an application program uses to specify vertex positions.
At some point with device independent graphics the values in the vertex coordinate system must be
mapped to window coordinates. The graphic system rather than the user is now responsible for this
task and mapping is performed automatically as part of the rendering process.
Color RGB vs. Indexed
With both the indexed and RGB color models the number of colors that can be displayed depends on
the depth of the frame (color) buffer.
Indexed Color Model
In the past, memory was expensive and small and displays had limited colors.
This meant that the indexed-color model was preferred because
- It had lower memory requirements
- Displays had limited colors available.
In an indexed color model a color lookup table is used to identify which color to display.
Color indexing presented 2 major problems
1) When working with dynamic images that needed shading we would typically need more
colors than were provided by the color index mode.
2) The interaction with the window system is more complex than with RGB color.
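A toy sketch of how a color lookup table works (the table entries and buffer contents below are invented for illustration):

```python
# A toy colour lookup table: an indexed frame buffer stores one small index
# per pixel; the table maps each index to a full RGB triple.
lookup_table = {
    0: (0, 0, 0),        # black
    1: (255, 0, 0),      # red
    2: (0, 255, 0),      # green
    3: (255, 255, 255),  # white
}

# The frame buffer itself holds only indices, not full RGB values
frame_buffer = [1, 1, 0, 3, 2]

# At display time each index is resolved through the table
displayed = [lookup_table[i] for i in frame_buffer]
print(displayed[0])   # (255, 0, 0)
```

This shows the memory saving: each pixel needs only enough bits to index the table, while the table holds the few full-depth colors actually in use.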
RGB Color Model
As hardware has advanced, RGB has become the norm.
Think of RGB conceptually as three separate buffers, one for red, green and blue. It allows us to
specify the proportion of red, green and blue in a single pixel. In OpenGL this is often stored in a
three dimensional vector.
The RGB color model can become unsuitable when the depth of the frame buffer is small, because
neighbouring shades become too distinct (the colors appear discrete rather than continuous).
Viewing Orthographic and Two Dimensional
The orthographic view is the simplest and is OpenGL's default view. Mathematically, the orthographic
projection is what we would get if the camera in our synthetic camera model had an infinitely long
telephoto lens and we could then place the camera infinitely far from our objects.
In OpenGL, an orthographic projection with a right-parallelepiped viewing volume is the default. The
orthographic projection sees only those objects in the volume specified by the viewing volume.
Two dimensional viewing is a special case of three-dimensional graphics. Our viewing area is in the
plane z = 0, within a three dimensional viewing volume. The area of the world that we image is
known as the viewing rectangle, or clipping rectangle. Objects inside the rectangle are in the image;
objects outside are clipped out.
Aspect Ratio and Viewports
Aspect Ratio - The aspect ratio of a rectangle is the ratio of the rectangle's width to its height. The
independence of the object, viewing, and workstation window specifications can cause undesirable
side effects if the aspect ratio of the viewing rectangle is not the same as the aspect ratio of the
window specified.
In GLUT we use glutInitWindowSize to set the window size. Side effects can include distortion. Distortion is a
consequence of our default mode of operation, in which the entire clipping rectangle is mapped to
the display window.
Clipping Rectangle - The only way we can map the entire contents of the clipping rectangle to the
entire display window is to distort the contents of clipping rectangle to fit inside the display window.
This is avoided if the display window and clipping rectangle have the same aspect ratio.
Viewport - Another more flexible approach is to use the concept of a viewport. A viewport is a
rectangular area of the display window. By default it is the entire window, but it can be set to any
smaller size in pixels.
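The usual reshape-callback arithmetic for avoiding distortion can be sketched in plain Python (the function name and window sizes are invented for illustration):

```python
def fit_viewport(win_w, win_h, aspect):
    """Largest viewport inside a win_w x win_h window with the given
    width/height aspect ratio; returns (x, y, w, h) in pixels, centred
    in the window (the usual reshape-callback computation)."""
    if win_w / win_h > aspect:          # window too wide: pillarbox
        vp_h = win_h
        vp_w = int(win_h * aspect)
    else:                               # window too tall: letterbox
        vp_w = win_w
        vp_h = int(win_w / aspect)
    return ((win_w - vp_w) // 2, (win_h - vp_h) // 2, vp_w, vp_h)

# A square (1:1) clipping rectangle shown in a 640x480 window:
print(fit_viewport(640, 480, 1.0))      # (80, 0, 480, 480)
```

The result would be passed to glViewport; because the viewport's aspect ratio matches the clipping rectangle's, the image is not distorted.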
OpenGL Programming Basics
Event Processing
Event processing allows us to program how we would like the system to react to certain events.
These could include mouse, keyboard or window events.
Callbacks (Display, Reshape, Idle, Keyboard, Mouse)
Each event has a callback that can be specified. The callback is invoked to trigger actions when the
corresponding event occurs.
The idle callback is invoked when there are no other events to trigger. A typical use of the idle
callback is to continue to generate graphical primitives through a display function while nothing else
is happening.
Hidden Surface Removal
Given the position of the viewer and the objects being rendered we should be able to draw the
objects in such a way that the correct image is obtained. Algorithms for ordering objects so that they
are drawn correctly are called visible-surface algorithms (or hidden-surface removal algorithms).
Z-buffer algorithm - A common hidden surface removal algorithm supported by OpenGL.
Double buffering
Why we need double buffering
Because an application program typically works asynchronously, changes can occur to the display
buffer at any time. Depending on when the display is updated, this can cause the display to show
partially updated results.
What is double buffering
A way to avoid partial updates. Instead of a single frame buffer, the hardware has two frame buffers.
Front buffer - the buffer that is displayed
Back buffer - the buffer that is being updated
Once updating the back buffer is complete, the front and back buffer are swapped. The new back
buffer is then cleared and the system starts updating it.
To trigger a refresh using double buffering in OpenGL we call glutSwapBuffers();
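The whole scheme can be simulated in a few lines of plain Python (a toy model, not OpenGL):

```python
# A minimal simulation of double buffering: draw into the back buffer,
# then swap, so the "display" (front buffer) never shows a partial frame.
front = ["frame 0"] * 4          # buffer currently on screen
back = [None] * 4                # buffer being updated

for i in range(4):               # render the next frame into the back buffer
    back[i] = "frame 1"
    # at any point here, the screen still shows a complete "frame 0"

front, back = back, front        # the swap (glutSwapBuffers in GLUT)
back = [None] * 4                # clear the new back buffer for the next frame

print(front)   # ['frame 1', 'frame 1', 'frame 1', 'frame 1']
```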
Menus
Glut provides pop-up menus that can be used.
An example of doing this in code would be:

void demo_menu(int id)
{
    //react to the menu selection based on id
}

glutCreateMenu(demo_menu);  //Create menu and register its callback
glutAddMenuEntry("quit", 1);
glutAddMenuEntry("start rotation", 2);
glutAttachMenu(GLUT_RIGHT_BUTTON);
Purpose of glFlush statements
Similar to I/O buffering on a computer, OpenGL commands are not executed immediately. All
commands are first stored in buffers (including network buffers and buffers on the graphics
accelerator itself), and execution is delayed until the buffers are full. For example, if an application
runs over a network, it is much more efficient to send a collection of commands in a single packet
than to send each command over the network one at a time.
glFlush() - empties all commands in these buffers and forces all pending commands to be executed
immediately without waiting for buffers to get full. glFlush() guarantees that all OpenGL commands
made up to that point will complete execution in a finite amount of time after calling glFlush().
glFlush() does not wait until previous executions are complete and may return immediately to your
program. So you are free to send more commands even though previously issued commands are not
finished.
Vertex Shaders and Fragment Shaders
OpenGL requires a minimum of a vertex and fragment shader.
Vertex Shader
A simple vertex shader determines the color and passes the vertex location to the fragment shader.
The absolute minimum a vertex shader must do is send a vertex location to the rasterizer.
In general a vertex shader will transform the representation of a vertex location from whatever
coordinate system in which is it specified to a representation in clip coordinates for the rasterizer.
Shaders are written using GLSL (which resembles a simplified C).
An example would be:

in vec4 vPosition;

void main()
{
    gl_Position = vPosition;
}

gl_Position is a built-in variable known to OpenGL and used to pass the vertex position to the
rasterizer.
Fragment Shader
Each invocation of the vertex shader outputs a vertex that then goes through primitive assembly and
clipping before reaching the rasterizer. The rasterizer outputs fragments for each primitive inside the
clipping volume. Each fragment invokes an execution of the fragment shader.
At a minimum, each execution of the fragment shader must output a color for the fragment unless
the fragment is to be discarded.
void main()
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Shaders need to be compiled and linked as a bare minimum for things to work.
Example Questions for Chapter 2
Exam Nov 2011 1.1 (4 marks)
Explain what double buffering is and how it is used in computer graphics.
Exam Jun 2011 6.a (4 marks)
Discuss the difference between the RGB color model and the indexed color model with respect to the
depth of the frame (color) buffer.
Exam Nov 2012 5.4 (4 marks)
Discuss the difference between the RGB color model and the indexed color model with respect to
the depth of the frame (color) buffer.
Exam Nov 2012 1.1 (3 marks)
A real-time graphics program can use a single frame buffer for rendering polygons, clearing the
buffer, and repeating the process. Why do we usually use two buffers instead?
Exam Jun 2013 8.2 (5 marks)
GLUT uses a callback function event model. Describe how it works and state the purpose of the idle,
display and reshape callbacks.
Exam Jun 2013 8.3 (1 marks)
What is the purpose of the OpenGL glFlush statement.
Exam Jun 2013 8.4 (1 marks)
Is the following code a fragment or vertex shader
in vec4 vPosition;
void main() {gl_Position = vPosition;}
Exam Jun 2013 1.1 (4 marks)
Explain the difference between immediate mode graphics and retained mode graphics.
Exam Jun 2013 1.2 (2 marks)
Name two artifacts in computer graphics that may commonly be specified at the vertices of a
polygon and then interpolated across the polygon to give a value for each fragment within the
polygon.
Chapter 3 Geometric Objects and Transformations
Key concepts you should know in this chapter are the following:
Surface Normals
Normals are vectors that are perpendicular to a surface. They can be used to describe the
orientation or direction of that surface.
Uses of surface normals include
Together with a point, a normal can be used to specify the equation of a plane
The shading of objects depends on the orientation of their surfaces, a factor that is characterized
by the normal vector at each point.
Flat shading uses a single surface normal for a whole polygon, so the computed shade is the same at
all points on the surface.
Calculating smooth (Gouraud and Phong) shading.
Ray tracing and light interactions can be calculated from the angle of incidence and the normal.
Homogeneous Coordinates
Because there can be confusion between vectors and points, we use homogeneous coordinates.
For a point, the fourth coordinate is 1 and for a vector it is 0. For example
The point (4,5,6) is represented in homogeneous coordinates by (4,5,6,1)
The vector (4,5,6) is represented in homogeneous coordinates by (4,5,6,0)
Advantages of homogeneous coordinates include
All affine (line-preserving) transformations can be represented as matrix multiplications in
homogeneous coordinates.
Less arithmetic work is involved.
The uniform representation of all affine transformations makes carrying out successive
transformations far easier than in three-dimensional space.
Modern hardware implements homogeneous-coordinate operations directly, using parallelism to
achieve high-speed calculations.
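The point/vector distinction can be checked directly: a translation in homogeneous coordinates moves points but leaves vectors alone. A plain-Python sketch:

```python
# Translation in homogeneous coordinates: it moves points (w = 1) but
# leaves vectors (w = 0) unchanged, exactly as the geometry requires.
def translate(d, v):
    """Apply a translation by d = (dx, dy, dz) to a homogeneous 4-tuple."""
    dx, dy, dz = d
    return (v[0] + dx * v[3], v[1] + dy * v[3], v[2] + dz * v[3], v[3])

point = (4, 5, 6, 1)    # the point (4,5,6)
vector = (4, 5, 6, 0)   # the vector (4,5,6)

print(translate((1, 1, 1), point))    # (5, 6, 7, 1): the point moved
print(translate((1, 1, 1), vector))   # (4, 5, 6, 0): the vector did not
```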
Instance Transformations
An instance transformation is the product of a translation, a rotation and a scaling.
The order of the transformations that comprise an instance transformation will affect the outcome.
For instance, if we rotate a square before we apply a non-uniform scale, we will shear the square,
something we cannot do if we scale then rotate.
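This can be verified numerically. In the sketch below (plain Python, 2D for simplicity), a unit square's edge vectors stay perpendicular under scale-then-rotate but not under rotate-then-scale, which is exactly the shear described above:

```python
import math

def rot(t):
    """2x2 rotation matrix for angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def scale(sx, sy):
    return [[sx, 0], [0, sy]]

def mul(a, b):   # 2x2 matrix product a*b
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

R, S = rot(math.pi / 4), scale(2, 1)   # 45 degrees, non-uniform scale

# Rotate first, then scale: the square's edges are no longer perpendicular
u, v = apply(mul(S, R), (1, 0)), apply(mul(S, R), (0, 1))
print(round(dot(u, v), 6))   # -1.5  -> sheared

# Scale first, then rotate: edges stay perpendicular (a rotated rectangle)
u, v = apply(mul(R, S), (1, 0)), apply(mul(R, S), (0, 1))
print(round(dot(u, v), 6))   # 0.0   -> no shear
```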
Frames in OpenGL
The following is the usual order in which the frames occur in the pipeline.
1) Object (or model) coordinates
2) World coordinate
3) Eye (or camera) coordinates
4) Clip coordinates
5) Normalized device coordinates
6) Window (or screen) coordinates
Model Frame (Represents an object we want to render in our world).
A scene may comprise many models, each of which is oriented, sized and positioned in the World
coordinate system.
World Frame - also referred to as the application frame - represents values in world coordinates.
If we do not apply transformations to our object frames, the world and model coordinates are the
same.
The camera frame (or eye frame) is a frame whose origin is the center of the camera lens and whose
axes are aligned with the sides of the camera.
Because there is an affine transformation that corresponds to each change of frame, there are 4x4
matrices that represent the transformation from model coordinates to world coordinates and from
world coordinates to eye coordinates. These transformations are usually concatenated together into
the model-view transformation, which is specified by the model-view matrix.
After transformation, vertices are still represented in homogeneous coordinates. The division by the
w component, called perspective division, yields three-dimensional representations in normalized
device coordinates.
The final transformation takes a position in normalized device coordinates and, taking into account
the viewport, creates a three-dimensional representation in window coordinates.
Translation, Rotation, Scaling and Shearing
Know how to perform Translation, Rotation, Scaling and Shearing (You do not have to learn off the
matrices, they will be given to you if necessary).
Affine transformation - An affine transformation is any transformation that preserves collinearity
(i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances
(e.g., the midpoint of a line segment remains the midpoint after transformation).
Rigid-body Transformations - Rotation and translation are known as rigid-body transformations. No
combination of rotations and translations can alter the shape or volume of an object; they can alter
only the object's location and orientation.
Within a frame, each affine transformation is represented by a 4x4 matrix of the form
Translation
Translation displaces points by a fixed distance in a given direction.
P' = P + d
We can also get the same result using the matrix multiplication
P' = TP
where
T is called the translation matrix
Rotation
Two dimensional rotations
Three dimensional rotations.
Rotation about the x-axis by an angle θ followed by rotation about the y-axis by an angle φ does not
give us the same result as the one that we obtain if we reverse the order of the rotations.
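A quick numerical check of this (plain Python; 90-degree rotations applied to the point (0,0,1), rounded for display):

```python
import math

def rot_x(t):
    """3x3 rotation about the x-axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    """3x3 rotation about the y-axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def apply3(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

p = (0.0, 0.0, 1.0)
t = math.pi / 2   # 90 degrees

a = apply3(rot_y(t), apply3(rot_x(t), p))   # rotate about x, then y
b = apply3(rot_x(t), apply3(rot_y(t), p))   # rotate about y, then x

print(tuple(round(c, 6) + 0.0 for c in a))   # (0.0, -1.0, 0.0)
print(tuple(round(c, 6) + 0.0 for c in b))   # (1.0, 0.0, 0.0)
```

The two orders send the same point to different places, so three-dimensional rotations about different axes do not commute.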
Scaling
P' = SP, where S is the scaling matrix
Shear
Sections that are not examinable include 3.13 & 3.14
Examples of different types of matrices are below
Example Questions for Chapter 3
Exam Jun 2011 2 (6 marks)
Consider the diagram below and answer the question that follows
a) Determine the transformation matrix which will transform the square ABCD to the square
A'B'C'D'. Show all workings.
Hint Below is the transformation matrices for clockwise and anticlockwise rotation about the z-axis.
b) Using the transformation matrix in a, calculate the new position of A if the transformation
was performed on ABCD
Exam Nov 2011 2.1 & 2.2 (6 marks)
Consider a triangular prism with vertices a,b,c,d,e and f at (0,0,0),(1,0,0),(0,0,1),(0,2,0),(1,2,0) and
(0,2,1), respectively.
Perform scaling by a factor of 15 along the x-axis. (2 marks)
Then perform a clockwise rotation by 45° about the y-axis (4 marks)
Hint: The transformation matrix for rotation about the y-axis is given alongside (where theta is the
angle of rotation)
Exam Nov 2012 2.2 (6 marks)
Consider the following 4x4 matrices
Which of the matrices reflect the following (give the correct letter only)
2.2.1 Identity Matrix (no effect)
2.2.2 Uniform Scaling
2.2.3 Non-uniform scaling
2.2.4 Reflection
2.2.5 Rotation about z
2.2.6 Rotation
Exam June 2012 2.a (1 marks)
What is an instance transformation?
Exam June 2012 2.b (3 marks)
Will you get the same effect if the order of transformations that comprise an instance
transformation were changed? Explain using an example.
Exam June 2012 2.c (4 marks)
Provide a mathematical proof to show that rotation and uniform scaling commute.
Exam June 2013 2.1 (3 marks)
Do the following transformation sequences commute? If they do commute under certain conditions
only, state those conditions.
2.1.1 Rotation
2.1.2 Rotation and Scaling
2.1.3 Two Rotations
Exam June 2013 2.2 (5 marks)
Consider a line segment (in 3 dimensions) with endpoints a and b at (0,1,0) and (1,2,3) respectively.
Compute the coordinates of vertices that result after each application of the following sequence of
transformations of the line segment.
2.2.1 Perform scaling by a factor of 3 along the x-axis
2.2.2 Then perform a translation of 2 units along the y-axis
2.2.3 Finally perform an anti-clockwise rotation by 60° about the z-axis
Hint: the transformation matrix for rotation about the z-axis is given below (where omega is the
angle of anti-clockwise rotation)
Chapter 4 - Viewing
Important concepts for Chapter 4 include
Planar Geometric Projections are the class of projections produced by parallel and perspective
views. A planar geometric projection is a projection where the surface is a plane and the projectors
are lines.
4.1 Classical and Computer Viewing
Two types of views
1. Perspective Views - views with a finite COP
2. Parallel Views - views with an infinite COP
Classical and computer viewing COP / DOP
COP - Center of Projection. In computer graphics it is the origin of the camera frame for perspective
views.
DOP - Direction of Projection
PRP - Projection Reference Point
In classical viewing there is an underlying notion of a principal face.
Different types of classical views include
Parallel Viewing
Orthographic Projections - parallel view; shows a single plane
Axonometric Projections - parallel view; projectors are still orthogonal to the projection plane,
but the projection plane can have any orientation with respect to the object (isometric,
dimetric and trimetric views)
Oblique Projections - the most general parallel view; the most difficult views to construct by hand.
Perspective Viewing
Characterized by diminution of size
Classical perspective views are known as one, two and three point perspective
Parallel lines in each of the three principal directions converge to a finite vanishing point
Perspective foreshortening - The farther an object is from the center of projection, the smaller it
appears. Perspective drawings are characterized by perspective foreshortening and vanishing points.
Perspective foreshortening is the illusion that objects and lengths appear smaller as their distance
from the center of projection increases. The points at which parallel lines appear to converge are
called vanishing points. Principal vanishing points are formed by the apparent intersection of lines
parallel to one of the three x, y or z axes. The number of principal vanishing points is determined by
the number of principal axes intersected by the view plane.
4.2 Viewing with a computer read only
4.3.1 Positioning of camera frame read only
4.3.2 Normalization
Normalization transformation - specification of the projection matrix
VRP - View Reference Point
VRC - View Reference Coordinate
VUP - View Up Vector - the up direction of the camera
VPN - View Plane Normal - the orientation of the projection plane, or back of the camera
Camera is positioned at the origin, pointing in the negative z direction. Camera is centered at a point
called the View Reference Point (VRP). Orientation of the camera is specified by View Plane Normal
(VPN) and View Up Vector (VUP). The View Plane Normal is the orientation of the projection plane
or back of camera. The orientation of the plane does not specify the up direction of the camera
hence we have View Up Vector (VUP) which is the up direction of the camera. VUP fixes the camera.
Viewing Coordinate System - the orthogonal coordinate system (see pg 240)
View Orientation Matrix - the matrix that does the change of frames. It is equivalent to the viewing
component of the model-view matrix. (Not necessary to know formulae or derivations)
4.3.3 Look-at function
The use of VRP, VPN and VUP is but one way to provide an API for specifying the position of a
camera.
The LookAt function creates a viewing matrix derived from an eye point, a reference point indicating
the center of the scene, and an up vector. The matrix maps the reference point to the negative z-axis
and the eye point to the origin, so that when you use a typical projection matrix, the center of the
scene maps to the center of the viewport. Similarly, the direction described by the up vector
projected onto the viewing plane is mapped to the positive y-axis so that it points upward in the
viewport. The up vector must not be parallel to the line of sight from the eye to the reference point.
Given the eye point e and the reference (at) point a, VPN = a - e. (Not necessary to know other
formulae or derivations)
4.3.4 Other Viewing APIs - read only
4.4 Parallel Projections
A parallel projection is the limit of a perspective projection as the center of projection (COP) moves
infinitely far from the object being viewed.
Orthogonal projections - a special kind of parallel projection in which the projectors are
perpendicular to the view plane. A single orthogonal view is restricted to one principal face of an
object.
Axonometric view - projectors are perpendicular to the projection plane, but the projection plane can
have any orientation with respect to the object.
Oblique projections - projectors are parallel but can make an arbitrary angle to the projection plane,
and the projection plane can have any orientation with respect to the object.
Projection Normalization - a process using translation and scaling that will transform vertices in
camera coordinates to fit inside the default view volume. (see page 247/248 for detailed
explanation).
4.4.5 Oblique Projections - Leave out
4.5 Perspective projections
Perspective projections are what we get with a camera whose lens has a finite focal length or, in
terms of our synthetic camera model, when the center of projection is finite.
4.5.1 Simple Perspective Projections - Not necessary to know formulae and derivations
Read pg. 257
4.6 View volume, Frustum, Perspective Functions
Two perspective functions you need to know
1. mat4 Frustum(left, right, bottom, top, near, far)
2. mat4 Perspective(fovy, aspect, near, far)
(All variables are of type GLfloat)
View Volume (Canonical)
The view volume can be thought of as the volume that a real camera would see through its lens
(Except that it is also limited in distance from the front and back). It is a section of 3D space that is
visible from the camera or viewer between two distances.
When using orthogonal (or parallel) projections, the view volume is rectangular. In OpenGL, an
orthographic projection is defined with the function call glOrtho(left, right, bottom, top, near, far).
When using perspective projections, the view volume is a frustum and has a truncated-pyramid
shape. In OpenGL, a perspective projection is defined with the function call glFrustum(xmin, xmax,
ymin, ymax, near, far) or gluPerspective(fovy, aspect, near, far).
NB: Not necessary to know formulae or derivations.
4.7 Perspective-Projection Matrices read only
4.8 Hidden surface removal
Conceptually we seek algorithms that either remove those surfaces that should not be visible to the
viewer, called hidden-surface-removal algorithms, or find which surfaces are visible, called visible-
surface-algorithms.
OpenGL has a particular algorithm associated with it, the z-buffer algorithm, to which we can
interface through three function calls.
Hidden-surface-removal algorithms can be divided into two broad classes
1. Object-space algorithms
2. Image-space algorithms
Object-space algorithms
Object space algorithms attempt to order the surfaces of the objects in the scene such that
rendering surfaces in a particular order provides the correct image. i.e. render objects furthest back
first.
This class of algorithms does not work well with pipeline architectures in which objects are passed
down the pipeline in an arbitrary order. In order to decide on a proper order in which to render the
objects, the graphics system must have all the objects available so it can sort them into the desired
back-to-front order.
Depth Sort Algorithm
All polygons are rendered with hidden surface removal as a consequence of back to front rendering
of polygons. Depth sort orders the polygons by how far away from the viewer their maximum z-
value is. If the minimum depth (z-value) of a given polygon is greater than the maximum depth of
the polygon behind the one of interest, we can render the polygons back to front.
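A minimal sketch of the sort itself (plain Python; the polygons and depth values are invented, and the depth ranges are chosen not to overlap, i.e. the easy case depth sort handles without splitting polygons):

```python
# A minimal painter's-algorithm sketch: sort polygons by depth and draw the
# farthest first, so nearer polygons overwrite farther ones.
polygons = [
    {"name": "triangle", "min_z": 4.0, "max_z": 6.0},
    {"name": "circle",   "min_z": 1.0, "max_z": 2.0},
    {"name": "square",   "min_z": 8.0, "max_z": 9.0},
]

# Order by maximum depth, farthest from the viewer first
draw_order = sorted(polygons, key=lambda p: p["max_z"], reverse=True)
print([p["name"] for p in draw_order])   # ['square', 'triangle', 'circle']
```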
Image-space algorithms
Image-space algorithms work as part of the projection process and seek to determine the
relationship among object points on each projector. The z-buffer algorithm is an example of this.
Z-Buffer Algorithm
The basic idea of the z-buffer algorithm is that for each fragment on the polygon corresponding to
the intersection of the polygon with a ray (from the COP) through a pixel we compute the depth
from the center of projection. If this depth is greater than the depth currently stored in the z-buffer,
the fragment is ignored; otherwise the z-buffer is updated and the color buffer receives the
fragment's color.
Ultimately we display only the closest point on each projector. The algorithm requires a depth
buffer, or z-buffer, to store the necessary depth information as polygons are rasterized.
Because we must keep depth information for each pixel in the color buffer, the z-buffer has the
same spatial resolution as the color buffers. The depth buffer is initialized to a value that
corresponds to the farthest distance from the viewer.
For instance with the diagram below, a projector from the COP passes through two surfaces.
Because the circle is closer to the viewer than the triangle, it is the circle's color that determines
the color placed in the color buffer at the location corresponding to where the projector pierces the
projection plane.
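The inner loop of the algorithm is short enough to sketch directly (a toy 1-D version in plain Python; the fragments are invented, and smaller depth means closer to the viewer):

```python
# A tiny z-buffer sketch over a 1-D row of 4 pixels.  Each fragment carries
# (pixel index, depth, colour); smaller depth = closer to the viewer.
FAR = float("inf")
z_buffer = [FAR] * 4                  # initialised to the farthest distance
color_buffer = ["background"] * 4

fragments = [
    (1, 5.0, "triangle"),             # triangle fragment at pixel 1, depth 5
    (1, 2.0, "circle"),               # circle fragment at pixel 1, depth 2
    (2, 7.0, "triangle"),
]

for pixel, depth, color in fragments:
    if depth < z_buffer[pixel]:       # closer than what is stored: keep it
        z_buffer[pixel] = depth
        color_buffer[pixel] = color   # otherwise the fragment is ignored

print(color_buffer)   # ['background', 'circle', 'triangle', 'background']
```

Note that pixel 1 ends up with the circle's color even though the triangle's fragment arrived first, which is why fragments can be processed in any order.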
2 Major Advantages of Z-Buffer Algorithm
- Its complexity is proportional to the number of fragments generated by the rasterizer
- It can be implemented with a small number of additional calculations over what we have to
do to project and display polygons without hidden-surface removal
Handling Translucent Objects using the Z-Buffer Algorithm
Any object behind an opaque object (solid object) should not be rendered. Any object behind a
translucent object (see through objects) should be composited.
Basic approach in the z-buffer algorithm would be
- If the depth information allows a pixel to be rendered, it is blended (composited) with the pixel
already stored there.
- If the pixel is part of an opaque polygon, the depth data is updated.
4.9 Displaying Meshes read only
4.10 Projections and Shadows - Know
The creation of simple shadows is an interesting application of projection matrices.
To add physically correct shadows we would typically have to do global calculations that are difficult.
This normally cannot be done in real time.
There is the concept of a shadow polygon: a flat polygon formed by projecting the original polygon
onto a surface, with the center of projection placed at the light source.
Shadows are easier to calculate if the light source is not moving, if it is moving, the shadows would
possibly need to be calculated in the idle callback function.
For a simple environment such as a plane flying over a flat terrain casting a single shadow, this is an
appropriate approach. When objects can cast shadows on other objects, this method becomes
impractical.
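For the flat-terrain case, the per-vertex projection is simple enough to sketch (plain Python; the light and vertex positions are invented, and the ground plane is taken as y = 0):

```python
def shadow_on_ground(light, p):
    """Project point p onto the plane y = 0 along the ray from a point
    light source (the shadow-polygon construction, applied per vertex).
    Assumes the light is above the point (light[1] > p[1] > 0)."""
    lx, ly, lz = light
    px, py, pz = p
    t = ly / (ly - py)                    # parameter where the ray hits y = 0
    return (lx + t * (px - lx), 0.0, lz + t * (pz - lz))

light = (0.0, 10.0, 0.0)                  # point light directly overhead
print(shadow_on_ground(light, (1.0, 5.0, 1.0)))   # (2.0, 0.0, 2.0)
```

Applying this to every vertex of a polygon yields the shadow polygon; if the light moves, the same computation would be redone (e.g. in the idle callback).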
Example Questions for Chapter 4
Exam Jun 2011 3.3 (4 marks)
Differentiate between orthographic and perspective projections in terms of projectors and the
projection plane.
Exam Jun 2012 3.a (4 marks)
Define the term View Volume with respect to computer graphics and with reference to both
perspective and orthogonal views.
Exam Nov 2012 1.3 (6 marks)
Define the term View Volume with reference to both perspective and orthogonal views. Provide
the OpenGL functions that are used to define the respective view volumes.
Exam Jun 2011 3.b (4 marks)
Orthogonal, oblique and axonometric view scenes are all parallel view scenes. Explain the
differences between orthogonal, axonometric, and oblique view scenes.
Exam June 2013 3.1 (1 mark)
Explain what is meant by non-uniform foreshortening of objects under perspective camera.
Exam June 2013 3.2 (3 marks)
What is the purpose of projection normalization in the computer graphics pipeline? Name one
advantage of using this technique.
Exam June 2013 3.3 (4 marks)
Draw a view Frustum. Position and name the three important rectangular planes at their correct
positions. Make sure that the position of the origin and the orientation of the z-axis are clearly
distinguishable. State the name of the coordinate system (or frame) in which the view frustum is
defined.
Exam June 2013 6.1 (2 marks)
Draw a picture of a set of simple polygons that the Depth sort algorithm cannot render without
splitting the polygons.
Exam June 2013 6.2 (3 marks)
Why can't the standard z-buffer algorithm handle scenes with both opaque and translucent objects?
What modifications can be made to the algorithm for it to handle this?
Exam June 2012 3.a (4 marks)
Hidden surface removal can be divided into two broad classes. State and explain each of these
classes.
Exam June 2012 3.b (4 marks)
Explain the problem of rendering translucent objects using the z-buffer algorithm, and describe how
the algorithm can be adapted to deal with this problem (without sorting the polygons).
Exam June 2012 4.a (4 marks)
What is parallel projection? What specialty do orthogonal projections provide? What is the
advantage of the normalization transformation process?
Exam June 2012 4.b (2 marks)
Why are projections produced by parallel and perspective viewing known as planar geometric
projections?
Exam June 2012 4.c (4 marks)
The specification of the orientation of a synthetic camera can be divided into the specification of the
view reference point (VRP), view-plane normal (VPN) and the view-up-vector (VUP). Explain each of
these?
Exam Nov 2012 3.1 (6 marks)
Differentiate between Depth sort and z-buffer algorithms for hidden surface removal.
Exam Nov 2012 3.2 (6 marks)
Briefly describe, with any appropriate equations, the algorithm for removing (or culling) back-facing
polygons. Assume that the normal points out from the visible side of the polygon.
Chapter 5 Lighting and Shading
A surface can either emit light by self-emission, or reflect light from other surfaces that illuminate it.
Some surfaces can do both.
Rendering equation - to represent lighting correctly we would need a recursive computation that
blends light between sources; mathematically this is described by the rendering equation. There
are various approximations of this equation using ray tracing; unfortunately, these methods cannot
render scenes at the rate at which we can pass polygons through the modeling-projection pipeline.
For rendering-pipeline architectures we focus on a simpler model, based on the Phong
reflection model, that provides a compromise between physical correctness and efficient calculation.
Rather than looking at a global energy balance, we follow rays of light from light-emitting (or self-
luminous) surfaces that we call light sources. We then model what happens to these rays as they
interact with reflecting surfaces in the scene. This approach is similar to ray tracing, but we consider
only single interactions between light sources and surfaces.
2 independent parts of the problem
1. Model the light sources in the scene
2. Build a reflection model that deals with the interactions between materials and light.
We need to consider only those rays that leave the source and reach the viewer's eye (either directly
or through interactions with objects). These are the rays that reach the center of projection (COP)
after passing through the clipping rectangle.
Interactions between light and materials can be classified into the three groups depicted below.
1. Specular Surfaces - appear shiny because most of the light that is reflected or scattered is
in a narrow range of angles close to the angle of reflection. Mirrors are perfectly specular
surfaces.
2. Diffuse Surfaces - characterized by reflected light being scattered in all directions. Walls
painted with matte paint are diffuse reflectors.
3. Translucent Surfaces - allow some light to penetrate the surface and to emerge from
another location on the object. The process of refraction characterizes glass and water.
5.1 Light and Matter
There are 4 basic types of light sources
1. Ambient Lighting
2. Point Sources
3. Spotlights
4. Distant Lights
These four lighting types are sufficient for rendering most simple scenes.
5.2 Light Sources
Ambient Light
Ambient light produces light of constant intensity throughout the scene. All objects are illuminated
from all sides.
Point Sources
Point sources emit light equally in all directions, but the intensity of the light diminishes with the
distance between the light and the objects it illuminates. Surfaces facing away from the light source
are not illuminated.
Umbra - the area that is fully in shadow
Penumbra - the area that is partially in shadow
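The falloff of a point source with distance is commonly modeled with a quadratic attenuation term; a small sketch (the coefficient names a, b, c and their defaults are illustrative, not fixed by these notes):

```python
def attenuated_intensity(intensity, distance, a=1.0, b=0.0, c=0.0):
    """Classic point-light attenuation: I / (a + b*d + c*d^2).
    The default coefficients here are placeholders."""
    return intensity / (a + b * distance + c * distance ** 2)

# Pure inverse-square falloff (a = b = 0, c = 1): doubling the
# distance quarters the intensity.
print(attenuated_intensity(100.0, 2.0, a=0.0, b=0.0, c=1.0))  # -> 25.0
```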
Spotlights
A spot light source is similar to a point light source except that its illumination is restricted to a cone
in a particular direction.
Spotlights are characterized by a narrow range of angles through which light is emitted. More
realistic spotlights are characterized by the distribution of light within the cone - usually with most
of the light concentrated in the center of the cone.
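The cone restriction plus the concentration toward the center can be sketched as a cutoff test with a cosine-power falloff (names and the exponent convention are our own assumptions):

```python
import math

def spot_factor(cos_angle, cos_cutoff, e):
    """cos_angle: cosine of the angle between the cone axis and the
    direction to the surface point. Outside the cone the spotlight
    contributes nothing; inside, intensity falls off as cos^e toward
    the cone's edge (larger e = tighter concentration)."""
    if cos_angle < cos_cutoff:
        return 0.0
    return cos_angle ** e

cutoff = math.cos(math.radians(30))       # 30-degree half-angle cone
print(spot_factor(1.0, cutoff, 2.0))      # on the axis -> 1.0
print(spot_factor(0.5, cutoff, 2.0))      # 60 degrees off-axis -> 0.0
```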
Distant Light Sources
A distant light source is like a point light source except that the rays of light are all parallel.
Most shading calculations require the direction from the point on the surface to the light source
position. As we move across a surface, calculating the intensity at each point, we would have to
recompute this vector repeatedly - a computation that is a significant part of the shading calculation.
Distant light sources can therefore be handled faster than near light sources (see pg. 294 for parallel light).
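The saving is that a distant (directional) light has one fixed direction for every surface point, whereas a point light's direction must be recomputed per point - a small sketch, with our own vector helpers:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def point_light_dir(surface_point, light_pos):
    # Point source: the direction depends on the surface point, so
    # this runs once per shaded point.
    return normalize(tuple(l - p for l, p in zip(light_pos, surface_point)))

# Distant source: one fixed direction, computed once and reused
# for every point on every surface.
distant_dir = normalize((0.0, 1.0, 0.0))

print(point_light_dir((0.0, 0.0, 0.0), (0.0, 5.0, 0.0)))  # -> (0.0, 1.0, 0.0)
print(distant_dir)                                        # -> (0.0, 1.0, 0.0)
```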
5.3 - Phong Reflection Model
The Phong model uses 4 vectors to calculate a color for an arbitrary point p on a surface.
1. l - from p to the light source
2. n - the normal at p
3. v - from p to the viewer
4. r - the reflection of l about n
The Phong model supports the three types of material-light interactions:
1. Ambient light: I_a = k_a L_a, where k_a is the ambient reflection coefficient and L_a is the ambient light term
2. Diffuse light: I_d = k_d (l . n) L_d
3. Specular light: I_s = k_s L_s max((r . v)^a, 0), where a is a shininess coefficient
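The three terms above combine per light source as I = I_a + I_d + I_s; a hedged single-channel sketch (coefficient names and the example values are our own):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(n, l, v, ka, kd, ks, La, Ld, Ls, shininess):
    """n, l, v: unit normal, light and viewer directions at point p.
    Returns the scalar Phong intensity for one color channel."""
    # r: reflection of l about n, r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    ambient = ka * La
    diffuse = kd * max(dot(l, n), 0.0) * Ld
    specular = ks * (max(dot(r, v), 0.0) ** shininess) * Ls
    return ambient + diffuse + specular

# Light and viewer both along the normal: r == l == v, so the
# specular term contributes at full strength.
n = l = v = (0.0, 0.0, 1.0)
print(phong(n, l, v, ka=0.1, kd=0.5, ks=0.4,
            La=1.0, Ld=1.0, Ls=1.0, shininess=32))
```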