
3D Rendering

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Fri, 31 Jan 2014 12:14:57 UTC

Contents

Articles

Preface
    3D rendering

Concepts
    Alpha mapping
    Ambient occlusion
    Anisotropic filtering
    Back-face culling
    Beam tracing
    Bidirectional texture function
    Bilinear filtering
    Binary space partitioning
    Bounding interval hierarchy
    Bounding volume
    Bump mapping
    Catmull–Clark subdivision surface
    Conversion between quaternions and Euler angles
    Cube mapping
    Diffuse reflection
    Displacement mapping
    Doo–Sabin subdivision surface
    Edge loop
    Euler operator
    False radiosity
    Fragment
    Geometry pipelines
    Geometry processing
    Global illumination
    Gouraud shading
    Graphics pipeline
    Hidden line removal
    Hidden surface determination
    High dynamic range rendering
    Image-based lighting
    Image plane
    Irregular Z-buffer
    Isosurface
    Lambert's cosine law
    Lambertian reflectance
    Level of detail
    Mipmap
    Newell's algorithm
    Non-uniform rational B-spline
    Normal
    Normal mapping
    Oren–Nayar reflectance model
    Painter's algorithm
    Parallax mapping
    Particle system
    Path tracing
    Per-pixel lighting
    Phong reflection model
    Phong shading
    Photon mapping
    Polygon
    Potentially visible set
    Precomputed Radiance Transfer
    Procedural generation
    Procedural texture
    3D projection
    Quaternions and spatial rotation
    Radiosity
    Ray casting
    Ray tracing
    Reflection
    Reflection mapping
    Relief mapping
    Render Output unit
    Rendering
    Retained mode
    Scanline rendering
    Schlick's approximation
    Screen Space Ambient Occlusion
    Self-shadowing
    Shadow mapping
    Shadow volume
    Silhouette edge
    Spectral rendering
    Specular highlight
    Specularity
    Sphere mapping
    Stencil buffer
    Stencil codes
    Subdivision surface
    Subsurface scattering
    Surface caching
    Texel
    Texture atlas
    Texture filtering
    Texture mapping
    Texture synthesis
    Tiled rendering
    UV mapping
    UVW mapping
    Vertex
    Vertex Buffer Object
    Vertex normal
    Viewing frustum
    Virtual actor
    Volume rendering
    Volumetric lighting
    Voxel
    Z-buffering
    Z-fighting

Appendix
    3D computer graphics software

References
    Article Sources and Contributors
    Image Sources, Licenses and Contributors

Article Licenses
    License

Preface
3D rendering

3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D
images with 3D photorealistic effects or non-photorealistic rendering on a computer.


Rendering methods
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be
compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often
specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe
rendering through polygon-based rendering, to more advanced techniques such as: scanline rendering, ray tracing, or
radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different
methods are better suited for either photo-realistic rendering, or real-time rendering.

Real-time
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second (i.e. in one frame: in the case of 30 frame-per-second animation, a frame encompasses one 30th of a second). The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, as that is the minimum the human eye needs to see to successfully create the illusion of movement). In fact, exploitations can be applied in the way the eye 'perceives' the world, and as a result the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.

A screenshot from Second Life, an example of a modern simulation which renders frames in real time.


Non real-time
Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

An example of a ray-traced image that typically takes seconds or minutes to render. Computer-generated image created by Gilles Tran.

Reflection and shading models


Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering. Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be confused with Phong shading). In refraction of light, an important concept is the refractive index; in most 3D programming implementations, the term for this value is "index of refraction", usually abbreviated "IOR". Shading can be broken down into two orthogonal issues, which are often studied independently:
Reflection/Scattering - How light interacts with the surface at a given point
Shading - How material properties vary across the surface
Reflection
Reflection or scattering is the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function or BSDF. Popular reflection rendering techniques in 3D computer graphics include:
Flat shading: A technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source (a minimal sketch follows this list).
Gouraud shading: Invented by H. Gouraud in 1971, a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces.
Texture mapping: A technique for simulating a large amount of surface detail by mapping images (textures) onto polygons.
Phong shading: Invented by Bui Tuong Phong, used to simulate specular highlights and smooth shaded surfaces.
Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.
Cel shading: A technique used to imitate the look of hand-drawn animation.

The Utah teapot
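As a minimal illustration of the flat shading entry above, the sketch below computes a single Lambertian intensity for an entire polygon from its normal and a point light; the vector type and helper functions are illustrative, not part of any particular API. Gouraud shading would evaluate the same expression per vertex and interpolate the results across the polygon.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    static Vec3 normalize(const Vec3& v) {
        double len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // One intensity for the whole face: the cosine of the angle between the
    // polygon's normal and the direction to the light, scaled by the light's
    // intensity and clamped at zero for faces pointing away from the light.
    double flatShade(const Vec3& polygonCenter, const Vec3& polygonNormal,
                     const Vec3& lightPosition, double lightIntensity) {
        Vec3 toLight = normalize({ lightPosition.x - polygonCenter.x,
                                   lightPosition.y - polygonCenter.y,
                                   lightPosition.z - polygonCenter.z });
        double cosAngle = std::max(0.0, dot(normalize(polygonNormal), toLight));
        return lightIntensity * cosAngle;
    }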
Shading
Shading addresses how different types of scattering are distributed across the surface (i.e., which scattering function
applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is
some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.) A
simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a
surface, giving it more apparent detail.


Transport
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of
light transport.

Projection
The shaded three-dimensional objects must be flattened so that the display device - namely a monitor - can display them in only two dimensions; this process is called 3D projection. This is done using projection and, for most applications, perspective projection. The basic idea behind perspective projection is that objects that are further away are made smaller in relation to those that are closer to the eye. Programs produce perspective by multiplying a dilation constant raised to the power of the negative of the distance from the observer. A dilation constant of one means that there is no perspective. High dilation constants can cause a "fish-eye" effect in which image distortion begins to occur. Orthographic projection is used mainly in CAD or CAM applications where scientific modeling requires precise measurements and preservation of the third dimension.

Perspective projection
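The text above describes perspective in terms of a dilation constant; a common alternative formulation, shown here purely as an illustration with made-up names, scales a camera-space point by a focal length divided by its depth, so that distant points shrink toward the centre of the image, while orthographic projection simply drops the depth coordinate.

    struct Vec3 { double x, y, z; };
    struct Vec2 { double x, y; };

    // Minimal perspective projection sketch: the camera sits at the origin looking
    // down the negative z axis, and 'focalLength' plays the role of the scaling
    // constant. Points further away (more negative z) are scaled down more.
    Vec2 projectPerspective(const Vec3& p, double focalLength) {
        double scale = focalLength / -p.z;   // assumes p.z < 0, i.e. in front of the camera
        return { p.x * scale, p.y * scale };
    }

    // Orthographic projection preserves sizes regardless of distance.
    Vec2 projectOrthographic(const Vec3& p) {
        return { p.x, p.y };
    }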

External links

A Critical History of Computer Graphics and Animation [2]
Architectural 3D Rendering [3]
How Stuff Works - 3D Graphics [4]
History of Computer Graphics series of articles [5]

References

[2] http://accad.osu.edu/~waynec/history/lessons.html
[3] http://www.3dpower.in/services/architectural-visualisation/3d-architectural-rendering
[4] http://computer.howstuffworks.com/3dgraphics.htm
[5] http://hem.passagen.se/des/hocg/hocg_1960.htm

Concepts
Alpha mapping
Alpha mapping is a technique in 3D computer graphics where an image is mapped (assigned) to a 3D object, and
designates certain areas of the object to be transparent or translucent. The transparency can vary in strength, based on
the image texture, which can be greyscale, or the alpha channel of an RGBA image texture.
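As a small sketch of the idea (with illustrative types and a made-up texel layout, not a specific API), a greyscale alpha map can simply be multiplied into the opacity of the surface colour before blending:

    #include <cstddef>
    #include <vector>

    struct RGBA { float r, g, b, a; };

    // 'alphaMap' is a row-major greyscale image with values in [0, 1]; 0 marks
    // fully transparent areas of the object and 1 fully opaque ones. The
    // resulting alpha is what the blending stage later uses for transparency.
    RGBA applyAlphaMap(RGBA baseColor,
                       const std::vector<float>& alphaMap,
                       std::size_t mapWidth, std::size_t texelX, std::size_t texelY) {
        baseColor.a *= alphaMap[texelY * mapWidth + texelX];
        return baseColor;
    }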

Ambient occlusion
In computer graphics, ambient occlusion is used to represent how
exposed each point in a scene is to ambient lighting. So the enclosed
inside of a tube is typically more occluded (and hence darker) than the
exposed outer surfaces; and deeper inside the tube, the more occluded
(and darker) it becomes. The result is diffuse, non-directional lighting
throughout the scene, casting no clear shadows, but with enclosed and
sheltered areas darkened. In this way, it attempts to approximate the
way light radiates in real life, especially off what are normally
considered non-reflective surfaces.
Unlike local methods like Phong shading, ambient occlusion is a global
method, meaning the illumination at each point is a function of other
geometry in the scene. However, it is a very crude approximation to
full global illumination. The soft appearance achieved by ambient
occlusion alone is similar to the way an object appears on an overcast
day.

Implementation
In real-time games, Screen space ambient occlusion can be used as a faster approximation of true ambient occlusion, using pixel depth rather than scene geometry to form an ambient occlusion map. However, newer technologies are making true ambient occlusion feasible even in real-time.

The ambient occlusion map for this scene darkens only the innermost angles of corners.
Ambient occlusion is related to accessibility shading, which determines appearance based on how easy it is for a
surface to be touched by various elements (e.g., dirt, light, etc.). It has been popularized in production animation due
to its relative simplicity and efficiency. In the industry, ambient occlusion is often referred to as "sky light".[citation needed]

The ambient occlusion shading model has the nice property of offering a better perception of the 3d shape of the
displayed objects. This was shown in a paper where the authors report the results of perceptual experiments showing
that depth discrimination under diffuse uniform sky lighting is superior to that predicted by a direct lighting model.
The occlusion A_p at a point p on a surface with normal n can be computed by integrating the visibility function over the hemisphere Ω with respect to projected solid angle:

    A_p = (1/π) ∫_Ω V_{p,ω} (n · ω) dω

where V_{p,ω} is the visibility function at p, defined to be zero if p is occluded in the direction ω and one otherwise, and dω is the infinitesimal solid angle step of the integration variable ω. A variety of techniques are used to approximate this integral in practice: perhaps the most straightforward way is to use the Monte Carlo method by casting rays from the point p and testing for intersection with other scene geometry (i.e., ray casting). Another approach (more suited to hardware acceleration) is to render the view from p by rasterizing black geometry against a white background and taking the (cosine-weighted) average of rasterized fragments. This approach is an example of a "gathering" or "inside-out" approach, whereas other algorithms (such as depth-map ambient occlusion) employ "scattering" or "outside-in" techniques.
In addition to the ambient occlusion value, a "bent normal" vector
is often generated, which points in the average
direction of unoccluded samples. The bent normal can be used to look up incident radiance from an environment
map to approximate image-based lighting. However, there are some situations in which the direction of the bent
normal is a misrepresentation of the dominant direction of illumination, e.g.,

In this example the bent normal Nb has an unfortunate direction, since it is pointing at an occluded surface.

In this example, light may reach the point p only from the left or right sides, but the bent normal points to the
average of those two sources, which is, unfortunately, directly toward the obstruction.


Recognition
In 2010, Hayden Landis, Ken McGaugh and Hilmar Koch were awarded a Scientific and Technical Academy Award
for their work on ambient occlusion rendering.[1]

References
[1] Oscar 2010: Scientific and Technical Awards (http://www.altfg.com/blog/awards/oscar-2010-scientific-and-technical-awards-489/), Alt Film Guide, Jan 7, 2010

External links
Depth Map based Ambient Occlusion (http://www.andrew-whitehurst.net/amb_occlude.html)
NVIDIA's accurate, real-time Ambient Occlusion Volumes (http://research.nvidia.com/publication/
ambient-occlusion-volumes)
Assorted notes about ambient occlusion (http://www.cs.unc.edu/~coombe/research/ao/)
Ambient Occlusion Fields (http://www.tml.hut.fi/~janne/aofields/) real-time ambient occlusion using
cube maps
PantaRay ambient occlusion used in the movie Avatar (http://research.nvidia.com/publication/
pantaray-fast-ray-traced-occlusion-caching-massive-scenes)
Fast Precomputed Ambient Occlusion for Proximity Shadows (http://hal.inria.fr/inria-00379385) real-time
ambient occlusion using volume textures
Dynamic Ambient Occlusion and Indirect Lighting (http://download.nvidia.com/developer/GPU_Gems_2/
GPU_Gems2_ch14.pdf) a real time self ambient occlusion method from Nvidia's GPU Gems 2 book
GPU Gems 3 : Chapter 12. High-Quality Ambient Occlusion (http://http.developer.nvidia.com/GPUGems3/
gpugems3_ch12.html)
ShadeVis (http://vcg.sourceforge.net/index.php/ShadeVis) an open source tool for computing ambient
occlusion
xNormal (http://www.xnormal.net) A free normal mapper/ambient occlusion baking application
3dsMax Ambient Occlusion Map Baking (http://www.mrbluesummers.com/893/video-tutorials/
baking-ambient-occlusion-in-3dsmax-monday-movie) Demo video about preparing ambient occlusion in 3dsMax

Anisotropic filtering
In 3D computer graphics, anisotropic filtering (abbreviated AF) is a method of enhancing the image quality of
textures on surfaces of computer graphics that are at oblique viewing angles with respect to the camera where the
projection of the texture (not the polygon or other primitive on which it is rendered) appears to be non-orthogonal
(thus the origin of the word: "an" for not, "iso" for same, and "tropic" from tropism, relating to direction; anisotropic
filtering does not filter the same in every direction).
Like bilinear and trilinear filtering, anisotropic filtering eliminates aliasing effects, but improves on these other
techniques by reducing blur and preserving detail at extreme viewing angles.
Anisotropic filtering is relatively intensive (primarily memory bandwidth and to some degree computationally,
though the standard space-time tradeoff rules apply) and only became a standard feature of consumer-level graphics
cards in the late 1990s. Anisotropic filtering is now common in modern graphics hardware (and video driver
software) and is enabled either by users through driver settings or by graphics applications and video games through
programming interfaces.

An improvement on isotropic MIP mapping


From this point forth, it is assumed the reader is
familiar with MIP mapping.
If we were to explore a more approximate anisotropic
algorithm, RIP mapping, as an extension from MIP
mapping, we can understand how anisotropic filtering
gains so much texture mapping quality. If we need to
texture a horizontal plane which is at an oblique angle
to the camera, traditional MIP map minification
would give us insufficient horizontal resolution due to
the reduction of image frequency in the vertical axis.
This is because in MIP mapping each MIP level is
isotropic, so a 256 256 texture is downsized to a 128
128 image, then a 64 64 image and so on, so
resolution halves on each axis simultaneously, so a
MIP map texture probe to an image will always
sample an image that is of equal frequency in each
axis. Thus, when sampling to avoid aliasing on a
high-frequency axis, the other texture axes will be
similarly downsampled and therefore potentially
blurred.

An example of anisotropic mipmap image storage: the principal image


on the top left is accompanied by filtered, linearly transformed copies
of reduced size. (click to compare to previous, isotropic mipmaps of
the same image)

With RIP map anisotropic filtering, in addition to downsampling to 128 128, images are also sampled to 256 128
and 32 128 etc. These anisotropically downsampled images can be probed when the texture-mapped image
frequency is different for each texture axis. Therefore, one axis need not blur due to the screen frequency of another
axis, and aliasing is still avoided. Unlike more general anisotropic filtering, the RIP mapping described for
illustration is limited by only supporting anisotropic probes that are axis-aligned in texture space, so diagonal
anisotropy still presents a problem, even though real-use cases of anisotropic texture commonly have such
screenspace mappings.
In layman's terms, anisotropic filtering retains the "sharpness" of a texture normally lost by MIP map texture's
attempts to avoid aliasing. Anisotropic filtering can therefore be said to maintain crisp texture detail at all viewing orientations while providing fast anti-aliased texture filtering.

Degree of anisotropy supported


Different degrees or ratios of anisotropic filtering can be applied during rendering and current hardware rendering
implementations set an upper bound on this ratio. This degree refers to the maximum ratio of anisotropy supported
by the filtering process. So, for example 4:1 (pronounced 4-to-1) anisotropic filtering will continue to sharpen more
oblique textures beyond the range sharpened by 2:1.
In practice what this means is that in highly oblique texturing situations a 4:1 filter will be twice as sharp as a 2:1
filter (it will display frequencies double that of the 2:1 filter). However, most of the scene will not require the 4:1
filter; only the more oblique and usually more distant pixels will require the sharper filtering. This means that as the
degree of anisotropic filtering continues to double there are diminishing returns in terms of visible quality with fewer
and fewer rendered pixels affected, and the results become less obvious to the viewer.
When one compares the rendered results of an 8:1 anisotropically filtered scene to a 16:1 filtered scene, only a
relatively few highly oblique pixels, mostly on more distant geometry, will display visibly sharper textures in the
scene with the higher degree of anisotropic filtering, and the frequency information on these few 16:1 filtered pixels
will only be double that of the 8:1 filter. The performance penalty also diminishes because fewer pixels require the
data fetches of greater anisotropy.
In the end it is the additional hardware complexity vs. these diminishing returns, which causes an upper bound to be
set on the anisotropic quality in a hardware design. Applications and users are then free to adjust this trade-off
through driver and software settings up to this threshold.

Implementation
True anisotropic filtering probes the texture anisotropically on the fly on a per-pixel basis for any orientation of
anisotropy.
In graphics hardware, typically when the texture is sampled anisotropically, several probes (texel samples) of the
texture around the center point are taken, but on a sample pattern mapped according to the projected shape of the
texture at that pixel.
Each anisotropic filtering probe is often in itself a filtered MIP map sample, which adds more sampling to the
process. Sixteen trilinear anisotropic samples might require 128 samples from the stored texture, as trilinear MIP
map filtering needs to take four samples times two MIP levels and then anisotropic sampling (at 16-tap) needs to
take sixteen of these trilinear filtered probes.
However, this level of filtering complexity is not required all the time. There are commonly available methods to
reduce the amount of work the video rendering hardware must do.
The anisotropic filtering method most commonly implemented on graphics hardware is the composition of the
filtered pixel values from only one line of MIP map samples, which is referred to as "footprint assembly".[1]
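To make the idea concrete, the sketch below shows a heavily simplified, software-style version of footprint-assembly sampling: it estimates the pixel footprint from the texture-coordinate derivatives, chooses a probe count from the anisotropy ratio, and averages several trilinear probes spaced along the footprint's major axis. Real hardware differs in many details, and the helper passed in ('sampleTrilinear') is assumed rather than taken from any actual API.

    #include <algorithm>
    #include <cmath>
    #include <functional>

    // (dudx, dvdx) and (dudy, dvdy) are the screen-space derivatives of the
    // texture coordinates; sampleTrilinear(u, v, lod) performs an ordinary
    // trilinear MIP-mapped lookup.
    double sampleAnisotropic(double u, double v,
                             double dudx, double dvdx,
                             double dudy, double dvdy,
                             int maxAnisotropy,
                             const std::function<double(double, double, double)>& sampleTrilinear) {
        double lenX = std::sqrt(dudx * dudx + dvdx * dvdx);   // footprint extent along screen x
        double lenY = std::sqrt(dudy * dudy + dvdy * dvdy);   // footprint extent along screen y
        double major = std::max(lenX, lenY);
        double minor = std::max(std::min(lenX, lenY), 1e-8);
        int probes = (int)std::min(std::ceil(major / minor), (double)maxAnisotropy);
        // Each probe covers roughly major/probes of the footprint, so the MIP
        // level is chosen from that length rather than from the full major axis.
        double lod = std::log2(std::max(major / probes, 1e-8));
        double axisU = (lenX > lenY) ? dudx : dudy;           // major axis in texture space
        double axisV = (lenX > lenY) ? dvdx : dvdy;
        double sum = 0.0;
        for (int i = 0; i < probes; ++i) {
            double t = (i + 0.5) / probes - 0.5;              // offsets in (-0.5, 0.5)
            sum += sampleTrilinear(u + t * axisU, v + t * axisV, lod);
        }
        return sum / probes;
    }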


Performance and optimization


The sample count required can make anisotropic filtering extremely bandwidth-intensive. Multiple textures are
common; each texture sample could be four bytes or more, so each anisotropic pixel could require 512 bytes from
texture memory, although texture compression is commonly used to reduce this.
A video display device can easily contain over two million pixels, and desired application framerates are often
upwards of 30 frames per second. As a result, the required texture memory bandwidth may grow to large values.
Ranges of hundreds of gigabytes per second of pipeline bandwidth for texture rendering operations is not unusual
where anisotropic filtering operations are involved.[citation needed]
Fortunately, several factors mitigate in favor of better performance:
The probes themselves share cached texture samples, both inter-pixel and intra-pixel.
Even with 16-tap anisotropic filtering, not all 16 taps are always needed because only distant highly oblique pixel
fills tend to be highly anisotropic.
Highly anisotropic pixel fill tends to cover small regions of the screen (i.e. generally under 10%).
Texture magnification filters (as a general rule) require no anisotropic filtering.

References
[1] Schilling, A.; Knittel, G., May 22, 2001

External links
The Naked Truth About Anisotropic Filtering (http://www.extremetech.com/computing/
51994-the-naked-truth-about-anisotropic-filtering)

Back-face culling
In computer graphics, back-face culling determines whether a polygon of a graphical object is visible. It is a step in
the graphical pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order
when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, then a polygon whose projection on the screen has a counter-clockwise winding has been rotated to face away from the camera and will not be drawn.
The process makes rendering objects quicker and more efficient by reducing the number of polygons for the program
to draw. For example, in a city street scene, there is generally no need to draw the polygons on the sides of the
buildings facing away from the camera; they are completely occluded by the sides facing the camera.
A related technique is clipping, which determines whether polygons are within the camera's field of view at all.
Another similar technique is Z-culling, also known as occlusion culling, which attempts to skip the drawing of
polygons which are covered from the viewpoint by other visible polygons.
This technique only works with single-sided polygons, which are only visible from one side. Double-sided polygons
are rendered from both sides, and thus have no back-face to cull.
One method of implementing back-face culling is by discarding all polygons where the dot product of their surface
normal and the camera-to-polygon vector is greater than or equal to zero.
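A minimal sketch of that test (with an illustrative vector type, not a specific engine's API):

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // A polygon is a back face when its surface normal and the vector from the
    // camera to (any point on) the polygon point in the same general direction,
    // i.e. their dot product is greater than or equal to zero.
    bool isBackFace(const Vec3& surfaceNormal, const Vec3& pointOnPolygon, const Vec3& cameraPosition) {
        Vec3 cameraToPolygon = { pointOnPolygon.x - cameraPosition.x,
                                 pointOnPolygon.y - cameraPosition.y,
                                 pointOnPolygon.z - cameraPosition.z };
        return dot(surfaceNormal, cameraToPolygon) >= 0.0;
    }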


Further reading
Geometry Culling in 3D Engines [1], by Pietari Laurila

References
[1] http://www.gamedev.net/reference/articles/article1212.asp

Beam tracing
Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to
render 3D scenes, but it has been also used in other similar areas such as acoustics and electromagnetism
simulations.
Beam tracing is a derivative of the ray tracing algorithm that replaces rays, which have no thickness, with beams.
Beams are shaped like unbounded pyramids, with (possibly complex) polygonal cross sections. Beam tracing was
first proposed by Paul Heckbert and Pat Hanrahan.[1]
In beam tracing, a pyramidal beam is initially cast through the entire viewing frustum. This initial viewing beam is
intersected with each polygon in the environment, typically from nearest to farthest. Each polygon that intersects
with the beam must be visible, and is removed from the shape of the beam and added to a render queue. When a
beam intersects with a reflective or refractive polygon, a new beam is created in a similar fashion to ray-tracing.
A variant of beam tracing casts a pyramidal beam through each pixel of the image plane. This is then split up into
sub-beams based on its intersection with scene geometry. Reflection and transmission (refraction) rays are also
replaced by beams. This sort of implementation is rarely used, as the geometric processes involved are much more
complex and therefore expensive than simply casting more rays through the pixel. Cone tracing is a similar
technique using a cone instead of a complex pyramid.
Beam tracing solves certain problems related to sampling and aliasing, which can plague conventional ray tracing
approaches.[2] Since beam tracing effectively calculates the path of every possible ray within each beam [3] (which
can be viewed as a dense bundle of adjacent rays), it is not as prone to under-sampling (missing rays) or
over-sampling (wasted computational resources). The computational complexity associated with beams has made
them unpopular for many visualization applications. In recent years, Monte Carlo algorithms like distributed ray
tracing (and Metropolis light transport?) have become more popular for rendering calculations.
A 'backwards' variant of beam tracing casts beams from the light source into the environment. Similar to backwards
raytracing and photon mapping, backwards beam tracing may be used to efficiently model lighting effects such as
caustics.[4] Recently the backwards beam tracing technique has also been extended to handle glossy to diffuse
material interactions (glossy backward beam tracing) such as from polished metal surfaces.[5]
Beam tracing has been successfully applied to the fields of acoustic modelling[6] and electromagnetic propagation
modelling.[7] In both of these applications, beams are used as an efficient way to track deep reflections from a source
to a receiver (or vice-versa). Beams can provide a convenient and compact way to represent visibility. Once a beam
tree has been calculated, one can use it to readily account for moving transmitters or receivers.
Beam tracing is related in concept to cone tracing.


References
[1] P. S. Heckbert and P. Hanrahan, "Beam tracing polygonal objects (http://www.eng.utah.edu/~cs7940/papers/p119-heckbert.pdf)", Computer Graphics 18(3), 119-127 (1984).
[2] A. Lehnert, "Systematic errors of the ray-tracing algorithm", Applied Acoustics 38, 207-221 (1993).
[3] Steven Fortune, "Topological Beam Tracing", Symposium on Computational Geometry 1999: 59-68
[4] M. Watt, "Light-water interaction using backwards beam tracing", in "Proceedings of the 17th annual conference on Computer graphics and interactive techniques (SIGGRAPH '90)", 377-385 (1990).
[5] B. Duvenhage, K. Bouatouch, and D.G. Kourie, "Exploring the use of Glossy Light Volumes for Interactive Global Illumination", in
"Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa", 2010.
[6] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West, "A beam tracing approach to acoustic modelling for interactive
virtual environments", in Proceedings of the 25th annual conference on Computer graphics and interactive techniques (SIGGRAPH'98),
21-32 (1998).
[7] Steven Fortune, "A Beam-Tracing Algorithm for Prediction of Indoor Radio Propagation", in WACG 1996: 157-166

Bidirectional texture function


Bidirectional texture function (BTF) is a 7-dimensional function depending on planar texture coordinates (x,y) as
well as on view and illumination spherical angles. In practice this function is obtained as a set of several thousand
color images of a material sample taken at different camera and light positions.
The BTF is a representation of the appearance of texture as a function of viewing and illumination direction. It is an
image-based representation, since the geometry of the surface is unknown and not measured. BTF is typically
captured by imaging the surface at a sampling of the hemisphere of possible viewing and illumination directions.
BTF measurements are collections of images. The term BTF was first introduced in and similar terms have since
been introduced including BSSRDF and SBRDF (spatial BRDF). SBRDF has a very similar definition to BTF, i.e.
BTF is also a spatially varying BRDF.
To cope with massive BTF data with high redundancy, many compression methods have been proposed.
Application of the BTF is in photorealistic material rendering of objects in virtual reality systems and for visual
scene analysis, e.g., recognition of complex real-world materials using bidirectional feature histograms or 3D
textons.
Biomedical and biometric applications of the BTF include recognition of skin texture.

References


Bilinear filtering
Bilinear filtering is a texture filtering method used to smooth textures when displayed larger or smaller than they actually are.
Most of the time, when drawing a textured shape on the screen, the texture is not displayed exactly as it is stored, without any distortion. Because of this, most pixels will end up needing to use a point on the texture that is "between" texels, assuming the texels are points (as opposed to, say, squares) in the middle (or on the upper left corner, or anywhere else; it does not matter, as long as it is consistent) of their respective "cells". Bilinear filtering uses these points to perform bilinear interpolation between the four texels nearest to the point that the pixel represents (in the middle or upper left of the pixel, usually).

A zoomed small portion of a bitmap, using nearest-neighbor filtering (left), bilinear filtering (center), and bicubic filtering (right).

The formula
In a mathematical context, bilinear interpolation is the problem of finding a function f(x,y) of the form
f(x,y) = c11xy + c10x + c01y + c00 satisfying
f(x1,y1) = z11, f(x1,y2) = z12, f(x2,y1) = z21, and f(x2,y2) = z22.
The usual, and usually computationally least expensive way to compute f is through linear interpolation used twice,
for example to compute two functions f1 and f2, satisfying
f1(y1) = z11, f1(y2) = z12, f2(y1) = z21, and f2(y2) = z22,
and then to combine these functions (which are linear in y) into one function f satisfying
f(x1,y) = f1(y), and f(x2,y) = f2(y).
In computer graphics, bilinear filtering is usually performed on a texture during texture mapping, or on a bitmap
during resizing. In both cases, the source data (bitmap or texture) can be seen as a two-dimensional array of values
zij, or several (usually three) of these in the case of full-color data. The data points used in bilinear filtering are the
2x2 points surrounding the location for which the color is to be interpolated.
Additionally, one does not have to compute the actual coefficients of the function f; computing the value f(x,y) is
sufficient.
The largest integer not larger than x shall be called [x], and the fractional part of x shall be {x}. Then, x = [x] + {x},
and {x} < 1. We have x1 = [x], x2 = [x] + 1, y1 = [y], y2 = [y] + 1. The data points used for interpolation are taken
from the texture / bitmap and assigned to z11, z12, z21, and z22.
f1(y1) = z11, f1(y2) = z12 are the two data points for f1; subtracting the former from the latter yields
f1(y2) - f1(y1) = z12 - z11.
Because f1 is linear, its derivative is constant and equal to
(z12 - z11) / (y2 - y1) = z12 - z11.
Because f1(y1) = z11,
f1(y1 + {y}) = z11 + {y}(z12 - z11),
and similarly,
f2(y1 + {y}) = z21 + {y}(z22 - z21).
Because y1 + {y} = y, we have computed the endpoints f1(y) and f2(y) needed for the second interpolation step.
The second step is to compute f(x,y), which can be accomplished by the very formula we used for computing the
intermediate values:


f(x,y) = f1(y) + {x}(f2(y) - f1(y)).


In the case of scaling, y remains constant within the same line of the rescaled image, and storing the intermediate
results and reusing them for calculation of the next pixel can lead to significant savings. Similar savings can be
achieved with all "bi" kinds of filtering, i.e. those which can be expressed as two passes of one-dimensional filtering.
In the case of texture mapping, a constant x or y is rarely if ever encountered, and because today's (2000+) graphics
hardware is highly parallelized,[citation needed] there would be no time savings anyway.
Another way of writing the bilinear interpolation formula is
f(x,y) = (1-{x})((1-{y})z11 + {y}z12) + {x}((1-{y})z21 + {y}z22).

Sample code
This code assumes that the texture is square (an extremely common occurrence), that no mipmapping comes into
play, and that there is only one channel of data (not so common. Nearly all textures are in color so they have red,
green, and blue channels, and many have an alpha transparency channel, so we must make three or four calculations
of y, one for each channel). UV coordinates are located at texel centers; for example, {(0.25,0.25), (0.75,0.25), (0.25,0.75), (0.75,0.75)} are the texel-center coordinates for a 2x2 texture.
double getBilinearFilteredPixelColor(Texture tex, double u, double v)
{
    // Map (u, v) in [0, 1] to texel coordinates, with texel centers at integers.
    u = u * tex.size - 0.5;
    v = v * tex.size - 0.5;
    int x = floor(u);
    int y = floor(v);
    double u_ratio = u - x;                // fractional position between texels
    double v_ratio = v - y;
    double u_opposite = 1 - u_ratio;
    double v_opposite = 1 - v_ratio;
    // Weighted average of the four surrounding texels.
    double result = (tex[x][y]   * u_opposite + tex[x+1][y]   * u_ratio) * v_opposite +
                    (tex[x][y+1] * u_opposite + tex[x+1][y+1] * u_ratio) * v_ratio;
    return result;
}

Limitations
Bilinear filtering is rather accurate until the scaling of the texture gets below half or above double the original size of
the texture - that is, if the texture was 256 pixels in each direction, scaling it to below 128 or above 512 pixels can
make the texture look bad, because of missing pixels or too much smoothness. Often, mipmapping is used to provide
a scaled-down version of the texture for better performance; however, the transition between two differently-sized
mipmaps on a texture in perspective using bilinear filtering can be very abrupt. Trilinear filtering, though somewhat
more complex, can make this transition smooth throughout.
For a quick demonstration of how a texel can be missing from a filtered texture, here's a list of numbers representing
the centers of boxes from an 8-texel-wide texture (in red and black), intermingled with the numbers from the centers
of boxes from a 3-texel-wide down-sampled texture (in blue). The red numbers represent texels that would not be
used in calculating the 3-texel texture at all.
0.0625, 0.1667, 0.1875, 0.3125, 0.4375, 0.5000, 0.5625, 0.6875, 0.8125, 0.8333, 0.9375


Special cases
Textures aren't infinite, in general, and sometimes one ends up with a pixel coordinate that lies outside the grid of
texel coordinates. There are a few ways to handle this:
Wrap the texture, so that the last texel in a row also comes right before the first, and the last texel in a column also
comes right above the first. This works best when the texture is being tiled.
Make the area outside the texture all one color. This may be of use for a texture designed to be laid over a solid
background or to be transparent.
Repeat the edge texels out to infinity. This works best if the texture is not designed to be repeated.
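The three options above correspond to what are often called texture address modes. A small sketch of the idea follows; the enum and function names are made up for illustration.

    enum class AddressMode { Wrap, Clamp, Border };

    // Maps an arbitrary integer texel index onto a texture of 'size' texels.
    // Returns the index to fetch, or -1 to signal "use the border colour".
    int resolveTexelIndex(int i, int size, AddressMode mode) {
        switch (mode) {
            case AddressMode::Wrap:                          // tile the texture
                return ((i % size) + size) % size;
            case AddressMode::Clamp:                         // repeat the edge texel
                return i < 0 ? 0 : (i >= size ? size - 1 : i);
            case AddressMode::Border:                        // one constant colour outside
            default:
                return (i < 0 || i >= size) ? -1 : i;
        }
    }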

Binary space partitioning


In computer science, binary space partitioning (BSP) is a method for recursively subdividing a space into convex
sets by hyperplanes. This subdivision gives rise to a representation of objects within the space by means of a tree
data structure known as a BSP tree.
Binary space partitioning was developed in the context of 3D computer graphics, where the structure of a BSP tree
allows spatial information about the objects in a scene that is useful in rendering, such as their ordering from
front-to-back with respect to a viewer at a given location, to be accessed rapidly. Other applications include
performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics
and 3-D video games, ray tracing and other computer applications that involve handling of complex spatial scenes.

Overview
Binary space partitioning is a generic process of recursively dividing a scene into two until the partitioning satisfies
one or more requirements. It can be seen as a generalisation of other spatial tree structures such as k-d trees and
quadtrees, one where hyperplanes that partition the space may have any orientation, rather than being aligned with
the coordinate axes as they are in k-d trees or quadtrees. When used in computer graphics to render scenes composed
of planar polygons, the partitioning planes are frequently (but not always) chosen to coincide with the planes defined
by polygons in the scene.
The specific choice of partitioning plane and criterion for terminating the partitioning process varies depending on
the purpose of the BSP tree. For example, in computer graphics rendering, the scene is divided until each node of the
BSP tree contains only polygons that can render in arbitrary order. When back-face culling is used, each node
therefore contains a convex set of polygons, whereas when rendering double-sided polygons, each node of the BSP
tree contains only polygons in a single plane. In collision detection or ray tracing, a scene may be divided up into
primitives on which collision or ray intersection tests are straightforward.
Binary space partitioning arose from the computer graphics need to rapidly draw three dimensional scenes composed
of polygons. A simple way to draw such scenes is the painter's algorithm, which produces polygons in order of
distance from the viewer, back to front, painting over the background and previous polygons with each closer object.
This approach has two disadvantages: time required to sort polygons in back to front order, and the possibility of
errors in overlapping polygons. Fuchs and co-authors showed that constructing a BSP tree solved both of these
problems by providing a rapid method of sorting polygons with respect to a given viewpoint (linear in the number of
polygons in the scene) and by subdividing overlapping polygons to avoid errors that can occur with the painter's
algorithm. A disadvantage of binary space partitioning is that generating a BSP tree can be time-consuming.
Typically, it is therefore performed once on static geometry, as a pre-calculation step, prior to rendering or other
realtime operations on a scene. The expense of constructing a BSP tree makes it difficult and inefficient to directly
implement moving objects into a tree.



BSP trees are often used by 3D video games, particularly first-person shooters and those with indoor environments.
Game engines utilising BSP trees include the Doom engine (probably the earliest game to use a BSP data structure
was Doom), the Quake engine and its descendants. In video games, BSP trees containing the static geometry of a
scene are often used together with a Z-buffer, to correctly merge movable objects such as doors and characters onto
the background scene. While binary space partitioning provides a convenient way to store and retrieve spatial
information about polygons in a scene, it does not solve the problem of visible surface determination.

Generation
The canonical use of a BSP tree is for rendering polygons (that are double-sided, that is, without back-face culling)
with the painter's algorithm. Each polygon is designated with a front side and a back side which could be chosen
arbitrarily and only affects the structure of the tree but not the required result. Such a tree is constructed from an
unsorted list of all the polygons in a scene. The recursive algorithm for construction of a BSP tree from that list of
polygons is:
1. Choose a polygon P from the list.
2. Make a node N in the BSP tree, and add P to the list of polygons at that node.
3. For each other polygon in the list:
1. If that polygon is wholly in front of the plane containing P, move that polygon to the list of nodes in front of
P.
2. If that polygon is wholly behind the plane containing P, move that polygon to the list of nodes behind P.
3. If that polygon is intersected by the plane containing P, split it into two polygons and move them to the
respective lists of polygons behind and in front of P.
4. If that polygon lies in the plane containing P, add it to the list of polygons at node N.
4. Apply this algorithm to the list of polygons in front of P.
5. Apply this algorithm to the list of polygons behind P.
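A sketch of this construction in code follows; the polygon type and the classification/splitting helpers are placeholders for whatever the surrounding application provides, so this is an outline of the recursion rather than a complete implementation.

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Polygon { /* application-defined vertex and plane data */ };
    enum class Side { Front, Back, Spanning, Coplanar };

    // Assumed helpers: classify a polygon against the partition's plane, and
    // split a spanning polygon into a front part and a back part.
    Side classify(const Polygon& p, const Polygon& partition);
    void splitPolygon(const Polygon& p, const Polygon& partition, Polygon& frontPart, Polygon& backPart);

    struct BspNode {
        std::vector<Polygon> polygons;            // polygons lying in this node's plane
        std::unique_ptr<BspNode> front, back;
    };

    std::unique_ptr<BspNode> buildBsp(std::vector<Polygon> list) {
        if (list.empty()) return nullptr;
        auto node = std::make_unique<BspNode>();
        Polygon partition = list.front();          // step 1: choose a polygon P
        node->polygons.push_back(partition);       // step 2: make a node and add P
        std::vector<Polygon> frontList, backList;
        for (std::size_t i = 1; i < list.size(); ++i) {   // step 3: sort the others
            const Polygon& p = list[i];
            switch (classify(p, partition)) {
                case Side::Front:    frontList.push_back(p);        break;
                case Side::Back:     backList.push_back(p);         break;
                case Side::Coplanar: node->polygons.push_back(p);   break;
                case Side::Spanning: {                              // split across the plane
                    Polygon f, b;
                    splitPolygon(p, partition, f, b);
                    frontList.push_back(f);
                    backList.push_back(b);
                    break;
                }
            }
        }
        node->front = buildBsp(std::move(frontList));    // step 4: recurse in front
        node->back  = buildBsp(std::move(backList));     // step 5: recurse behind
        return node;
    }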
The following diagram illustrates the use of this algorithm in converting a list of lines or polygons into a BSP tree.
At each of the eight steps (i.-viii.), the algorithm above is applied to a list of lines, and one new node is added to the
tree.
Start with a list of lines, (or in 3-D, polygons) making up the scene. In the tree diagrams, lists are denoted
by rounded rectangles and nodes in the BSP tree by circles. In the spatial diagram of the lines, direction
chosen to be the 'front' of a line is denoted by an arrow.
i.

Following the steps of the algorithm above,


1. We choose a line, A, from the list and,...
2. ...add it to a node.
3. We split the remaining lines in the list into those in front of A (i.e. B2, C2, D2), and those behind (B1,
C1, D1).
4. We first process the lines in front of A (in steps ii–v),...
5. ...followed by those behind (in steps vi–vii).

ii.

We now apply the algorithm to the list of lines in front of A (containing B2, C2, D2). We choose a line, B2,
add it to a node and split the rest of the list into those lines that are in front of B2 (D2), and those that are
behind it (C2, D3).

iii.

Choose a line, D2, from the list of lines in front of B2. It is the only line in the list, so after adding it to a
node, nothing further needs to be done.

iv.

We are done with the lines in front of B2, so consider the lines behind B2 (C2 and D3). Choose one of
these (C2), add it to a node, and put the other line in the list (D3) into the list of lines in front of C2.

v.

Now look at the list of lines in front of C2. There is only one line (D3), so add this to a node and continue.


vi.

We have now added all of the lines in front of A to the BSP tree, so we now start on the list of lines behind
A. Choosing a line (B1) from this list, we add B1 to a node and split the remainder of the list into lines in
front of B1 (i.e. D1), and lines behind B1 (i.e. C1).

vii.

Processing first the list of lines in front of B1, D1 is the only line in this list, so add this to a node and
continue.

viii. Looking next at the list of lines behind B1, the only line in this list is C1, so add this to a node, and the BSP
tree is complete.

The final number of polygons or lines in a tree is often larger (sometimes much larger) than the original list, since
lines or polygons that cross the partitioning plane must be split into two. It is desirable to minimize this increase, but
also to maintain reasonable balance in the final tree. The choice of which polygon or line is used as a partitioning
plane (in step 1 of the algorithm) is therefore important in creating an efficient BSP tree.

Traversal
A BSP tree is traversed in linear time, in an order determined by the particular function of the tree. Again using the
example of rendering double-sided polygons using the painter's algorithm, to draw a polygon P correctly requires
that all polygons behind the plane P lies in must be drawn first, then polygon P, then finally the polygons in front of
P. If this drawing order is satisfied for all polygons in a scene, then the entire scene renders in the correct order. This
procedure can be implemented by recursively traversing a BSP tree using the following algorithm. From a given
viewing location V, to render a BSP tree,
1. If the current node is a leaf node, render the polygons at the current node.
2. Otherwise, if the viewing location V is in front of the current node:
1. Render the child BSP tree containing polygons behind the current node
2. Render the polygons at the current node
3. Render the child BSP tree containing polygons in front of the current node
3. Otherwise, if the viewing location V is behind the current node:
1. Render the child BSP tree containing polygons in front of the current node
2. Render the polygons at the current node
3. Render the child BSP tree containing polygons behind the current node
4. Otherwise, the viewing location V must be exactly on the plane associated with the current node. Then:
1. Render the child BSP tree containing polygons in front of the current node
2. Render the child BSP tree containing polygons behind the current node
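In code, the traversal mirrors the cases above; this sketch reuses the BspNode and Polygon types from the construction sketch earlier, and the viewer-classification and drawing helpers are again assumed placeholders.

    struct Vec3 { double x, y, z; };
    enum class PointSide { Front, Back, OnPlane };

    // Assumed helpers: which side of the node's plane the viewer V is on, and a
    // routine that actually draws a set of polygons.
    PointSide classifyViewer(const Vec3& viewer, const BspNode& node);
    void renderPolygons(const std::vector<Polygon>& polygons);

    void renderBsp(const BspNode* node, const Vec3& viewer) {
        if (!node) return;
        if (!node->front && !node->back) {                 // 1. leaf node: just draw it
            renderPolygons(node->polygons);
            return;
        }
        switch (classifyViewer(viewer, *node)) {
            case PointSide::Front:                          // 2. viewer in front: far side,
                renderBsp(node->back.get(), viewer);        //    then this node, then near side
                renderPolygons(node->polygons);
                renderBsp(node->front.get(), viewer);
                break;
            case PointSide::Back:                           // 3. viewer behind: mirror image
                renderBsp(node->front.get(), viewer);
                renderPolygons(node->polygons);
                renderBsp(node->back.get(), viewer);
                break;
            case PointSide::OnPlane:                        // 4. viewer on the plane: this
                renderBsp(node->front.get(), viewer);       //    node's polygons are edge-on
                renderBsp(node->back.get(), viewer);
                break;
        }
    }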

Applying this algorithm recursively to the BSP tree generated above results in the following steps:
The algorithm is first applied to the root node of the tree, node A. V is in front of node A, so we apply the
algorithm first to the child BSP tree containing polygons behind A
This tree has root node B1. V is behind B1 so first we apply the algorithm to the child BSP tree containing
polygons in front of B1:
This tree is just the leaf node D1, so the polygon D1 is rendered.
We then render the polygon B1.



We then apply the algorithm to the child BSP tree containing polygons behind B1:
This tree is just the leaf node C1, so the polygon C1 is rendered.
We then draw the polygons of A
We then apply the algorithm to the child BSP tree containing polygons in front of A
This tree has root node B2. V is behind B2 so first we apply the algorithm to the child BSP tree containing
polygons in front of B2:
This tree is just the leaf node D2, so the polygon D2 is rendered.
We then render the polygon B2.
We then apply the algorithm to the child BSP tree containing polygons behind B2:
This tree has root node C2. V is in front of C2 so first we would apply the algorithm to the child BSP tree
containing polygons behind C2. There is no such tree, however, so we continue.
We render the polygon C2.
We apply the algorithm to the child BSP tree containing polygons in front of C2
This tree is just the leaf node D3, so the polygon D3 is rendered.
The tree is traversed in linear time and renders the polygons in a far-to-near ordering (D1, B1, C1, A, D2, B2, C2,
D3) suitable for the painter's algorithm.

Timeline
1969 Schumacker et al. published a report that described how carefully positioned planes in a virtual environment
could be used to accelerate polygon ordering. The technique made use of depth coherence, which states that a
polygon on the far side of the plane cannot, in any way, obstruct a closer polygon. This was used in flight
simulators made by GE as well as Evans and Sutherland. However, creation of the polygonal data organization
was performed manually by the scene designer.
1980 Fuchs et al. extended Schumacker's idea to the representation of 3D objects in a virtual environment by
using planes that lie coincident with polygons to recursively partition the 3D space. This provided a fully
automated and algorithmic generation of a hierarchical polygonal data structure known as a Binary Space
Partitioning Tree (BSP Tree). The process took place as an off-line preprocessing step that was performed once
per environment/object. At run-time, the view-dependent visibility ordering was generated by traversing the tree.
1981 Naylor's Ph.D thesis containing a full development of both BSP trees and a graph-theoretic approach using
strongly connected components for pre-computing visibility, as well as the connection between the two methods.
BSP trees as a dimension independent spatial search structure was emphasized, with applications to visible
surface determination. The thesis also included the first empirical data demonstrating that the size of the tree and
the number of new polygons was reasonable (using a model of the Space Shuttle).
1983 Fuchs et al. describe a micro-code implementation of the BSP tree algorithm on an Ikonas frame buffer
system. This was the first demonstration of real-time visible surface determination using BSP trees.
1987 Thibault and Naylor described how arbitrary polyhedra may be represented using a BSP tree as opposed to
the traditional b-rep (boundary representation). This provided a solid representation vs. a surface
based-representation. Set operations on polyhedra were described using a tool, enabling Constructive Solid
Geometry (CSG) in real-time. This was the forerunner of BSP level design using brushes, introduced in the
Quake editor and picked up in the Unreal Editor.
1990 Naylor, Amanatides, and Thibault provide an algorithm for merging two BSP trees to form a new BSP tree
from the two original trees. This provides many benefits including: combining moving objects represented by
BSP trees with a static environment (also represented by a BSP tree), very efficient CSG operations on polyhedra,
exact collision detection in O(log n * log n), and proper ordering of transparent surfaces contained in two
interpenetrating objects (has been used for an x-ray vision effect).



1990 Teller and Séquin proposed the offline generation of potentially visible sets to accelerate visible surface
determination in orthogonal 2D environments.
1991 Gordon and Chen [CHEN91] described an efficient method of performing front-to-back rendering from a
BSP tree, rather than the traditional back-to-front approach. They utilised a special data structure to record,
efficiently, parts of the screen that have been drawn, and those yet to be rendered. This algorithm, together with
the description of BSP Trees in the standard computer graphics textbook of the day (Computer Graphics:
Principles and Practice) was used by John Carmack in the making of Doom.
1992 Teller's PhD thesis described the efficient generation of potentially visible sets as a pre-processing step to
accelerate real-time visible surface determination in arbitrary 3D polygonal environments. This was used in
Quake and contributed significantly to that game's performance.
1993 Naylor answers the question of what characterizes a good BSP tree. He used expected case models (rather
than worst case analysis) to mathematically measure the expected cost of searching a tree and used this measure
to build good BSP trees. Intuitively, the tree represents an object in a multi-resolution fashion (more exactly, as a
tree of approximations). Parallels with Huffman codes and probabilistic binary search trees are drawn.
1993 Hayder Radha's PhD thesis described (natural) image representation methods using BSP trees. This includes
the development of an optimal BSP-tree construction framework for any arbitrary input image. This framework is
based on a new image transform, known as the Least-Square-Error (LSE) Partitioning Line (LPE) transform. H.
Radha's thesis also developed an optimal rate-distortion (RD) image compression framework and image
manipulation approaches using BSP trees.

References
Additional references
[NAYLOR90] B. Naylor, J. Amanatides, and W. Thibault, "Merging BSP Trees Yields Polyhedral Set
Operations", Computer Graphics (Siggraph '90), 24(3), 1990.
[NAYLOR93] B. Naylor, "Constructing Good Partitioning Trees", Graphics Interface (annual Canadian CG
conference) May, 1993.
[CHEN91] S. Chen and D. Gordon. Front-to-Back Display of BSP Trees. (http://cs.haifa.ac.il/~gordon/ftb-bsp.pdf) IEEE Computer Graphics and Applications, pp. 79–85. September 1991.
[RADHA91] H. Radha, R. Leonardi, M. Vetterli, and B. Naylor, "Binary Space Partitioning Tree Representation of Images", Journal of Visual Communications and Image Processing, 1991, vol. 2(3).
[RADHA93] H. Radha, "Efficient Image Representation using Binary Space Partitioning Trees.", Ph.D. Thesis,
Columbia University, 1993.
[RADHA96] H. Radha, M. Vetterli, and R. Leonardi, "Image Compression Using Binary Space Partitioning Trees", IEEE Transactions on Image Processing, vol. 5, no. 12, December 1996, pp. 1610–1624.
[WINTER99] An Investigation into Real-Time 3D Polygon Rendering Using BSP Trees. Andrew Steven Winter. April 1999. Available online.
Mark de Berg, Marc van Kreveld, Mark Overmars, and Otfried Schwarzkopf (2000). Computational Geometry (2nd revised ed.). Springer-Verlag. ISBN 3-540-65620-0. Section 12: Binary Space Partitions: pp. 251–265. Describes a randomized Painter's Algorithm.
Christer Ericson: Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D Technology). Morgan Kaufmann, pp. 349–382, 2005, ISBN 1-55860-732-3


External links
BSP trees presentation (http://www.cs.wpi.edu/~matt/courses/cs563/talks/bsp/bsp.html)
Another BSP trees presentation (http://web.archive.org/web/20110719195212/http://www.cc.gatech.edu/
classes/AY2004/cs4451a_fall/bsp.pdf)
A Java applet that demonstrates the process of tree generation (http://symbolcraft.com/graphics/bsp/)
A Master Thesis about BSP generating (http://archive.gamedev.net/archive/reference/programming/features/
bsptree/bsp.pdf)
BSP Trees: Theory and Implementation (http://www.devmaster.net/articles/bsp-trees/)
BSP in 3D space (http://www.euclideanspace.com/threed/solidmodel/spatialdecomposition/bsp/index.htm)

Bounding interval hierarchy


A bounding interval hierarchy (BIH) is a partitioning data structure similar to that of bounding volume hierarchies
or kd-trees. Bounding interval hierarchies can be used in high performance (or real-time) ray tracing and may be
especially useful for dynamic scenes.
The BIH was first presented under the name SKD-Trees by Ooi et al.,[1] and as BoxTrees,[2]
independently invented by Zachmann.

Overview
Bounding interval hierarchies (BIH) exhibit many of the properties of both bounding volume hierarchies (BVH) and
kd-trees. Whereas the construction and storage of BIH is comparable to that of BVH, the traversal of BIH resembles
that of kd-trees. Furthermore, BIH are also binary trees, just like kd-trees (and in fact their superset, BSP trees).
Finally, BIH are axis-aligned, as are their ancestors. Although a more general non-axis-aligned implementation of the
BIH should be possible (similar to the BSP-tree, which uses unaligned planes), it would almost certainly be less
desirable due to decreased numerical stability and an increase in the complexity of ray traversal.
The key feature of the BIH is the storage of 2 planes per node (as opposed to 1 for the kd-tree and 6 for an
axis-aligned bounding box hierarchy), which allows for overlapping children (just like a BVH), but at the same time
features an order on the children along one dimension/axis (as is the case for kd-trees).
It is also possible to just use the BIH data structure for the construction phase but traverse the tree in a way a
traditional axis aligned bounding box hierarchy does. This enables some simple speed up optimizations for large ray
bundles [3] while keeping memory/cache usage low.
Some general attributes of bounding interval hierarchies (and techniques related to BIH) as described by [4] are:

Very fast construction times


Low memory footprint
Simple and fast traversal
Very simple construction and traversal algorithms
High numerical precision during construction and traversal
Flatter tree structure (decreased tree depth) compared to kd-trees

21

Bounding interval hierarchy

Operations
Construction
To construct any space partitioning structure some form of heuristic is commonly used. For this the surface area
heuristic, commonly used with many partitioning schemes, is a possible candidate. Another, more simplistic
heuristic is the "global" heuristic, which only requires an axis-aligned bounding box, rather than the full
set of primitives, making it much more suitable for a fast construction.
The general construction scheme for a BIH:
calculate the scene bounding box
use a heuristic to choose one axis and a split plane candidate perpendicular to this axis
sort the objects to the left or right child (exclusively) depending on the bounding box of the object (note that
objects intersecting the split plane may either be sorted by their overlap with the child volumes or by any other
heuristic)
calculate the maximum bounding value of all objects on the left and the minimum bounding value of those on the
right for that axis (can be combined with previous step for some heuristics)
store these 2 values along with 2 bits encoding the split axis in a new node
continue with step 2 for the children
Potential heuristics for the split plane candidate search:
Classical: pick the longest axis and the middle of the node bounding box on that axis
Classical: pick the longest axis and a split plane through the median of the objects (results in a leftist tree which is
often unfortunate for ray tracing though)
Global heuristic: pick the split plane based on a global criterion, in the form of a regular grid (avoids unnecessary
splits and keeps node volumes as cubic as possible)
Surface area heuristic: calculate the surface area and number of objects for both children, over the set of all
possible split plane candidates, then choose the one with the lowest cost (claimed to be optimal, though the
optimality proof of the cost function relies on assumptions that cannot be fulfilled in practice; it is also an
exceptionally slow heuristic to evaluate)
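
A minimal Python sketch of the construction scheme above, assuming per-primitive axis-aligned bounding boxes are already available and using the classical "longest axis, middle of the box" heuristic; all class and function names here are illustrative, not taken from any particular library:

class BIHNode:
    def __init__(self, axis=None, left_max=None, right_min=None,
                 children=None, prims=None):
        self.axis = axis            # split axis (the "2 bits" mentioned above)
        self.left_max = left_max    # maximum bound of the left child's objects
        self.right_min = right_min  # minimum bound of the right child's objects
        self.children = children    # (left, right) subtrees, or None for a leaf
        self.prims = prims          # primitive indices (leaf nodes only)

def build_bih(prims, boxes, node_box, leaf_size=4):
    # boxes[i] = (min_xyz, max_xyz) of primitive i; node_box is the node's box
    if len(prims) <= leaf_size:
        return BIHNode(prims=prims)
    extent = [node_box[1][a] - node_box[0][a] for a in range(3)]
    axis = extent.index(max(extent))                        # longest axis
    split = 0.5 * (node_box[0][axis] + node_box[1][axis])   # middle of the box
    left, right = [], []
    for i in prims:                                         # sort objects exclusively
        centre = 0.5 * (boxes[i][0][axis] + boxes[i][1][axis])
        (left if centre <= split else right).append(i)
    if not left or not right:                               # degenerate split: make a leaf
        return BIHNode(prims=prims)
    left_max = max(boxes[i][1][axis] for i in left)         # the two stored clip values
    right_min = min(boxes[i][0][axis] for i in right)
    lbox = (node_box[0],
            tuple(split if a == axis else node_box[1][a] for a in range(3)))
    rbox = (tuple(split if a == axis else node_box[0][a] for a in range(3)),
            node_box[1])
    return BIHNode(axis, left_max, right_min,
                   (build_bih(left, boxes, lbox, leaf_size),
                    build_bih(right, boxes, rbox, leaf_size)))

Replacing the midpoint choice with the global or surface area heuristic only changes how the split plane candidate is chosen; the rest of the scheme stays the same.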

Ray traversal
The traversal phase closely resembles a kd-tree traversal: One has to distinguish 4 simple cases, where the ray

just intersects the left child


just intersects the right child
intersects both children
intersects neither child (the only case not possible in a kd traversal)

For the third case, depending on the ray direction (negative or positive) of the component (x, y or z) equalling the
split axis of the current node, the traversal continues first with the left (positive direction) or the right (negative
direction) child and the other one is pushed onto a stack.
Traversal continues until a leaf node is found. After intersecting the objects in the leaf, the next element is popped
from the stack. If the stack is empty, the nearest intersection of all pierced leaves is returned.
It is also possible to add a fifth traversal case, which however requires a slightly more complicated construction phase. By
swapping the meanings of the left and right plane of a node, it is possible to cut off empty space on both sides of a
node. This requires an additional bit that must be stored in the node to detect this special case during traversal.
Handling this case during the traversal phase is simple, as the ray
just intersects the only child of the current node or
intersects nothing
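
A hedged sketch of this case analysis for a single inner node, reusing the illustrative BIHNode fields from the construction sketch above (left_max and right_min are the node's two clip values); the axis-parallel ray is handled with a crude containment test:

def children_to_visit(node, o, d, tmin, tmax):
    # Returns the children to traverse for this ray segment, nearest first.
    a = node.axis
    left, right = node.children
    if abs(d[a]) < 1e-12:
        # Degenerate axis-parallel ray: simple containment tests on the origin.
        out = []
        if o[a] <= node.left_max:
            out.append(left)
        if o[a] >= node.right_min:
            out.append(right)
        return out
    t_left = (node.left_max - o[a]) / d[a]     # parameter at the left clip plane
    t_right = (node.right_min - o[a]) / d[a]   # parameter at the right clip plane
    if d[a] > 0.0:                             # left child is the near child
        out = [left] if tmin <= t_left else []
        if t_right <= tmax:
            out.append(right)
    else:                                      # right child is the near child
        out = [right] if tmin <= t_right else []
        if t_left <= tmax:
            out.append(left)
    return out                                 # 0, 1 or 2 children (the 4 cases above)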


Properties
Numerical stability
All operations during the hierarchy construction/sorting of the triangles are min/max operations and comparisons.
Thus no triangle clipping has to be done, as is the case with kd-trees, where it can become a problem for triangles
that just slightly intersect a node. Even if the kd-tree implementation is carefully written, numerical errors can result in a
non-detected intersection and thus rendering errors (holes in the geometry) due to the missed ray-object intersection.

Extensions
Instead of using two planes per node to separate geometry, it is also possible to use any number of planes to create an
n-ary BIH, or to use multiple planes in a standard binary BIH (one and four planes per node were already proposed
and then properly evaluated in [5]) to achieve better object separation.

References
Papers
[1] Nam, Beomseok; Sussman, Alan. A comparative study of spatial indexing techniques for multidimensional scientific datasets (http:/ /
ieeexplore. ieee. org/ Xplore/ login. jsp?url=/ iel5/ 9176/ 29111/ 01311209. pdf)
[2] Zachmann, Gabriel. Minimal Hierarchical Collision Detection (http:/ / zach. in. tu-clausthal. de/ papers/ vrst02. html)
[3] Wald, Ingo; Boulos, Solomon; Shirley, Peter (2007). Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies (http:/ /
www. sci. utah. edu/ ~wald/ Publications/ 2007/ / / BVH/ download/ / togbvh. pdf)
[4] Wächter, Carsten; Keller, Alexander (2006). Instant Ray Tracing: The Bounding Interval Hierarchy (http://ainc.de/Research/BIH.pdf)
[5] Wächter, Carsten (2008). Quasi-Monte Carlo Light Transport Simulation by Efficient Ray Tracing (http://vts.uni-ulm.de/query/longview.meta.asp?document_id=6265)

External links
BIH implementations: Javascript (http://github.com/imbcmdth/jsBIH).

Bounding volume
For building code compliance, see Bounding.
In computer graphics and computational geometry, a bounding
volume for a set of objects is a closed volume that completely contains
the union of the objects in the set. Bounding volumes are used to
improve the efficiency of geometrical operations by using simple
volumes to contain more complex objects. Normally, simpler volumes
have simpler ways to test for overlap.
A bounding volume for a set of objects is also a bounding volume for
the single object consisting of their union, and the other way around.
Therefore it is possible to confine the description to the case of a single
object, which is assumed to be non-empty and bounded (finite).
A three dimensional model with its bounding box
drawn in dashed lines.

Uses of bounding volumes


Bounding volumes are most often used to accelerate certain kinds of tests.
In ray tracing, bounding volumes are used in ray-intersection tests, and in many rendering algorithms, they are used
for viewing frustum tests. If the ray or viewing frustum does not intersect the bounding volume, it cannot intersect
the object contained in the volume. These intersection tests produce a list of objects that must be displayed. Here,
displayed means rendered or rasterized.
In collision detection, when two bounding volumes do not intersect, then the contained objects cannot collide, either.
Testing against a bounding volume is typically much faster than testing against the object itself, because of the
bounding volume's simpler geometry. This is because an 'object' is typically composed of polygons or data structures
that are reduced to polygonal approximations. In either case, it is computationally wasteful to test each polygon
against the view volume if the object is not visible. (Onscreen objects must be 'clipped' to the screen, regardless of
whether their surfaces are actually visible.)
To obtain bounding volumes of complex objects, a common way is to break the objects/scene down using a scene
graph or more specifically bounding volume hierarchies like e.g. OBB trees. The basic idea behind this is to organize
a scene in a tree-like structure where the root comprises the whole scene and each leaf contains a smaller subpart.

Common types of bounding volume


The choice of the type of bounding volume for a given application is determined by a variety of factors: the
computational cost of computing a bounding volume for an object, the cost of updating it in applications in which
the objects can move or change shape or size, the cost of determining intersections, and the desired precision of the
intersection test. The precision of the intersection test is related to the amount of space within the bounding volume
not associated with the bounded object, called void space. Sophisticated bounding volumes generally allow for less
void space but are more computationally expensive. It is common to use several types in conjunction, such as a
cheap one for a quick but rough test in conjunction with a more precise but also more expensive type.
The types treated here all give convex bounding volumes. If the object being bounded is known to be convex, this is
not a restriction. If non-convex bounding volumes are required, an approach is to represent them as a union of a
number of convex bounding volumes. Unfortunately, intersection tests quickly become more expensive as the
bounding boxes become more sophisticated.
A bounding box is a cuboid, or in 2-D a rectangle, containing the object. In dynamical simulation, bounding boxes
are preferred to other shapes of bounding volume such as bounding spheres or cylinders for objects that are roughly
cuboid in shape when the intersection test needs to be fairly accurate. The benefit is obvious, for example, for objects
that rest upon others, such as a car resting on the ground: a bounding sphere would show the car as possibly
intersecting with the ground, which then would need to be rejected by a more expensive test of the actual model of
the car; a bounding box immediately shows the car as not intersecting with the ground, saving the more expensive
test.
A bounding capsule is a swept sphere (i.e. the volume that a sphere takes as it moves along a straight line segment)
containing the object. Capsules can be represented by the radius of the swept sphere and the segment that the sphere
is swept across. It has traits similar to a cylinder, but is easier to use, because the intersection test is simpler. A
capsule and another object intersect if the distance between the capsule's defining segment and some feature of the
other object is smaller than the capsule's radius. For example, two capsules intersect if the distance between the
capsules' segments is smaller than the sum of their radii. This holds for arbitrarily rotated capsules, which is why
they're more appealing than cylinders in practice.
A bounding cylinder is a cylinder containing the object. In most applications the axis of the cylinder is aligned with
the vertical direction of the scene. Cylinders are appropriate for 3-D objects that can only rotate about a vertical axis
but not about other axes, and are otherwise constrained to move by translation only. Two vertical-axis-aligned
cylinders intersect when, simultaneously, their projections on the vertical axis intersect (these are two line
segments) and their projections on the horizontal plane intersect (two circular disks). Both are easy to test. In video
games, bounding cylinders are often used as bounding volumes for people standing upright.
A bounding ellipsoid is an ellipsoid containing the object. Ellipsoids usually provide tighter fitting than a sphere.
Intersections with ellipsoids are done by scaling the other object along the principal axes of the ellipsoid by an
amount equal to the multiplicative inverse of the radii of the ellipsoid, thus reducing the problem to intersecting the
scaled object with a unit sphere. Care should be taken to avoid problems if the applied scaling introduces skew.
Skew can make the usage of ellipsoids impractical in certain cases, for example collision between two arbitrary
ellipsoids.
A bounding slab is related to the AABB and used to speed up ray tracing[1]
A bounding sphere is a sphere containing the object. In 2-D graphics, this is a circle. Bounding spheres are
represented by centre and radius. They are very quick to test for collision with each other: two spheres intersect
when the distance between their centres does not exceed the sum of their radii. This makes bounding spheres
appropriate for objects that can move in any number of dimensions.
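
As a minimal illustration of the sphere-sphere test just described (comparing squared distances avoids the square root):

def spheres_intersect(c1, r1, c2, r2):
    # Two spheres intersect when the distance between their centres
    # does not exceed the sum of their radii.
    dx, dy, dz = c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2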
In many applications the bounding box is aligned with the axes of the co-ordinate system, and it is then known as an
axis-aligned bounding box (AABB). To distinguish the general case from an AABB, an arbitrary bounding box is
sometimes called an oriented bounding box (OBB). AABBs are much simpler to test for intersection than OBBs,
but have the disadvantage that when the model is rotated they cannot be simply rotated with it, but need to be
recomputed.
A bounding triangle in 2-D is quite useful to speed up the clipping or visibility test of a B-Spline curve. See "Circle
and B-Splines clipping algorithms" under the subject Clipping (computer graphics) for an example of use.
A convex hull is the smallest convex volume containing the object. If the object is the union of a finite set of points,
its convex hull is a polytope.
A discrete oriented polytope (DOP) generalizes the AABB. A DOP is a convex polytope containing the object (in
2-D a polygon; in 3-D a polyhedron), constructed by taking a number of suitably oriented planes at infinity and
moving them until they collide with the object. The DOP is then the convex polytope resulting from intersection of
the half-spaces bounded by the planes. Popular choices for constructing DOPs in 3-D graphics include the
axis-aligned bounding box, made from 6 axis-aligned planes, and the beveled bounding box, made from 10 (if
beveled only on vertical edges, say), 18 (if beveled on all edges), or 26 planes (if beveled on all edges and corners).
A DOP constructed from k planes is called a k-DOP; the actual number of faces can be less than k, since some can
become degenerate, shrunk to an edge or a vertex.
A minimum bounding rectangle (MBR), the least AABB in 2-D, is frequently used in the description of
geographic (or "geospatial") data items, serving as a simplified proxy for a dataset's spatial extent (see geospatial
metadata) for the purpose of data search (including spatial queries as applicable) and display. It is also a basic
component of the R-tree method of spatial indexing.

Basic intersection checks


For some types of bounding volume (OBB and convex polyhedra), an effective check is that of the separating axis
theorem. The idea here is that, if there exists an axis by which the objects do not overlap, then the objects do not
intersect. Usually the axes checked are those of the basic axes for the volumes (the unit axes in the case of an AABB,
or the 3 base axes from each OBB in the case of OBBs). Often, this is followed by also checking the cross-products
of the previous axes (one axis from each object).
In the case of an AABB, this test becomes a simple set of overlap tests in terms of the unit axes. For an AABB
defined by M,N against one defined by O,P they do not intersect if (Mx>Px) or (Ox>Nx) or (My>Py) or (Oy>Ny) or
(Mz>Pz) or (Oz>Nz).
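
A direct transcription of this AABB test in Python, assuming M and O are the minimum corners and N and P the maximum corners of the two boxes:

def aabb_overlap(M, N, O, P):
    for a in range(3):                      # x, y, z
        if M[a] > P[a] or O[a] > N[a]:
            return False                    # separated along this axis
    return True                             # overlapping on all three axes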
An AABB can also be projected along an axis. For example, if it has edges of length L and is centered at C, and is
being projected along the axis N, its projected extents are
r = 0.5 Lx |Nx| + 0.5 Ly |Ny| + 0.5 Lz |Nz|  and  b = C · N = Cx Nx + Cy Ny + Cz Nz,  giving  m = b - r,  n = b + r,
where m and n are the minimum and maximum extents.
An OBB is similar in this respect, but is slightly more complicated. For an OBB with L and C as above, and with I, J,
and K as the OBB's base axes, then:
r = 0.5 Lx |N · I| + 0.5 Ly |N · J| + 0.5 Lz |N · K|,  b = C · N,  m = b - r,  n = b + r
For the ranges m,n and o,p it can be said that they do not intersect if m>p or o>n. Thus, by projecting the ranges of 2
OBBs along the I, J, and K axes of each OBB, and checking for non-intersection, it is possible to detect
non-intersection. By additionally checking along the cross products of these axes (I0×I1, I0×J1, ...) one can be more
certain that intersection is impossible.
This concept of determining non-intersection via use of axis projection also extends to convex polyhedra, however
with the normals of each polyhedral face being used instead of the base axes, and with the extents being based on the
minimum and maximum dot products of each vertex against the axes. Note that this description assumes the checks
are being done in world space.
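
A small illustrative helper for this projection-based (separating axis) check; vertices are 3-tuples in world space and axis is any candidate separating axis (it need not be normalized for a pure overlap test):

def project(vertices, axis):
    # Dot every vertex with the axis and keep the interval of the results.
    dots = [v[0] * axis[0] + v[1] * axis[1] + v[2] * axis[2] for v in vertices]
    return min(dots), max(dots)             # the extents (m, n) along this axis

def separated_on(axis, verts_a, verts_b):
    m, n = project(verts_a, axis)
    o, p = project(verts_b, axis)
    return m > p or o > n                   # disjoint intervals: a separating axis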

References
[1] POV-Ray Documentation (http:/ / www. povray. org/ documentation/ view/ 3. 6. 1/ 323/ )

External links
Illustration of several DOPs for the same model, from epicgames.com (http://udn.epicgames.com/Two/rsrc/
Two/CollisionTutorial/kdop_sizes.jpg)

Bump mapping
Bump mapping is a technique in
computer graphics for simulating
bumps and wrinkles on the surface of
an object. This is achieved by
perturbing the surface normals of the
object and using the perturbed normal
during lighting calculations. The result
is an apparently bumpy surface rather
than a smooth surface although the
surface of the underlying object is not
actually changed. Bump mapping was
introduced by Blinn in 1978.[1]

A sphere without bump mapping (left). A bump map to be applied to the sphere (middle).
The sphere with the bump map applied (right) appears to have a mottled surface
resembling an orange. Bump maps achieve this effect by changing how an illuminated
surface reacts to light without actually modifying the size or shape of the surface

Normal mapping is the most common variation of bump mapping used.[2]

Bump mapping basics


Bump mapping is a technique in
computer graphics to make a rendered
surface look more realistic by
simulating small displacements of the
surface. However, unlike traditional
displacement mapping, the surface
geometry is not modified. Instead only
the surface normal is modified as if the
surface had been displaced. The
modified surface normal is then used
for lighting calculations as usual,
typically using the Phong reflection
model or similar, giving the
appearance of detail instead of a
smooth surface.

Bump mapping is limited in that it does not actually modify the shape of the underlying
object. On the left, a mathematical function defining a bump map simulates a crumbling
surface on a sphere, but the object's outline and shadow remain those of a perfect sphere.
On the right, the same function is used to modify the surface of a sphere by generating an
isosurface. This actually models a sphere with a bumpy surface with the result that both
its outline and its shadow are rendered realistically.

Bump mapping is much faster and consumes less resources for the same level of detail compared to
displacement mapping because the geometry remains unchanged.

There are primarily two methods to perform bump mapping. The first uses a height map for simulating the surface
displacement yielding the modified normal. This is the method invented by Blinn and is usually what is referred to as
bump mapping unless specified. The steps of this method are summarized as follows.
Before a lighting calculation is performed for each visible point (or pixel) on the object's surface:
1. Look up the height in the heightmap that corresponds to the position on the surface.
2. Calculate the surface normal of the heightmap, typically using the finite difference method.
3. Combine the surface normal from step two with the true ("geometric") surface normal so that the combined
normal points in a new direction.

4. Calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong
reflection model.
The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance
changes as lights in the scene are moved around.
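
A minimal sketch of the height-map method above, written in tangent space, i.e. the unperturbed ("geometric") normal is taken to be (0, 0, 1); here height is any scalar function or texture lookup, and the names and scale factor are illustrative:

import math

def bumped_normal(height, u, v, eps=1e-3, strength=1.0):
    # Step 2: finite-difference gradient of the height map
    dhdu = (height(u + eps, v) - height(u - eps, v)) / (2.0 * eps)
    dhdv = (height(u, v + eps) - height(u, v - eps)) / (2.0 * eps)
    # Step 3: tilt the geometric normal (0, 0, 1) against the gradient
    n = (-strength * dhdu, -strength * dhdv, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

def diffuse_term(normal, light_dir):
    # Step 4 (simplified): the diffuse part of a Phong-style lighting model
    return max(0.0, sum(a * b for a, b in zip(normal, light_dir)))

In a full renderer the resulting tangent-space normal would be rotated into the surface's local frame before the lighting step.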
The other method is to specify a normal map which contains the modified normal for each point on the surface
directly. Since the normal is specified directly instead of derived from a height map this method usually leads to
more predictable results. This makes it easier for artists to work with, making it the most common method of bump
mapping today.
There are also extensions which modify other surface features in addition to increasing the sense of depth. Parallax
mapping is one such extension.
The primary limitation with bump mapping is that it perturbs only the surface normals without changing the
underlying surface itself.[3] Silhouettes and shadows therefore remain unaffected, which is especially noticeable for
larger simulated displacements. This limitation can be overcome by techniques such as displacement mapping,
where bumps are actually applied to the surface, or by using an isosurface.

Realtime bump mapping techniques


Realtime 3D graphics programmers often use variations of the technique in order to simulate bump mapping at a
lower computational cost.
One typical way was to use a fixed geometry, which allows one to use the heightmap surface normal almost directly.
Combined with a precomputed lookup table for the lighting calculations the method could be implemented with a
very simple and fast loop, allowing for a full-screen effect. This method was a common visual effect when bump
mapping was first introduced.

References
[1] Blinn, James F. "Simulation of Wrinkled Surfaces" (http:/ / portal. acm. org/ citation. cfm?id=507101), Computer Graphics, Vol. 12 (3),
pp.286-292 SIGGRAPH-ACM (August 1978)
[2] Mikkelsen, Morten. Simulation of Wrinkled Surfaces Revisited (http:/ / image. diku. dk/ projects/ media/ morten. mikkelsen. 08. pdf), 2008
(PDF)
[3] Real-Time Bump Map Synthesis (http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/rtbumpmapHWWS01.pdf), Jan Kautz, Wolfgang
Heidrich and Hans-Peter Seidel (Max-Planck-Institut für Informatik, University of British Columbia)

External links
Bump shading for volume textures (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=291525), Max,
N.L., Becker, B.G., Computer Graphics and Applications, IEEE, Jul 1994, Volume 14, Issue 4, pages 18–20,
ISSN 0272-1716
Bump Mapping tutorial using CG and C++ (http://www.blacksmith-studios.dk/projects/downloads/
bumpmapping_using_cg.php)
Simple creating vectors per pixel of a grayscale for a bump map to work and more (http://freespace.virgin.net/
hugo.elias/graphics/x_polybm.htm)
Bump Mapping example (http://www.neilwallis.com/java/bump2.htm) (Java applet)

Catmull–Clark subdivision surface


The Catmull–Clark algorithm is a technique used in computer
graphics to create smooth surfaces by subdivision surface modeling. It
was devised by Edwin Catmull and Jim Clark in 1978 as a
generalization of bi-cubic uniform B-spline surfaces to arbitrary
topology. In 2005, Edwin Catmull received an Academy Award for
Technical Achievement together with Tony DeRose and Jos Stam for
their invention and application of subdivision surfaces.

Recursive evaluation
Catmull–Clark surfaces are defined recursively, using the following
refinement scheme:
Start with a mesh of an arbitrary polyhedron. All the vertices in this
mesh shall be called original points.
For each face, add a face point
Set each face point to be the average of all original points for the
respective face.

First three steps of Catmull–Clark subdivision of a cube, with the subdivision surface below.

For each edge, add an edge point.


Set each edge point to be the average of the two neighbouring face points and its two original endpoints.
For each face point, add an edge for every edge of the face, connecting the face point to each edge point for the
face.
For each original point P, take the average F of all n (recently created) face points for faces touching P, and take
the average R of all n edge midpoints for edges touching P, where each edge midpoint is the average of its two
endpoint vertices. Move each original point to the point (F + 2R + (n - 3)P) / n.
This is the barycenter of P, R and F with respective weights (n - 3), 2 and 1.
Connect each new Vertex point to the new edge points of all original edges incident on the original vertex.
Define new faces as enclosed by edges
The new mesh will consist only of quadrilaterals, which won't in general be planar. The new mesh will generally
look smoother than the old mesh.
Repeated subdivision results in smoother meshes. It can be shown that the limit surface obtained by this refinement
process is at least C^1 at extraordinary vertices and C^2 everywhere else (when n indicates how many derivatives are
continuous, we speak of C^n continuity). After one iteration, the number of extraordinary points on the surface
remains constant.
The arbitrary-looking barycenter formula was chosen by Catmull and Clark based on the aesthetic appearance of the
resulting surfaces rather than on a mathematical derivation, although Catmull and Clark do go to great lengths to
rigorously show that the method yields bicubic B-spline surfaces.
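
A minimal sketch of the original-point update described above, for one point P of valence n, given the already computed adjacent face points and edge midpoints (all 3-tuples; names illustrative):

def updated_original_point(P, face_points, edge_midpoints):
    n = len(face_points)                               # valence of the original point
    F = [sum(c) / n for c in zip(*face_points)]        # average of adjacent face points
    R = [sum(c) / n for c in zip(*edge_midpoints)]     # average of adjacent edge midpoints
    # barycenter of P, R and F with weights (n - 3), 2 and 1
    return [(F[i] + 2.0 * R[i] + (n - 3.0) * P[i]) / n for i in range(3)]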


Exact evaluation
The limit surface of Catmull–Clark subdivision surfaces can also be evaluated directly, without any recursive
refinement. This can be accomplished by means of the technique of Jos Stam. This method reformulates the
recursive refinement process into a matrix exponential problem, which can be solved directly by means of matrix
diagonalization.

Software using Catmull–Clark subdivision surfaces

3ds max
3D-Coat
AC3D
Anim8or
AutoCAD
Blender
Carrara
CATIA (Imagine and Shape)
CGAL

Cheetah3D
Cinema4D
Clara.io
DAZ Studio, 2.0
Gelato
Hammer
Hexagon
Houdini
K-3D
LightWave 3D, version 9
Maya
Metasequoia
modo
Mudbox
PRMan
Realsoft3D
Remo 3D
Shade
Rhinoceros 3D - Grasshopper 3D Plugin - Weaverbird Plugin
Silo
SketchUp - Requires a Plugin.
Softimage XSI
Strata 3D CX
Wings 3D
Zbrush
TopMod


References

Conversion between quaternions and Euler angles
Spatial rotations in three dimensions can be parametrized using both Euler angles and unit quaternions. This article
explains how to convert between the two representations. Actually this simple use of "quaternions" was first
presented by Euler some seventy years earlier than Hamilton to solve the problem of magic squares. For this reason
the dynamics community commonly refers to quaternions in this application as "Euler parameters".

Definition
A unit quaternion can be described as:
q = q0 + q1 i + q2 j + q3 k,  with  q0^2 + q1^2 + q2^2 + q3^2 = 1.
We can associate a quaternion with a rotation around an axis by the following expression:
q0 = cos(α/2),  q1 = sin(α/2) cos(βx),  q2 = sin(α/2) cos(βy),  q3 = sin(α/2) cos(βz),
where α is a simple rotation angle (the value in radians of the angle of rotation) and cos(βx), cos(βy) and cos(βz) are
the "direction cosines" locating the axis of rotation (Euler's Theorem).


Rotation matrices
The orthogonal matrix (post-multiplying a column vector) corresponding to a clockwise/left-handed rotation by the
unit quaternion q = q0 + q1 i + q2 j + q3 k is given by the inhomogeneous expression:

Euler angles: the xyz (fixed) system is shown in blue, the XYZ (rotated) system is shown in red. The line of nodes, labelled N, is shown in green.

or equivalently, by the homogeneous expression:

If q is not a unit quaternion then the homogeneous form is still a scalar multiple of a rotation

matrix, while the inhomogeneous form is in general no longer an orthogonal matrix. This is why in numerical work
the homogeneous form is to be preferred if distortion is to be avoided.
The direction cosine matrix corresponding to a Body 3-2-1 sequence with Euler angles (ψ, θ, φ) is given by:


Conversion
By combining the quaternion representations of the Euler rotations we get for the Body 3-2-1 sequence, where the
airplane first does a yaw (body-z) turn while taxiing on the runway, then pitches (body-y) during take-off, and finally
rolls (body-x) in the air. The resulting orientation of Body 3-2-1 sequence is equivalent to that of Lab 1-2-3
sequence, where the airplane is rolled first (lab-X axis), and then nosed up around the horizontal lab-Y axis, and
finally rotated around the vertical Lab-Z axis:

Other rotation sequences use different conventions.

Relationship with Tait–Bryan angles


Similarly for Euler angles, we use the Tait–Bryan angles (in terms of flight dynamics):
Roll (φ): rotation about the X-axis
Pitch (θ): rotation about the Y-axis
Yaw (ψ): rotation about the Z-axis
where the X-axis points forward, Y-axis to the right and Z-axis downward and in the example to follow the rotation
occurs in the order yaw, pitch, roll (about body-fixed axes).
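
A hedged sketch of both conversion directions for this Body 3-2-1 (yaw-pitch-roll) convention, with q = (q0, q1, q2, q3) and q0 the scalar part; other rotation sequences and sign conventions lead to different formulas:

import math

def euler_to_quaternion(roll, pitch, yaw):
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,    # q0
            sr * cp * cy - cr * sp * sy,    # q1
            cr * sp * cy + sr * cp * sy,    # q2
            cr * cp * sy - sr * sp * cy)    # q3

def quaternion_to_euler(q0, q1, q2, q3):
    roll = math.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
    # the clamp guards against rounding errors near the +/-90 degree singularity
    pitch = math.asin(max(-1.0, min(1.0, 2 * (q0 * q2 - q3 * q1))))
    yaw = math.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
    return roll, pitch, yaw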

Singularities
One must be aware of singularities in the Euler angle
parametrization when the pitch approaches ±90° (north/south
pole). These cases must be handled specially. The common name
for this situation is gimbal lock.

Tait–Bryan angles for an aircraft

Code to handle the singularities is derived on this site: www.euclideanspace.com [1]

External links
Q60. How do I convert Euler rotation angles to a quaternion? [2] and related questions at The Matrix and
Quaternions FAQ

References
[1] http:/ / www. euclideanspace. com/ maths/ geometry/ rotations/ conversions/ quaternionToEuler/
[2] http:/ / www. j3d. org/ matrix_faq/ matrfaq_latest. html#Q60


Cube mapping
In computer graphics, cube mapping is a method of
environment mapping that uses the six faces of a cube
as the map shape. The environment is projected onto
the sides of a cube and stored as six square textures,
or unfolded into six regions of a single texture. The
cube map is generated by first rendering the scene six
times from a viewpoint, with the views defined by an
orthogonal 90 degree view frustum representing each
cube face.[1]
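
A sketch of the basic cube-map lookup this implies: the face is selected by the direction component with the largest magnitude, and the two remaining components (divided by that magnitude) give coordinates in [-1, 1] on that face. The per-face orientation below follows one common (OpenGL-style) convention; other APIs orient the faces differently:

def cube_map_lookup(x, y, z):
    # Assumes (x, y, z) is a non-zero direction vector.
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                               # +X or -X face
        face, u, v = ('+x', -z / ax, -y / ax) if x > 0 else ('-x', z / ax, -y / ax)
    elif ay >= az:                                          # +Y or -Y face
        face, u, v = ('+y', x / ay, z / ay) if y > 0 else ('-y', x / ay, -z / ay)
    else:                                                   # +Z or -Z face
        face, u, v = ('+z', x / az, -y / az) if z > 0 else ('-z', -x / az, -y / az)
    return face, u, v          # map u, v from [-1, 1] to texel coordinates as needed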
In the majority of cases, cube mapping is preferred
over the older method of sphere mapping because it
eliminates many of the problems that are inherent in
sphere mapping such as image distortion, viewpoint
dependency, and computational inefficiency. Also,
cube mapping provides a much larger capacity to
support real-time rendering of reflections relative to
sphere mapping because the combination of
inefficiency and viewpoint dependency severely limit
the ability of sphere mapping to be applied when there
is a consistently changing viewpoint.

The lower left image shows a scene with a viewpoint marked with a
black dot. The upper image shows the net of the cube mapping as seen
from that viewpoint, and the lower right image shows the cube
superimposed on the original scene.

History
Cube mapping was first proposed in 1986 by Ned Greene in his paper Environment Mapping and Other
Applications of World Projections,[2] ten years after environment mapping was first put forward by Jim Blinn and
Martin Newell. However, hardware limitations on the ability to access six texture images simultaneously made it
infeasible to implement cube mapping without further technological developments. This problem was remedied in
1999 with the release of the Nvidia GeForce 256. Nvidia touted cube mapping in hardware as a breakthrough image
quality feature of GeForce 256 that "... will allow developers to create accurate, real-time reflections. Accelerated in
hardware, cube environment mapping will free up the creativity of developers to use reflections and specular lighting
effects to create interesting, immersive environments."[3] Today, cube mapping is still used in a variety of graphical
applications as a favored method of environment mapping.

Advantages
Cube mapping is preferred over other methods of environment mapping because of its relative simplicity. Also, cube
mapping produces results that are similar to those obtained by ray tracing, but is much more computationally
efficient; the moderate reduction in quality is compensated for by large gains in efficiency.
Predating cube mapping, sphere mapping has many inherent flaws that made it impractical for most applications.
Sphere mapping is view-dependent, meaning that a different texture is necessary for each viewpoint. Therefore, in
applications where the viewpoint is mobile, it would be necessary to dynamically generate a new sphere mapping for
each new viewpoint (or, to pre-generate a mapping for every viewpoint). Also, a texture mapped onto a sphere's
surface must be stretched and compressed, and warping and distortion (particularly along the edge of the sphere) are
a direct consequence of this. Although these image flaws can be reduced using certain tricks and techniques like
pre-stretching, this just adds another layer of complexity to sphere mapping.
Paraboloid mapping provides some improvement on the limitations of sphere mapping, however it requires two
rendering passes in addition to special image warping operations and more involved computation.
Conversely, cube mapping requires only a single render pass, and due to its simple nature, is very easy for
developers to comprehend and generate. Also, cube mapping uses the entire resolution of the texture image,
compared to sphere and paraboloid mappings, which also allows it to use lower resolution images to achieve the
same quality. Although handling the seams of the cube map is a problem, algorithms have been developed to handle
seam behavior and result in a seamless reflection.

Disadvantages
If a new object or new lighting is introduced into the scene, or if some object that is reflected in it is moving or changing
in some manner, then the reflection changes and the cube map must be re-rendered. When the cube map is affixed to
an object that moves through the scene then the cube map must also be re-rendered from that new position.

Applications
Stable Specular Highlights
Computer-aided design (CAD) programs use specular highlights as visual cues to convey a sense of surface
curvature when rendering 3D objects. However, many CAD programs exhibit problems in sampling specular
highlights because the specular lighting computations are only performed at the vertices of the mesh used to
represent the object, and interpolation is used to estimate lighting across the surface of the object. Problems occur
when the mesh vertices are not dense enough, resulting in insufficient sampling of the specular lighting. This in turn
results in highlights with brightness proportionate to the distance from mesh vertices, ultimately compromising the
visual cues that indicate curvature. Unfortunately, this problem cannot be solved simply by creating a denser mesh,
as this can greatly reduce the efficiency of object rendering.
Cube maps provide a fairly straightforward and efficient solution to rendering stable specular highlights. Multiple
specular highlights can be encoded into a cube map texture, which can then be accessed by interpolating across the
surface's reflection vector to supply coordinates. Relative to computing lighting at individual vertices, this method
provides cleaner results that more accurately represent curvature. Another advantage to this method is that it scales
well, as additional specular highlights can be encoded into the texture at no increase in the cost of rendering.
However, this approach is limited in that the light sources must be either distant or infinite lights, although
fortunately this is usually the case in CAD programs.[4]

Skyboxes
Perhaps the most trivial application of cube mapping is to create pre-rendered panoramic sky images which are then
rendered by the graphical engine as faces of a cube at practically infinite distance with the view point located in the
center of the cube. The perspective projection of the cube faces done by the graphics engine undoes the effects of
projecting the environment to create the cube map, so that the observer experiences an illusion of being surrounded
by the scene which was used to generate the skybox. This technique has found a widespread use in video games
since it allows designers to add complex (albeit not explorable) environments to a game at almost no performance
cost.


Skylight Illumination
Cube maps can be useful for modelling outdoor illumination accurately. Simply modelling sunlight as a single
infinite light oversimplifies outdoor illumination and results in unrealistic lighting. Although plenty of light does
come from the sun, the scattering of rays in the atmosphere causes the whole sky to act as a light source (often
referred to as skylight illumination). However, by using a cube map the diffuse contribution from skylight
illumination can be captured. Unlike environment maps where the reflection vector is used, this method accesses the
cube map based on the surface normal vector to provide a fast approximation of the diffuse illumination from the
skylight. The one downside to this method is that computing cube maps to properly represent a skylight is very
complex; one recent process is computing the spherical harmonic basis that best represents the low frequency diffuse
illumination from the cube map. However, a considerable amount of research has been done to effectively model
skylight illumination.

Dynamic Reflection
Basic environment mapping uses a static cube map - although the
object can be moved and distorted, the reflected environment stays
consistent. However, a cube map texture can be consistently updated to
represent a dynamically changing environment (for example, trees
swaying in the wind). A simple yet costly way to generate dynamic
reflections involves building the cube maps at runtime for every
frame. Although this is far less efficient than static mapping because of
additional rendering steps, it can still be performed at interactive rates.

Cube-mapped reflections in action

Unfortunately, this technique does not scale well when multiple reflective objects are present. A unique dynamic
environment map is usually required for each reflective object. Also, further complications are added if reflective
objects can reflect each other - dynamic cube maps can be recursively generated approximating the effects normally
generated using raytracing.

Global Illumination
An algorithm for global illumination computation at interactive rates using a cube-map data structure, was presented
at ICCVG 2002.[5]

Projection textures
Another application which found widespread use in video games, projective texture mapping relies on cube maps to
project images of an environment onto the surrounding scene; for example, a point light source is tied to a cube map
which is a panoramic image shot from inside a lantern cage or a window frame through which the light is filtering.
This enables game developers to achieve realistic lighting without having to complicate the scene geometry or
resort to expensive real-time shadow volume computations.

Related
A large set of free cube maps for experimentation: http://www.humus.name/index.php?page=Textures
Mark VandeWettering took M. C. Escher's famous self portrait Hand with Reflecting Sphere and reversed the
mapping to obtain these cube map images: left [6], right [7], up [8], down [9], back [10], front [11]. Here is a three.js
demo using these images (best viewed in a wide browser window; the page may need to be refreshed to view the demo):
http://mrdoob.github.io/three.js/examples/webgl_materials_cubemap_escher.html


References
[1] Fernando, R. & Kilgard M. J. (2003). The CG Tutorial: The Definitive Guide to Programmable Real-Time Graphics. (1st ed.).
Addison-Wesley Longman Publishing Co., Inc. Boston, MA, USA. Chapter 7: Environment Mapping Techniques
[2] Greene, N. 1986. Environment mapping and other applications of world projections. IEEE Comput. Graph. Appl. 6, 11 (Nov. 1986), 21-29.
(http:/ / dx. doi. org/ 10. 1109/ MCG. 1986. 276658)
[3] Nvidia, Jan 2000. Technical Brief: Perfect Reflections and Specular Lighting Effects With Cube Environment Mapping (http:/ / developer.
nvidia. com/ object/ Cube_Mapping_Paper. html)
[4] Nvidia, May 2004. Cube Map OpenGL Tutorial (http:/ / developer. nvidia. com/ object/ cube_map_ogl_tutorial. html)
[5] http:/ / citeseerx. ist. psu. edu/ viewdoc/ summary?doi=10. 1. 1. 95. 946
[6] http:/ / mrdoob. github. io/ three. js/ examples/ textures/ cube/ Escher/ px. jpg
[7] http:/ / mrdoob. github. io/ three. js/ examples/ textures/ cube/ Escher/ nx. jpg
[8] http:/ / mrdoob. github. io/ three. js/ examples/ textures/ cube/ Escher/ py. jpg
[9] http:/ / mrdoob. github. io/ three. js/ examples/ textures/ cube/ Escher/ ny. jpg
[10] http:/ / mrdoob. github. io/ three. js/ examples/ textures/ cube/ Escher/ pz. jpg
[11] http:/ / mrdoob. github. io/ three. js/ examples/ textures/ cube/ Escher/ nz. jpg

Diffuse reflection
Diffuse reflection is the reflection of light from a surface such that an
incident ray is reflected at many angles rather than at just one angle as
in the case of specular reflection. An illuminated ideal diffuse
reflecting surface will have equal luminance from all directions which
lie in the half-space adjacent to the surface (Lambertian reflectance).
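
In rendering terms, such an ideal diffuse surface is usually modelled with Lambertian reflectance; a minimal sketch, assuming unit-length normal and light-direction vectors and RGB triples for the albedo and light intensity:

def lambertian_diffuse(normal, light_dir, albedo, light_intensity):
    # Reflected light depends only on the cosine between normal and light direction.
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [albedo[i] * light_intensity[i] * ndotl for i in range(3)]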
A surface built from a non-absorbing powder such as plaster, or from
fibers such as paper, or from a polycrystalline material such as white
marble, reflects light diffusely with great efficiency. Many common
materials exhibit a mixture of specular and diffuse reflection.
The visibility of objects, excluding light-emitting ones, is primarily
caused by diffuse reflection of light: it is diffusely-scattered light that
forms the image of the object in the observer's eye.

Diffuse and specular reflection from a glossy surface. The rays represent luminous intensity, which varies according to Lambert's cosine law for an ideal diffuse reflector.


Mechanism
Diffuse reflection from solids is generally not due to
surface roughness. A flat surface is indeed required to
give specular reflection, but it does not prevent diffuse
reflection. A piece of highly polished white marble
remains white; no amount of polishing will turn it into
a mirror. Polishing produces some specular reflection,
but the remaining light continues to be diffusely
reflected.
The most general mechanism by which a surface gives
diffuse reflection does not involve exactly the surface:
most of the light is contributed by scattering centers
beneath the surface,[1][2] as illustrated in Figure 1 at
right. If one were to imagine that the figure represents
snow, and that the polygons are its (transparent) ice
crystallites, an impinging ray is partially reflected (a
few percent) by the first particle, enters in it, is again
reflected by the interface with the second particle,
enters in it, impinges on the third, and so on, generating
a series of "primary" scattered rays in random
directions, which, in turn, through the same
mechanism, generate a large number of "secondary"
scattered rays, which generate "tertiary" rays...[3] All
these rays walk through the snow crystallites, which do
not absorb light, until they arrive at the surface and exit
in random directions.[4] The result is that the light that
was sent out is returned in all directions, so that snow is
white despite being made of transparent material (ice
crystals).
For simplicity, "reflections" are spoken of here, but
more generally the interface between the small particles
that constitute many materials is irregular on a scale
comparable with light wavelength, so diffuse light is
generated at each interface, rather than a single
reflected ray, but the story can be told the same way.

Figure 1: General mechanism of diffuse reflection by a solid surface


(refraction phenomena not represented)

Figure 2: Diffuse reflection from an irregular surface

This mechanism is very general, because almost all common materials are made of "small things" held together.
Mineral materials are generally polycrystalline: one can describe them as made of a 3D mosaic of small, irregularly
shaped defective crystals. Organic materials are usually composed of fibers or cells, with their membranes and their
complex internal structure. And each interface, inhomogeneity or imperfection can deviate, reflect or scatter light,
reproducing the above mechanism.
Few materials don't follow it: among them are metals, which do not allow light to enter; gases, liquids, glass, and
transparent plastics (which have a liquid-like amorphous microscopic structure); single crystals, such as some gems
or a salt crystal; and some very special materials, such as the tissues which make the cornea and the lens of an eye.
These materials can reflect diffusely, however, if their surface is microscopically rough, as in frosted glass
(Figure 2), or, of course, if their homogeneous structure deteriorates, as in the eye lens.
A surface may also exhibit both specular and diffuse reflection, as is the case, for example, with glossy paints used
in home painting, which also give a fraction of specular reflection, while matte paints give almost exclusively diffuse
reflection.

Specular vs. diffuse reflection


Virtually all materials can give specular reflection, provided that their surface can be polished to eliminate
irregularities comparable with light wavelength (a fraction of micrometer). A few materials, like liquids and glasses,
lack the internal subdivisions which give the subsurface scattering mechanism described above, so they can be clear
and give only specular reflection (not great, however), while, among common materials, only polished metals can
reflect light specularly with great efficiency (the reflecting material of mirrors usually is aluminum or silver). All
other common materials, even when perfectly polished, usually give not more than a few percent specular reflection,
except in particular cases, such as grazing angle reflection by a lake, or the total reflection of a glass prism, or when
structured in certain complex configurations such as the silvery skin of many fish species or the reflective surface of
a dielectric mirror.
Diffuse reflection from white materials, instead, can be highly efficient in giving back all the light they receive, due
to the summing up of the many subsurface reflections.

Colored objects
Up to now white objects have been discussed, which do not absorb light. But the above scheme continues to be valid
in the case that the material is absorbent. In this case, diffused rays will lose some wavelengths during their walk in
the material, and will emerge colored.
Moreover, diffusion substantially affects the color of objects, because it determines the average path of light in
the material, and hence the extent to which the various wavelengths are absorbed.[5] Red ink looks black when it stays in
its bottle. Its vivid color is only perceived when it is placed on a scattering material (e.g. paper). This is so because
light's path through the paper fibers (and through the ink) is only a fraction of millimeter long. Light coming from
the bottle, instead, has crossed centimeters of ink, and has been heavily absorbed, even in its red wavelengths.
And, when a colored object has both diffuse and specular reflection, usually only the diffuse component is colored.
A cherry reflects diffusely red light, absorbs all other colors and has a specular reflection which is essentially white.
This is quite general, because, except for metals, the reflectivity of most materials depends on their refraction index,
which varies little with the wavelength (though it is this variation that causes the chromatic dispersion in a prism), so
that all colors are reflected nearly with the same intensity. Reflections from different origin, instead, may be colored:
metallic reflections, such as in gold or copper, or interferential reflections: iridescences, peacock feathers, butterfly
wings, beetle elytra, or the antireflection coating of a lens.

Importance for vision


Looking at one's surrounding environment, the vast majority of visible objects are seen primarily by diffuse
reflection from their surface. This holds with few exceptions, such as glass, reflective liquids, polished or smooth
metals, glossy objects, and objects that themselves emit light: the Sun, lamps, and computer screens (which,
however, emit diffuse light). Outdoors it is the same, with perhaps the exception of a transparent water stream or of
the iridescent colors of a beetle. Additionally, Rayleigh scattering is responsible for the blue color of the sky, and
Mie scattering for the white color of the water droplets of clouds.
Light scattered from the surfaces of objects is by far the primary light which humans visually observe.


Interreflection
Diffuse interreflection is a process whereby light reflected from an object strikes other objects in the surrounding
area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny
or specular. In real-life terms, this means that light is reflected off non-shiny surfaces such as the ground,
walls, or fabric, to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected
light is also colored, resulting in similar coloration of surrounding objects.
In 3D computer graphics, diffuse interreflection is an important component of global illumination. There are a
number of ways to model diffuse interreflection when rendering a scene. Radiosity and photon mapping are two
commonly used methods.

References
[1] P. Hanrahan and W. Krueger (1993), Reflection from layered surfaces due to subsurface scattering, in SIGGRAPH 93 Proceedings, J. T. Kajiya, Ed., vol. 27, pp. 165–174 (http://www.cs.berkeley.edu/~ravir/6998/papers/p165-hanrahan.pdf).
[2] H. W. Jensen et al. (2001), A practical model for subsurface light transport, in Proceedings of ACM SIGGRAPH 2001, pp. 511–518 (http://www.cs.berkeley.edu/~ravir/6998/papers/p511-jensen.pdf)
[3] Only primary and secondary rays are represented in the figure.
[4] Or, if the object is thin, it can exit from the opposite surface, giving diffuse transmitted light.
[5] Paul Kubelka, Franz Munk (1931), Ein Beitrag zur Optik der Farbanstriche, Zeits. f. Techn. Physik, 12, 593–601; see The Kubelka-Munk Theory of Reflectance (http://web.eng.fiu.edu/~godavart/BME-Optics/Kubelka-Munk-Theory.pdf)

Displacement mapping
Displacement mapping is an alternative computer graphics technique
in contrast to bump mapping, normal mapping, and parallax mapping,
using a (procedural) texture or height map to cause an effect where
the actual geometric positions of points over the textured surface are
displaced, often along the local surface normal, according to the value
the texture function evaluates to at each point on the surface. It gives
surfaces a great sense of depth and detail, permitting in particular
self-occlusion, self-shadowing and silhouettes; on the other hand, it is
the most costly of this class of techniques owing to the large amount of
additional geometry.
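
A minimal sketch of the per-vertex form of this idea (the variant used by renderers that displace an already tessellated mesh, as discussed below); height is any texture lookup or procedural function, and all names are illustrative:

def displace(vertices, normals, uvs, height, scale=1.0):
    # Move each vertex along its normal by the sampled height value.
    out = []
    for p, n, (u, v) in zip(vertices, normals, uvs):
        h = scale * height(u, v)
        out.append((p[0] + h * n[0], p[1] + h * n[1], p[2] + h * n[2]))
    return out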
For years, displacement mapping was a peculiarity of high-end
rendering systems like PhotoRealistic RenderMan, while realtime
APIs, like OpenGL and DirectX, were only starting to use this feature.
One of the reasons for this is that the original implementation of
displacement mapping required an adaptive tessellation of the surface
in order to obtain enough micropolygons whose size matched the size
of a pixel on the screen.


Meaning of the term in different contexts


Displacement mapping includes the term mapping which refers to a texture map being used to modulate the
displacement strength. The displacement direction is usually the local surface normal. Today, many renderers allow
programmable shading which can create high quality (multidimensional) procedural textures and patterns at arbitrary
high frequencies. The use of the term mapping becomes arguable then, as no texture map is involved anymore.
Therefore, the broader term displacement is often used today to refer to a super concept that also includes
displacement based on a texture map.
Renderers using the REYES algorithm, or similar approaches based on micropolygons, have allowed displacement
mapping at arbitrary high frequencies since they became available almost 20 years ago.
The first commercially available renderer to implement a micropolygon displacement mapping approach through
REYES was Pixar's PhotoRealistic RenderMan. Micropolygon renderers commonly tessellate geometry themselves
at a granularity suitable for the image being rendered. That is: the modeling application delivers high-level primitives
to the renderer. Examples include true NURBS- or subdivision surfaces. The renderer then tessellates this geometry
into micropolygons at render time using view-based constraints derived from the image being rendered.
Other renderers that require the modeling application to deliver objects pre-tessellated into arbitrary polygons or
even triangles have defined the term displacement mapping as moving the vertices of these polygons. Often the
displacement direction is also limited to the surface normal at the vertex. While conceptually similar, those polygons
are usually a lot larger than micropolygons. The quality achieved from this approach is thus limited by the
geometry's tessellation density a long time before the renderer gets access to it.
This difference between displacement mapping in micropolygon renderers vs. displacement mapping in a
non-tessellating (macro)polygon renderers can often lead to confusion in conversations between people whose
exposure to each technology or implementation is limited. Even more so, as in recent years, many non-micropolygon
renderers have added the ability to do displacement mapping of a quality similar to that which a micropolygon
renderer is able to deliver naturally. To distinguish between the crude pre-tessellation-based displacement these
renderers did before, the term sub-pixel displacement was introduced to describe this feature.[citation needed]
Sub-pixel displacement commonly refers to finer re-tessellation of geometry that was already tessellated into
polygons. This re-tessellation results in micropolygons or often microtriangles. The vertices of these then get moved
along their normals to achieve the displacement mapping.
True micropolygon renderers have always been able to do what sub-pixel-displacement achieved only recently, but
at a higher quality and in arbitrary displacement directions.
Recent developments seem to indicate that some of the renderers that use sub-pixel displacement move towards
supporting higher level geometry too. As the vendors of these renderers are likely to keep using the term sub-pixel
displacement, this will probably lead to more obfuscation of what displacement mapping really stands for, in 3D
computer graphics.
In reference to Microsoft's proprietary High Level Shader Language, displacement mapping can be interpreted as a
kind of "vertex-texture mapping" where the values of the texture map do not alter pixel colors (as is much more
common), but instead change the position of vertices. Unlike bump, normal and parallax mapping, all of which can
be said to "fake" the behavior of displacement mapping, in this way a genuinely rough surface can be produced from
a texture. It has to be used in conjunction with adaptive tessellation techniques (that increases the number of
rendered polygons according to current viewing settings) to produce highly detailed meshes.
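The vertex-displacement interpretation described above can be made concrete with a small sketch: each vertex of an already tessellated mesh is moved along its normal by a height read from a grayscale texture. The Vertex and HeightMap types, the nearest-neighbour sample() lookup and the scale parameter are illustrative assumptions rather than any particular renderer's API; a minimal C++ sketch, assuming normalized normals and texture coordinates in [0,1]:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3 position;
    Vec3 normal;   // assumed normalized
    float u, v;    // texture coordinates in [0,1]
};

// Hypothetical grayscale height texture with a clamped nearest-neighbour lookup.
struct HeightMap {
    int width, height;
    std::vector<float> texels;  // row-major, values in [0,1]

    float sample(float u, float v) const {
        int x = std::clamp(static_cast<int>(u * (width - 1) + 0.5f), 0, width - 1);
        int y = std::clamp(static_cast<int>(v * (height - 1) + 0.5f), 0, height - 1);
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Displace each vertex along its normal by the sampled height times a user scale.
// This is the "move the vertices of pre-tessellated polygons" style of displacement
// mapping discussed above; quality depends entirely on how densely the mesh was
// tessellated before this function runs.
void displaceAlongNormals(std::vector<Vertex>& mesh, const HeightMap& map, float scale) {
    for (Vertex& vtx : mesh) {
        float h = map.sample(vtx.u, vtx.v);
        vtx.position.x += vtx.normal.x * h * scale;
        vtx.position.y += vtx.normal.y * h * scale;
        vtx.position.z += vtx.normal.z * h * scale;
    }
}
```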


Further reading

Blender Displacement Mapping [1]
Relief Texture Mapping [2] website
Real-Time Relief Mapping on Arbitrary Polygonal Surfaces [3] paper
Relief Mapping of Non-Height-Field Surface Details [4] paper
Steep Parallax Mapping [5] website
State of the art of displacement mapping on the GPU [6] paper

References
[1] http://wiki.blender.org/index.php/Manual/Displacement_Maps
[2] http://www.inf.ufrgs.br/%7Eoliveira/RTM.html
[3] http://www.inf.ufrgs.br/%7Eoliveira/pubs_files/Policarpo_Oliveira_Comba_RTRM_I3D_2005.pdf
[4] http://www.inf.ufrgs.br/%7Eoliveira/pubs_files/Policarpo_Oliveira_RTM_multilayer_I3D2006.pdf
[5] http://graphics.cs.brown.edu/games/SteepParallax/index.html
[6] http://www.iit.bme.hu/~szirmay/egdisfinal3.pdf

Doo-Sabin subdivision surface


In computer graphics, a Doo-Sabin subdivision surface is a type of
subdivision surface based on a generalization of bi-quadratic uniform
B-splines. It was developed in 1978 by Daniel Doo and Malcolm
Sabin.[1][2]
This process generates one new face at each original vertex, n new
faces along each original edge, and n x n new faces at each original
face. A primary characteristic of the Doo-Sabin subdivision method is
the creation of four faces around every vertex. A drawback is that the
faces created at the vertices are not necessarily coplanar.

Evaluation

Simple Doo-Sabin subdivision surface. The figure shows the limit surface, as well as the control point wireframe mesh.

Doo-Sabin surfaces are defined recursively. Each refinement iteration replaces the current mesh with a smoother,
more refined mesh, following the procedure described by Doo and Sabin. After many iterations, the surface
gradually converges to a smooth limit surface. The figure below shows the effect of two refinement iterations on a
T-shaped quadrilateral mesh.
Just as for Catmull-Clark surfaces, Doo-Sabin limit surfaces can also be evaluated directly, without any recursive
refinement, by means of the technique of Jos Stam.[3] The solution is, however, not as computationally efficient as
for Catmull-Clark surfaces, because the Doo-Sabin subdivision matrices are not in general diagonalizable.
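As an illustration of a single refinement step, the hedged sketch below computes the new points that Doo-Sabin subdivision generates inside one quadrilateral face: each new point is the average of an original vertex, the midpoints of the two face edges meeting at it, and the face centroid (equivalent to the 9/16, 3/16, 3/16, 1/16 weights for quads). Stitching the per-face points into the new vertex and edge faces is omitted, and the function name is illustrative.

```cpp
#include <array>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// One Doo-Sabin refinement step restricted to a single quad face:
// returns the four new points that replace (shrink) this face.
std::array<Vec3, 4> dooSabinQuadPoints(const std::array<Vec3, 4>& q) {
    Vec3 centroid = (q[0] + q[1] + q[2] + q[3]) * 0.25f;
    std::array<Vec3, 4> out;
    for (int i = 0; i < 4; ++i) {
        Vec3 mPrev = (q[i] + q[(i + 3) % 4]) * 0.5f;  // midpoint of the previous edge of the face
        Vec3 mNext = (q[i] + q[(i + 1) % 4]) * 0.5f;  // midpoint of the next edge of the face
        out[i] = (q[i] + mPrev + mNext + centroid) * 0.25f;
    }
    return out;
}
```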


References
[1] D. Doo: A subdivision algorithm for smoothing down irregularly shaped polyhedrons, Proceedings on Interactive Techniques in Computer Aided Design, pp. 157-165, 1978 (pdf (http://trac2.assembla.com/DooSabinSurfaces/export/12/trunk/docs/Doo 1978 Subdivision algorithm.pdf))
[2] D. Doo and M. Sabin: Behavior of recursive division surfaces near extraordinary points, Computer-Aided Design, 10 (6) 356-360 (1978), (doi (http://dx.doi.org/10.1016/0010-4485(78)90111-2), pdf (http://www.cs.caltech.edu/~cs175/cs175-02/resources/DS.pdf))
[3] Jos Stam, Exact Evaluation of Catmull-Clark Subdivision Surfaces at Arbitrary Parameter Values, Proceedings of SIGGRAPH '98. In Computer Graphics Proceedings, ACM SIGGRAPH, 1998, 395-404 (pdf (http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/sig98.pdf), downloadable eigenstructures (http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html))

External links
Doo-Sabin surfaces (http://graphics.cs.ucdavis.edu/education/CAGDNotes/Doo-Sabin/Doo-Sabin.html)

Edge loop
An edge loop, in computer graphics, can loosely be defined as a set of connected edges across a surface. Usually the
last edge meets again with the first edge, thus forming a loop. The set or string of edges can for example be the outer
edges of a flat surface or the edges surrounding a 'hole' in a surface.
In a stricter sense, an edge loop is defined as a set of edges where the loop follows the middle edge at every
'four-way junction'.[1] The loop ends when it encounters another type of junction (a three- or five-way junction, for
example). Consider an edge on a mesh surface: if one of its endpoints connects to three other edges, that endpoint is
a four-way junction. Following the middle 'road' each time either closes the loop or ends it at another type of
junction. A traversal following this rule is sketched below.
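The "follow the middle edge at every four-way junction" rule can be sketched as a traversal over a connectivity structure that stores, for each vertex, its incident edges in cyclic order; the Mesh layout below is a hypothetical stand-in, not a standard data structure. At a valence-4 vertex the loop continues with the edge two positions away from the incoming edge; at any other valence it stops.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical connectivity: for each vertex, the indices of its incident edges
// listed in cyclic order around the vertex; for each edge, its two endpoints.
struct Mesh {
    std::vector<std::vector<int>> vertexEdges;   // vertexEdges[v] = edges around v, in order
    std::vector<std::pair<int, int>> edgeVerts;  // edgeVerts[e] = {v0, v1}
};

// Follow an edge loop starting from 'startEdge', walking away from 'fromVertex'.
// Returns the edges visited; stops at a non-four-way junction or when the loop closes.
// Assumes the connectivity data is consistent.
std::vector<int> traceEdgeLoop(const Mesh& m, int startEdge, int fromVertex) {
    std::vector<int> loop;
    int edge = startEdge, vertex = fromVertex;
    do {
        loop.push_back(edge);
        // Step to the far endpoint of the current edge.
        auto [a, b] = m.edgeVerts[edge];
        vertex = (a == vertex) ? b : a;
        const auto& around = m.vertexEdges[vertex];
        if (around.size() != 4) break;            // loop ends at a 3- or 5-way junction
        // Find the incoming edge and take the "middle" (opposite) edge in the fan.
        std::size_t i = 0;
        while (around[i] != edge) ++i;
        edge = around[(i + 2) % 4];
    } while (edge != startEdge);
    return loop;
}
```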
Edge loops are especially practical in organic models which need to be animated. In organic modeling edge loops
play a vital role in proper deformation of the mesh.[2] A properly modeled mesh will take into careful consideration
the placement and termination of these edge loops. Generally edge loops follow the structure and contour of the
muscles that they mimic. For example, in modeling a human face edge loops should follow the orbicularis oculi
muscle around the eyes and the orbicularis oris muscle around the mouth. The hope is that by mimicking the way the
muscles are formed, the edge loops also aid in the way the mesh deforms as the muscles contract and expand. An
edge loop closely mimics how real muscles work and, if built correctly, will give the modeler control over contour
and silhouette in any position.
An important part in developing proper edge loops is by understanding poles.[3] The E(5) Pole and the N(3) Pole are
the two most important poles in developing both proper edge loops and a clean topology on your model. The E(5)
Pole is derived from an extruded face. When this face is extruded, four 4-sided polygons are formed in addition to
the original face. Each lower corner of these four polygons forms a five-way junction. Each one of these five-way
junctions is an E-pole. An N(3) Pole is formed when 3 edges meet at one point creating a three-way junction. The
N(3) Pole is important in that it redirects the direction of an edge loop.

References
[1] Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society
[2] Modeling With Edge Loops (http://zoomy.net/2008/04/02/modeling-with-edge-loops/), Zoomy.net
[3] "The pole" (http://www.subdivisionmodeling.com/forums/showthread.php?t=907), SubdivisionModeling.com

External links
Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society


Euler operator
In mathematics, Euler operators may refer to:
Euler-Lagrange differential operator d/dx, see Lagrangian system
Cauchy-Euler operators, e.g. x·d/dx
quantum white noise conservation operator, or QWN-Euler operator

Euler operators (Euler operations)


In solid modeling and computer-aided design, the Euler operators modify the graph of connections to add or remove
details of a mesh while preserving its topology. They are named by Baumgart[1] after the Euler-Poincaré
characteristic. He chose a set of operators sufficient to create useful meshes; some lose information and so are not
invertible.
The boundary representation for a solid object, its surface, is a polygon mesh of vertices, edges and faces. Its
topology is captured by the graph of the connections between faces. A given mesh may actually contain multiple
unconnected shells (or bodies); each body may be partitioned into multiple connected components, each defined by
their edge loop boundary. To represent a hollow object, the inside and outside surfaces are separate shells.
Let the number of vertices be V, edges E, faces F, components H, shells S, and let the genus be G (S and G
correspond to the b0 and b2 Betti numbers respectively). Then, to denote a meaningful geometric object, the mesh
must satisfy the generalized Euler-Poincaré formula
V - E + F = H + 2 * (S - G)
The Euler operators preserve this characteristic. The Eastman paper lists the following basic operators:

MBFLV: Make Body-Face-Loop-Vertex
MEV: Make Edge-Vertex
MEFL: Make Edge-Face-Loop
MEKL: Make Edge, Kill Loop
KFLEVB: Kill Faces-Loops-Edges-Vertices-Body
KFLEVMG: Kill Faces-Loops-Edges-Vertices, Make Genus
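The formula above can be checked mechanically for a given set of counts. The minimal sketch below (the names and the MeshCounts grouping are illustrative) verifies the invariant for a cube and for a quadrilateral torus; it only validates counts and does not implement the operators themselves.

```cpp
#include <cassert>

// Counts describing a boundary-representation mesh, as in the formula
//   V - E + F = H + 2 * (S - G)
struct MeshCounts {
    int V;  // vertices
    int E;  // edges
    int F;  // faces
    int H;  // components / interior loops (per the article's formulation)
    int S;  // shells (separate surface components)
    int G;  // genus (handles)
};

bool satisfiesEulerPoincare(const MeshCounts& m) {
    return m.V - m.E + m.F == m.H + 2 * (m.S - m.G);
}

int main() {
    MeshCounts cube{8, 12, 6, 0, 1, 0};    // solid cube: 8 - 12 + 6 = 0 + 2 * (1 - 0)
    assert(satisfiesEulerPoincare(cube));

    MeshCounts torus{16, 32, 16, 0, 1, 1}; // quad torus: 16 - 32 + 16 = 0 + 2 * (1 - 1)
    assert(satisfiesEulerPoincare(torus));
    return 0;
}
```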

Geometry
Euler operators modify the mesh's graph, creating or removing faces, edges and vertices according to simple rules
while preserving the overall topology, thus maintaining a valid boundary (i.e. not introducing holes). The operators
themselves do not define how geometric or graphical attributes (e.g. position, gradient, or uv texture coordinates)
map to the new graph; this depends on the particular implementation.

References
[1] Baumgart, B.G., "Winged edge polyhedron representation", Stanford Artificial Intelligence Report No. CS-320, October, 1972.

(see also Winged edge#External links)


Eastman, Charles M. and Weiler, Kevin J., "Geometric modeling using the Euler operators" (1979). Computer
Science Department. Paper 1587. http://repository.cmu.edu/compsci/1587. Unfortunately this typo-ridden (OCR'd?) paper can be quite hard to read.
Easier-to-read reference (http://solidmodel.me.ntu.edu.tw/lessoninfo/file/Chapter03.pdf), from a
solid-modelling course at NTU.
Another reference (http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/model/euler-op.html) that
uses a slightly different definition of terms.
Sven Havemann, Generative Mesh Modeling (http://www.eg.org/EG/DL/dissonline/doc/havemann.pdf),
PhD thesis, Braunschweig University, Germany, 2005.
Martti Mäntylä, An Introduction to Solid Modeling, Computer Science Press, Rockville MD, 1988. ISBN
0-88175-108-1.

False radiosity
False Radiosity is a 3D computer graphics technique used to create texture mapping for objects that emulates patch
interaction algorithms in radiosity rendering. Though practiced in some form since the late 1990s, the term was coined
only around 2002 by architect Andrew Hartness, then head of 3D and real-time design at Ateliers Jean Nouvel.
During the period of nascent commercial enthusiasm for radiosity-enhanced imagery, but prior to the
democratization of powerful computational hardware, architects and graphic artists experimented with time-saving
3D rendering techniques. By darkening areas of texture maps corresponding to corners, joints and recesses, and
applying maps via self-illumination or diffuse mapping in a 3D program, a radiosity-like effect of patch interaction
could be created with a standard scan-line renderer. Successful emulation of radiosity required a theoretical
understanding and graphic application of patch view factors, path tracing and global illumination algorithms. Texture
maps were usually produced with image editing software, such as Adobe Photoshop. The advantage of this method is
decreased rendering time and easily modifiable overall lighting strategies.
Another common approach similar to false radiosity is the manual placement of standard omni-type lights with
limited attenuation in places in the 3D scene where the artist would expect radiosity reflections to occur. This
method uses many lights and can require an advanced light-grouping system, depending on what assigned
materials/objects are illuminated, how many surfaces require false radiosity treatment, and to what extent it is
anticipated that lighting strategies be set up for frequent changes.

References
Autodesk interview with Hartness about False Radiosity and real-time design [1]
[1] http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=5549510&linkID=10371177


Fragment
In computer graphics, a fragment is the data necessary to generate a single pixel's worth of a drawing primitive in
the frame buffer.
This data may include, but is not limited to:

raster position
depth
interpolated attributes (color, texture coordinates, etc.)
stencil
alpha
window ID

As a scene is drawn, drawing primitives (the basic elements of graphics output, such as points, lines, circles, text
etc.[1]) are rasterized into fragments which are textured and combined with the existing frame buffer. How a fragment is
combined with the data already in the frame buffer depends on various settings. In a typical case, a fragment may be
discarded if it is farther away than the pixel that is already at that location (according to the depth buffer). If it is
nearer than the existing pixel, it may replace what is already there, or, if alpha blending is in use, the pixel's color
may be replaced with a mixture of the fragment's color and the pixel's existing color, as in the case of drawing a
translucent object.
In general, a fragment can be thought of as the data needed to shade the pixel, plus the data needed to test whether
the fragment survives to become a pixel (depth, alpha, stencil, scissor, window ID, etc.)
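In code, a fragment is essentially a small record carrying these fields; a minimal sketch of such a record, with illustrative field names not tied to any particular API, might look like this:

```cpp
#include <cstdint>

// Per-fragment data produced by rasterization, before the frame-buffer tests.
// Field names are illustrative; real pipelines pack this data differently.
struct Fragment {
    int x, y;               // raster (window) position
    float depth;            // depth value used for the z-test
    float color[4];         // interpolated RGBA colour
    float texCoord[2];      // interpolated texture coordinates
    std::uint8_t stencil;   // stencil data
    float alpha;            // coverage / blending alpha
    std::uint32_t windowId; // window ID
};
```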

References
[1] The Drawing Primitives by Janne Saarela (http://baikalweb.jinr.ru/doc/cern_doc/asdoc/gks_html3/node28.html)


Geometry pipelines
Geometric manipulation of modeling primitives, such as that performed by a geometry pipeline, is the first stage in
computer graphics systems which perform image generation based on geometric models. While Geometry Pipelines
were originally implemented in software, they have become highly amenable to hardware implementation,
particularly since the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry
Engine developed by Jim Clark and Marc Hannah at Stanford University in about 1981 was the watershed for what
has since become an increasingly commoditized function in contemporary image-synthetic raster display systems.
Geometric transformations are applied to the vertices of polygons, or other geometric objects used as modelling
primitives, as part of the first stage in a classical geometry-based graphic image rendering pipeline. Geometric
computations may also be applied to transform polygon or patch surface normals, and then to perform the lighting
and shading computations used in their subsequent rendering.
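The per-vertex transformation stage described above amounts to multiplying each vertex, expressed in homogeneous coordinates, by a 4x4 matrix. The sketch below shows just that operation in isolation; the row-major matrix layout and the function names are illustrative choices, not any particular geometry engine's interface.

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major 4x4 matrix
struct Vec4 { float x, y, z, w; };

// Apply one geometric transformation to a single homogeneous vertex.
Vec4 transform(const Mat4& m, const Vec4& v) {
    return {
        m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z + m[0][3] * v.w,
        m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z + m[1][3] * v.w,
        m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z + m[2][3] * v.w,
        m[3][0] * v.x + m[3][1] * v.y + m[3][2] * v.z + m[3][3] * v.w,
    };
}

// The geometry stage applies the same transformation to every vertex of a model.
void transformVertices(const Mat4& modelViewProjection, std::vector<Vec4>& vertices) {
    for (Vec4& v : vertices)
        v = transform(modelViewProjection, v);
}
```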

History
Hardware implementations of the geometry pipeline were introduced in the early Evans & Sutherland Picture
System, but perhaps received broader recognition when later applied in the broad range of graphics systems products
introduced by Silicon Graphics (SGI). Initially the SGI geometry hardware performed simple model space to screen
space viewing transformations with all the lighting and shading handled by a separate hardware implementation
stage, but in later, much higher performance applications such as the RealityEngine, they began to be applied to
perform part of the rendering support as well.
More recently, perhaps dating from the late 1990s, the hardware support required to perform the manipulation and
rendering of quite complex scenes has become accessible to the consumer market. Companies such as Nvidia and
AMD Graphics (formerly ATI) are two current leading representatives of hardware vendors in this space. The
GeForce line of graphics cards from Nvidia was the first to support full OpenGL and Direct3D hardware geometry
processing in the consumer PC market, while some earlier products such as Rendition Verite incorporated hardware
geometry processing through proprietary programming interfaces. On the whole, earlier graphics accelerators by
3Dfx, Matrox and others relied on the CPU for geometry processing.
This subject matter is part of the technical foundation for modern computer graphics, and is a comprehensive topic
taught at both the undergraduate and graduate levels as part of a computer science education.



Geometry processing
Geometry processing, or mesh processing, is a fast-growing[citation needed] area of research that uses concepts from
applied mathematics, computer science and engineering to design efficient algorithms for the acquisition,
reconstruction, analysis, manipulation, simulation and transmission of complex 3D models.
Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment
and classical computer-aided design, to biomedical computing, reverse engineering and scientific computing.[citation
needed]

External links
Siggraph 2001 Course on Digital Geometry Processing (http://www.multires.caltech.edu/pubs/DGPCourse/),
by Peter Schroder and Wim Sweldens
Symposium on Geometry Processing (http://www.geometryprocessing.org/)
Multi-Res Modeling Group (http://www.multires.caltech.edu/), Caltech
Mathematical Geometry Processing Group (http://geom.mi.fu-berlin.de/index.html), Free University of
Berlin
Computer Graphics Group (http://www.graphics.rwth-aachen.de), RWTH Aachen University
Polygonal Mesh Processing Book (http://www.pmp-book.org/)

Global illumination

Rendering without global illumination. Areas that lie outside of the ceiling lamp's direct light lack definition. For
example, the lamp's housing appears completely uniform. Without the ambient light added into the render, it would
appear uniformly black.


Rendering with global illumination. Light is reflected by surfaces, and colored light transfers from one surface to
another. Notice how color from the red wall and green wall (not visible) reflects onto other surfaces in the scene.
Also notable is the caustic projected onto the red wall from light passing through the glass sphere.
Global illumination is a general name for a group of algorithms used in 3D computer graphics that are meant to add
more realistic lighting to 3D scenes. Such algorithms take into account not only the light which comes directly from
a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected
by other surfaces in the scene, whether reflective or not (indirect illumination).
Theoretically reflections, refractions, and shadows are all examples of global illumination, because when simulating
them, one object affects the rendering of another object (as opposed to an object being affected only by a direct
light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination.
Images rendered using global illumination algorithms often appear more photorealistic than images rendered using
only direct illumination algorithms. However, such images are computationally more expensive and consequently
much slower to generate. One common approach is to compute the global illumination of a scene and store that
information with the geometry, e.g., radiosity. That stored data can then be used to generate images from different
viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations
repeatedly.
Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient occlusion, photon
mapping, and image based lighting are examples of algorithms used in global illumination, some of which may be
used together to yield results that are not fast, but accurate.
These algorithms model diffuse inter-reflection, which is a very important part of global illumination; however, most
of them (excluding radiosity) also model specular reflection, which makes them more accurate in solving the lighting
equation and providing a more realistically illuminated scene.
The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to
heat transfer simulations performed using finite-element methods in engineering design.
In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by
an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software
packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination
method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect.
Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However,
used properly, ambient lighting can be an efficient way to make up for a lack of processing power.
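A minimal sketch of the "ambient term" approximation just described, in which a constant colour stands in for all indirect light and is simply added to a single direct (here Lambertian) contribution; the names and the one-light setup are illustrative, not a physically based global-illumination method.

```cpp
#include <algorithm>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Direct diffuse lighting plus a constant ambient term standing in for all
// indirect light. This is the cheap approximation discussed above.
Color shade(const Color& albedo, const Vec3& normal, const Vec3& toLight,
            const Color& lightColor, const Color& ambient) {
    float ndotl = std::max(0.0f, dot(normal, toLight));  // normal and toLight assumed normalized
    return {
        albedo.r * (ambient.r + lightColor.r * ndotl),
        albedo.g * (ambient.g + lightColor.g * ndotl),
        albedo.b * (ambient.b + lightColor.b * ndotl),
    };
}
```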


Procedure
More and more specialized algorithms are used in 3D programs that can effectively simulate global illumination.
These algorithms are numerical approximations of the rendering equation. Well-known algorithms for computing
global illumination include path tracing, photon mapping and radiosity. The following approaches can be
distinguished here:
Inversion: not applied in practice
Expansion: bi-directional approach: photon mapping + distributed ray tracing, bi-directional path tracing, Metropolis light transport
Iteration: radiosity
In light path notation, global illumination corresponds to paths of the type L(D|S)*E.
A full treatment can be found in

Image-based lighting
Another way to simulate real global illumination is the use of High dynamic range images (HDRIs), also known as
environment maps, which encircle and illuminate the scene. This process is known as image-based lighting.

List of methods
Ray tracing: Several enhanced variants exist for solving problems related to sampling, aliasing and soft shadows: Distributed ray tracing, Cone tracing, Beam tracing.
Path tracing: Unbiased. Variant: Bi-directional Path Tracing.
Photon mapping: Consistent, biased; enhanced variants: Progressive Photon Mapping, Stochastic Progressive Photon Mapping (unbiased variant[1]).
Lightcuts: Enhanced variants: Multidimensional Lightcuts, Bidirectional Lightcuts.
Point Based Global Illumination: Extensively used in movie animations.[2][3]
Radiosity: Finite element method, very good for precomputations.
Metropolis light transport: Builds upon bi-directional path tracing; unbiased.
Spherical harmonic lighting: Encodes global illumination results for real-time rendering of static scenes.
Ambient occlusion: Not a physically correct method, but gives good results in general. Good for precomputation.


References
[1] http://www.luxrender.net/wiki/SPPM
[2] http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/paper.pdf
[3] http://www.karstendaemen.com/thesis/files/intro_pbgi.pdf

External links
SSRT (http://www.nirenstein.com/e107/page.php?11) - C++ source code for a Monte Carlo path tracer (supporting GI), written with ease of understanding in mind.
Video demonstrating global illumination and the ambient color effect (http://www.archive.org/details/MarcC_AoI-Global_Illumination)
Real-time GI demos (http://realtimeradiosity.com/demos) - survey of practical real-time GI techniques as a list of executable demos
kuleuven (http://www.cs.kuleuven.be/~phil/GI/) - This page contains the Global Illumination Compendium, an effort to bring together most of the useful formulas and equations for global illumination algorithms in computer graphics.
GI Tutorial (http://www.youtube.com/watch?v=K5a-FqHz3o0) - Video tutorial on faking global illumination within 3D Studio Max by Jason Donati

Gouraud shading
Gouraud shading, named after Henri
Gouraud, is an interpolation method used in
computer graphics to produce continuous
shading of surfaces represented by polygon
meshes. In practice, Gouraud shading is
most often used to achieve continuous
lighting on triangle surfaces by computing
the lighting at the corners of each triangle
and linearly interpolating the resulting
colours for each pixel covered by the
triangle. Gouraud first published the
technique in 1971.

Gouraud-shaded triangle mesh using the Phong reflection model

Description
Gouraud shading works as follows: An estimate to the surface normal of each vertex in a polygonal 3D model is
either specified for each vertex or found by averaging the surface normals of the polygons that meet at each vertex.
Using these estimates, lighting computations based on a reflection model, e.g. the Phong reflection model, are then
performed to produce colour intensities at the vertices. For each screen pixel that is covered by the polygonal mesh,
colour intensities can then be interpolated from the colour values calculated at the vertices.
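A minimal sketch of the procedure just described: a simple diffuse lighting model is evaluated once per triangle vertex, and the resulting colours are blended across the triangle with barycentric weights. The Lambertian lightVertex function is a simplified stand-in for whichever reflection model (e.g. Phong) a real renderer would evaluate at the vertices.

```cpp
#include <algorithm>
#include <array>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Per-vertex lighting: a plain Lambertian term, evaluated only at the vertices.
Color lightVertex(const Vec3& normal, const Vec3& toLight, const Color& albedo) {
    float ndotl = std::max(0.0f, dot(normal, toLight));  // inputs assumed normalized
    return {albedo.r * ndotl, albedo.g * ndotl, albedo.b * ndotl};
}

// Gouraud interpolation: blend the three vertex colours with the pixel's
// barycentric coordinates (b0 + b1 + b2 == 1) instead of re-lighting the pixel.
Color shadePixel(const std::array<Color, 3>& vertexColor, float b0, float b1, float b2) {
    return {
        b0 * vertexColor[0].r + b1 * vertexColor[1].r + b2 * vertexColor[2].r,
        b0 * vertexColor[0].g + b1 * vertexColor[1].g + b2 * vertexColor[2].g,
        b0 * vertexColor[0].b + b1 * vertexColor[1].b + b2 * vertexColor[2].b,
    };
}
```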


Comparison with other shading techniques


Gouraud shading is considered superior to flat shading and requires significantly less processing than Phong
shading, but usually results in a faceted look.

Comparison of flat shading and Gouraud shading.

In comparison to Phong shading, Gouraud shading's strength and weakness lies in its interpolation. If a mesh covers
more pixels in screen space than it has vertices, interpolating colour values from samples of expensive lighting
calculations at vertices is less processor intensive than performing the lighting calculation for each pixel as in
Phong shading. However, highly localized lighting effects (such as specular highlights, e.g. the glint of reflected
light on the surface of an apple) will not be rendered correctly, and if a highlight lies in the middle of a polygon, but
does not spread to the polygon's vertex, it will not be apparent in a Gouraud rendering; conversely, if a highlight
occurs at the vertex of a polygon, it will be rendered correctly at this vertex (as this is where the lighting model is
applied), but will be spread unnaturally across all neighboring polygons via the interpolation method.
The problem is easily spotted in a rendering which ought to have a specular highlight moving smoothly across the
surface of a model as it rotates. Gouraud shading will instead produce a highlight continuously fading in and out
across neighboring portions of the model, peaking in intensity when the intended specular highlight passes over a
vertex of the model. For clarity, note that the problem just described can be improved by increasing the density of
vertices in the object (or perhaps increasing them just near the problem area), but of course, this solution applies to
any shading paradigm whatsoever - indeed, with an "incredibly large" number of vertices there would never be any
need at all for shading concepts.

Gouraud-shaded sphere - note the poor behaviour of the specular highlight.
The same sphere rendered with a very high polygon count.



Graphics pipeline
Graphics pipeline or rendering pipeline refers to the sequence of steps used to create a 2D raster representation of
a 3D scene. Plainly speaking, once a 3D model has been created, for instance in a video game or any other 3D
computer animation, the graphics pipeline is the process of turning that 3D model into what the computer displays.
In the early history of 3D computer graphics, fixed-purpose hardware was used to speed up the steps of the pipeline.
As it evolved, the hardware became more general purpose, allowing the same hardware to perform not only the
different steps of the pipeline but even limited forms of general-purpose computing. The graphics pipelines
themselves, such as the OpenGL and DirectX pipelines, evolved along with the hardware, but the general concept of
the pipeline remains the same.

Concept
The 3D pipeline usually refers to the most common form of computer 3D rendering: 3D polygon rendering, as
distinct from ray tracing and ray casting. In particular, 3D polygon rendering is similar to ray casting. In ray casting,
a ray originates at the point where the camera resides; if that ray hits a surface, the color and lighting of the point on
the surface where the ray hit is calculated. In 3D polygon rendering the reverse happens: the area that is in view of
the camera is calculated, and then rays are created from every part of every surface in view of the camera and traced
back to the camera.[1]

Stages of the graphics pipeline


3D geometric primitives
First, the scene is created out of geometric primitives. Traditionally this is done using triangles, which are
particularly well suited to this as they always exist on a single plane.

Modeling and transformation


Transform from the local coordinate system to the 3d world coordinate system. A model of a teapot in abstract is
placed in the coordinate system of the 3d world.

Camera transformation
Transform the 3d world coordinate system into the 3d camera coordinate system, with the camera as the origin.

Lighting
Illuminate according to lighting and reflectance. If the teapot is a brilliant white color, but in a totally black room,
then the camera sees it as black. In this step the effect of lighting and reflections are calculated.

Projection transformation
Transform the 3D world coordinates into the 2D view of the camera; for instance, the object the camera is centered
on would be in the center of the 2D view of the camera. In the case of a perspective projection, objects which are
distant from the camera are made smaller. This is achieved by dividing the X and Y coordinates of each vertex of
each primitive by its Z coordinate (which represents its distance from the camera). In an orthographic projection,
objects retain their original size regardless of distance from the camera.
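The divide-by-Z step can be written out directly. The sketch below maps a camera-space point onto the image plane under a simple pinhole model, with an orthographic version for comparison; the focalLength parameter and the assumption that the camera sits at the origin looking down +Z are illustrative.

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Perspective projection: points farther from the camera (larger z) shrink
// toward the image centre. Assumes z > 0 for visible points.
Vec2 projectPerspective(const Vec3& p, float focalLength) {
    return {focalLength * p.x / p.z, focalLength * p.y / p.z};
}

// Orthographic projection for comparison: size does not depend on distance.
Vec2 projectOrthographic(const Vec3& p) {
    return {p.x, p.y};
}
```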

Clipping
Geometric primitives that now fall completely outside of the viewing frustum will not be visible and are discarded at
this stage.

Scan conversion or rasterization


Rasterization is the process by which the 2D image space representation of the scene is converted into raster format
and the correct resulting pixel values are determined. From now on, operations will be carried out on each single
pixel. This stage is rather complex, involving multiple steps often referred to collectively as the pixel pipeline.

Texturing, fragment shading


At this stage of the pipeline individual fragments (or pre-pixels) are assigned a color based on values interpolated
from the vertices during rasterization, from a texture in memory, or from a shader program.

The graphics pipeline in hardware


The rendering pipeline is mapped onto current graphics acceleration hardware such that the input to the GPU is in
the form of vertices. These vertices then undergo transformation and per-vertex lighting. At this point in modern
GPU pipelines a custom vertex shader program can be used to manipulate the 3D vertices prior to rasterization. Once
transformed and lit, the vertices undergo clipping and rasterization resulting in fragments. A second custom shader
program can then be run on each fragment before the final pixel values are output to the frame buffer for display.
The graphics pipeline is well suited to the rendering process because it allows the GPU to function as a stream
processor since all vertices and fragments can be thought of as independent. This allows all stages of the pipeline to
be used simultaneously for different vertices or fragments as they work their way through the pipe. In addition to
pipelining vertices and fragments, their independence allows graphics processors to use parallel processing units to
process multiple vertices or fragments in a single stage of the pipeline at the same time.

References
1. ^ Graphics pipeline. (n.d.). Computer Desktop Encyclopedia. Retrieved December 13, 2005, from Answers.com:
[2]
2. ^ Raster Graphics and Color [3] 2004 by Greg Humphreys at the University of Virginia
[1] http://www.cs.virginia.edu/~gfx/Courses/2012/IntroGraphics/lectures/13-Pipeline.pdf
[2] http://www.answers.com/topic/graphics-pipeline
[3] http://www.cs.virginia.edu/~gfx/Courses/2004/Intro.Fall.04/handouts/01-raster.pdf

External links
MIT OpenCourseWare Computer Graphics, Fall 2003 (http://ocw.mit.edu/courses/
electrical-engineering-and-computer-science/6-837-computer-graphics-fall-2003/)
ExtremeTech 3D Pipeline Tutorial (http://www.extremetech.com/computing/
49076-extremetech-3d-pipeline-tutorial)
http://developer.nvidia.com/
http://www.atitech.com/developer/


Hidden line removal


Hidden line removal is an extension of wireframe model rendering
where lines (or segments of lines) covered by surfaces are not drawn.
This is not the same as hidden face removal, since hidden line removal involves depth
and occlusion while hidden face removal involves normals.

Algorithms
A commonly used algorithm to implement it is Arthur Appel's
algorithm.[1] This algorithm works by propagating the visibility from a
segment with a known visibility to a segment whose visibility is yet to
be determined. Certain pathological cases exist that can make this
algorithm difficult to implement. Those cases are:

Line removal technique in action

1. Vertices on edges;
2. Edges on vertices;
3. Edges on edges.
This algorithm is unstable because an error in visibility will be propagated to subsequent nodes (although there are
ways to compensate for this problem).[2]

References
[1] (Appel, A., "The Notion of Quantitative Invisibility and the Machine Rendering of Solids", Proceedings ACM National Conference,
Thompson Books, Washington, DC, 1967, pp. 387-393.)
[2] James Blinn, "Fractional Invisibility", IEEE Computer Graphics and Applications, Nov. 1988, pp. 77-84.

External links
Patrick-Gilles Maillot's Thesis (https://sites.google.com/site/patrickmaillot/english) an extension of the
Bresenham line drawing algorithm to perform 3D hidden lines removal; also published in MICAD '87
proceedings on CAD/CAM and Computer Graphics, page 591 - ISBN 2-86601-084-1.
Vector Hidden Line Removal (http://wheger.tripod.com/vhl/vhl.htm) An article by Walter Heger with a
further description (of the pathological cases) and more citations.


Hidden surface determination


In 3D computer graphics, hidden surface determination (also known as hidden surface removal (HSR),
occlusion culling (OC) or visible surface determination (VSD)) is the process used to determine which surfaces
and parts of surfaces are not visible from a certain viewpoint. A hidden surface determination algorithm is a solution
to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The
process of hidden surface determination is sometimes called hiding, and such an algorithm is sometimes called a
hider. The analogue for line rendering is hidden line removal. Hidden surface determination is necessary to render
an image correctly, so that one cannot look through walls in virtual reality.

Background
Hidden surface determination is a process by which surfaces which should not be visible to the user (for example,
because they lie behind opaque objects such as walls) are prevented from being rendered. Despite advances in
hardware capability there is still a need for advanced rendering algorithms. The responsibility of a rendering engine
is to allow for large world spaces, and as the world's size approaches infinity, the engine should not slow down but
remain at constant speed. Optimising this process relies on being able to ensure that as few resources as possible are
diverted towards the rendering of surfaces that will not end up being shown to the user.
There are many techniques for hidden surface determination. They are fundamentally an exercise in sorting, and
usually vary in the order in which the sort is performed and how the problem is subdivided. Sorting large quantities
of graphics primitives is usually done by divide and conquer.

Hidden surface removal algorithms


Considering the rendering pipeline, the projection, the clipping, and the rasterization steps are handled differently by
the following algorithms:
Z-buffering: During rasterization the depth/Z value of each pixel (or sample in the case of anti-aliasing, but
without loss of generality the term pixel is used) is checked against an existing depth value. If the current pixel is
behind the pixel in the Z-buffer, the pixel is rejected, otherwise it is shaded and its depth value replaces the one in
the Z-buffer. Z-buffering supports dynamic scenes easily, and is currently implemented efficiently in graphics
hardware. This is the current standard. The cost of using Z-buffering is that it uses up to 4 bytes per pixel, and that
the rasterization algorithm needs to check each rasterized sample against the z-buffer. The z-buffer can also suffer
from artifacts due to precision errors (also known as z-fighting), although this is far less common now that
commodity hardware supports 24-bit and higher precision buffers. A minimal sketch of the per-sample depth test is shown after this list.
Coverage buffers (C-Buffer) and Surface buffer (S-Buffer): faster than z-buffers and commonly used in games in
the Quake I era. Instead of storing the Z value per pixel, they store list of already displayed segments per line of
the screen. New polygons are then cut against already displayed segments that would hide them. An S-Buffer can
display unsorted polygons, while a C-Buffer requires polygons to be displayed from the nearest to the furthest.
Because the C-buffer technique does not require a pixel to be drawn more than once, the process is slightly faster.
This was commonly used with BSP trees, which would provide sorting for the polygons.
Sorted Active Edge List: used in Quake 1, this stores a list of the edges of already displayed polygons.
Polygons are displayed from the nearest to the furthest. New polygons are clipped against already displayed
polygons' edges, creating new polygons to display, and the additional edges are stored. It is much harder to
implement than S/C/Z buffers, but it scales much better with increases in resolution.
Painter's algorithm sorts polygons by their barycenter and draws them back to front. This produces few artifacts
when applied to scenes with polygons of similar size forming smooth meshes and backface culling turned on. The
cost here is the sorting step and the fact that visual artifacts can occur.



Binary space partitioning (BSP) divides a scene along planes corresponding to polygon boundaries. The
subdivision is constructed in such a way as to provide an unambiguous depth ordering from any point in the scene
when the BSP tree is traversed. The disadvantage here is that the BSP tree is created with an expensive
pre-process. This means that it is less suitable for scenes consisting of dynamic geometry. The advantage is that
the data is pre-sorted and error free, ready for the previously mentioned algorithms. Note that the BSP is not a
solution to HSR, only a help.
Ray tracing attempts to model the path of light rays to a viewpoint by tracing rays from the viewpoint into the
scene. Although not a hidden surface removal algorithm as such, it implicitly solves the hidden surface removal
problem by finding the nearest surface along each view-ray. Effectively this is equivalent to sorting all the
geometry on a per pixel basis.
The Warnock algorithm divides the screen into smaller areas and sorts triangles within these. If there is ambiguity
(i.e., polygons overlap in depth extent within these areas), then further subdivision occurs. At the limit,
subdivision may occur down to the pixel level.
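A minimal sketch of the per-sample depth test at the heart of Z-buffering, as referenced in the list above; the framebuffer layout and packed colour format are illustrative assumptions, not a specific graphics API.

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Minimal depth-buffered framebuffer: one depth value and one packed colour per pixel.
struct Framebuffer {
    int width, height;
    std::vector<float> depth;
    std::vector<std::uint32_t> color;

    Framebuffer(int w, int h)
        : width(w), height(h),
          depth(static_cast<std::size_t>(w) * h, std::numeric_limits<float>::infinity()),
          color(static_cast<std::size_t>(w) * h, 0u) {}

    // The core hidden-surface test: keep the fragment only if it is nearer
    // than whatever is already stored at this pixel.
    void writeFragment(int x, int y, float z, std::uint32_t rgba) {
        std::size_t i = static_cast<std::size_t>(y) * width + x;
        if (z < depth[i]) {
            depth[i] = z;
            color[i] = rgba;
        }
    }
};
```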

Culling and VSD


A related area to VSD is culling, which usually happens before VSD in a rendering pipeline. Primitives or batches of
primitives can be rejected in their entirety, which usually reduces the load on a well-designed system.
The advantage of culling early on the pipeline is that entire objects that are invisible do not have to be fetched,
transformed, rasterized or shaded. Here are some types of culling algorithms:

Viewing frustum culling


The viewing frustum is a geometric representation of the volume visible to the virtual camera. Naturally, objects
outside this volume will not be visible in the final image, so they are discarded. Often, objects lie on the boundary of
the viewing frustum. These objects are cut into pieces along this boundary in a process called clipping, and the
pieces that lie outside the frustum are discarded as there is no place to draw them.

Backface culling
Since meshes are hollow shells, not solid objects, the back side of some faces, or polygons, in the mesh will never
face the camera. Typically, there is no reason to draw such faces. This is responsible for the effect often seen in
computer and video games in which, if the camera happens to be inside a mesh, rather than seeing the "inside"
surfaces of the mesh, it mostly disappears. (Some game engines continue to render any forward-facing or
double-sided polygons, resulting in stray shapes appearing without the rest of the penetrated mesh.)

Contribution culling
Often, objects are so far away that they do not contribute significantly to the final image. These objects are thrown
away if their screen projection is too small. See Clipping plane.

Occlusion culling
Objects that are entirely behind other opaque objects may be culled. This is a very popular mechanism to speed up
the rendering of large scenes that have a moderate to high depth complexity. There are several types of occlusion
culling approaches:
Potentially visible set or PVS rendering, divides a scene into regions and pre-computes visibility for them. These
visibility sets are then indexed at run-time to obtain high quality visibility sets (accounting for complex occluder
interactions) quickly.



Portal rendering divides a scene into cells/sectors (rooms) and portals (doors), and computes which sectors are
visible by clipping them against portals.
Hansong Zhang's dissertation "Effective Occlusion Culling for the Interactive Display of Arbitrary Models" [1]
describes an occlusion culling approach.

Divide and conquer


A popular theme in the VSD literature is divide and conquer. The Warnock algorithm pioneered dividing the screen.
Beam tracing is a ray-tracing approach which divides the visible volumes into beams. Various screen-space
subdivision approaches reduce the number of primitives considered per region, e.g. tiling or screen-space BSP
clipping. Tiling may be used as a preprocess to other techniques. Z-buffer hardware may typically include a coarse
'hi-Z' buffer against which primitives can be rejected early without rasterization; this is a form of occlusion culling.
Bounding volume hierarchies (BVHs) are often used to subdivide the scene's space (examples are the BSP tree, the
octree and the kd-tree). This allows visibility determination to be performed hierarchically: effectively, if a node in
the tree is considered to be invisible then all of its child nodes are also invisible, and no further processing is
necessary (they can all be rejected by the renderer). If a node is considered visible, then each of its children need to
be evaluated. This traversal is effectively a tree walk where invisibility/occlusion or reaching a leaf node determines
whether to stop or whether to recurse respectively.

Sources
http://www.cs.washington.edu/education/courses/cse557/07wi/lectures/hidden-surfaces.pdf
http://design.osu.edu/carlson/history/PDFs/ten-hidden-surface.pdf

References
[1] http://www.cs.unc.edu/~zhangh/hom.html


High dynamic range rendering


High-dynamic-range rendering (HDRR or HDR rendering), also known as high-dynamic-range lighting, is the
rendering of computer graphics scenes by using lighting calculations done in a larger dynamic range. This allows
preservation of details that may be lost due to limiting contrast ratios. Video games and computer-generated movies
and special effects benefit from this as it creates more realistic scenes than with the more simplistic lighting models
used.
Graphics processor company Nvidia summarizes the motivation for HDRR in three points: bright things can be
really bright, dark things can be really dark, and details can be seen in both.

History
The use of high-dynamic-range imaging (HDRI) in computer graphics was introduced by Greg Ward in 1985 with
his open-source Radiance rendering and lighting simulation software which created the first file format to retain a
high-dynamic-range image. HDRI languished for more than a decade, held back by limited computing power,
storage, and capture methods. Not until recently has the technology to put HDRI into practical use been developed.
In 1990, Nakame, et al., presented a lighting model for driving simulators that highlighted the need for
high-dynamic-range processing in realistic simulations.
In 1995, Greg Spencer presented Physically-based glare effects for digital images at SIGGRAPH, providing a
quantitative model for flare and blooming in the human eye.
In 1997 Paul Debevec presented Recovering high dynamic range radiance maps from photographs at SIGGRAPH
and the following year presented Rendering synthetic objects into real scenes. These two papers laid the framework
for creating HDR light probes of a location and then using this probe to light a rendered scene.
HDRI and HDRL (high-dynamic-range image-based lighting) have, ever since, been used in many situations in 3D
scenes in which inserting a 3D object into a real environment requires the lightprobe data to provide realistic lighting
solutions.
In gaming applications, Riven: The Sequel to Myst in 1997 used an HDRI postprocessing shader directly based on
Spencer's paper. After E3 2003, Valve Software released a demo movie of their Source engine rendering a cityscape
in a high dynamic range. The term was not commonly used again until E3 2004, where it gained much more attention
when Valve Software announced Half-Life 2: Lost Coast and Epic Games showcased Unreal Engine 3, coupled with
open-source engines such as OGRE 3D and open-source games like Nexuiz.

Examples
One of the primary advantages of HDR rendering is that details in a scene with a large contrast ratio are preserved.
Without HDR, areas that are too dark are clipped to black and areas that are too bright are clipped to white. These are
represented by the hardware as a floating point value of 0.0 and 1.0 for pure black and pure white, respectively.
Another aspect of HDR rendering is the addition of perceptual cues which increase apparent brightness. HDR
rendering also affects how light is preserved in optical phenomena such as reflections and refractions, as well as
transparent materials such as glass. In LDR rendering, very bright light sources in a scene (such as the sun) are
capped at 1.0. When this light is reflected the result must then be less than or equal to 1.0. However, in HDR
rendering, very bright light sources can exceed the 1.0 brightness to simulate their actual values. This allows
reflections off surfaces to maintain realistic brightness for bright light sources.


Limitations and compensations


Human eye
The human eye can perceive scenes with a very high dynamic contrast ratio, around 1,000,000:1. Adaptation is
achieved in part through adjustments of the iris and slow chemical changes, which take some time (e.g. the delay in
being able to see when switching from bright lighting to pitch darkness). At any given time, the eye's static range is
smaller, around 10,000:1. However, this is still higher than the static range of most display technology.[citation needed]

Output to displays
Although many manufacturers claim very high numbers, plasma displays, LCD displays, and CRT displays can only
deliver a fraction of the contrast ratio found in the real world, and these are usually measured under ideal conditions.
The simultaneous contrast of real content under normal viewing conditions is significantly lower.
Some increase in dynamic range in LCD monitors can be achieved by automatically reducing the backlight for dark
scenes (LG calls it DigitalFineContrast [1]; Samsung quote a "dynamic contrast ratio"), or by having an array of
brighter and darker LED backlights (BrightSide Technologies, now part of Dolby [2], and Samsung in development [3]).

Light bloom
Light blooming is the result of scattering in the human lens, which our brain interprets as a bright spot in a scene. For
example, a bright light in the background will appear to bleed over onto objects in the foreground. This can be used
to create an illusion to make the bright spot appear to be brighter than it really is.

Flare
Flare is the diffraction of light in the human lens, resulting in "rays" of light emanating from small light sources, and
can also result in some chromatic effects. It is most visible on point light sources because of their small visual angle.
Otherwise, HDR rendering systems have to map the full dynamic range of what the eye would see in the rendered
situation onto the capabilities of the device. This tone mapping is done relative to what the virtual scene camera sees,
combined with several full screen effects, e.g. to simulate dust in the air which is lit by direct sunlight in a dark
cavern, or the scattering in the eye.
Tone mapping and blooming shaders can be used together to help simulate these effects.

Tone mapping
Tone mapping, in the context of graphics rendering, is a technique used to map colors from high dynamic range (in
which lighting calculations are performed) to a lower dynamic range that matches the capabilities of the desired
display device. Typically, the mapping is non-linear: it preserves enough range for dark colors and gradually limits
the dynamic range for bright colors. This technique often produces visually appealing images with good overall
detail and contrast. Various tone mapping operators exist, ranging from simple real-time methods used in computer
games to more sophisticated techniques that attempt to imitate the perceptual response of the human visual system.
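As a minimal illustration (not any particular game's or renderer's operator), the sketch below applies a simple global Reinhard-style curve L/(1+L) per channel, followed by a gamma step, to bring unbounded HDR values into the displayable range; the exposure and gamma parameters are illustrative.

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Map an unbounded HDR colour into [0,1) with a simple global Reinhard-style
// curve, then apply display gamma. Real tone mappers are usually more elaborate
// (luminance-based mapping, local adaptation, etc.).
Color toneMap(const Color& hdr, float exposure = 1.0f, float gamma = 2.2f) {
    auto channel = [&](float c) {
        float scaled = c * exposure;
        float mapped = scaled / (1.0f + scaled);   // compress highlights toward 1.0
        return std::pow(mapped, 1.0f / gamma);     // encode for the display
    };
    return {channel(hdr.r), channel(hdr.g), channel(hdr.b)};
}
```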


Applications in computer entertainment


Currently HDRR has been prevalent in games, primarily for PCs, Microsoft's Xbox 360, and Sony's PlayStation 3. It
has also been simulated on the PlayStation 2, GameCube, Xbox and Amiga systems. Sproing Interactive Media has
announced that their new Athena game engine for the Wii will support HDRR, adding Wii to the list of systems that
support it.
In desktop publishing and gaming, color values are often processed several times over. As this includes
multiplication and division (which can accumulate rounding errors), it is useful to have the extended accuracy and
range of 16 bit integer or 16 bit floating point formats. This is useful irrespective of the aforementioned limitations in
some hardware.

Development of HDRR through DirectX


Complex shader effects began their days with the release of Shader Model 1.0 with DirectX 8. Shader Model 1.0
illuminated 3D worlds with what is called standard lighting. Standard lighting, however, had two problems:
1. Lighting precision was confined to 8 bit integers, which limited the contrast ratio to 256:1. Using the HSV color
model, the value (V), or brightness of a color, has a range of 0-255. This means the brightest white (a value of
255) is only 255 levels brighter than the darkest shade above pure black (i.e. a value of 0).
2. Lighting calculations were integer based, which didn't offer as much accuracy because the real world is not
confined to whole numbers.
On December 24, 2002, Microsoft released a new version of DirectX. DirectX 9.0 introduced Shader Model 2.0,
which offered one of the necessary components to enable rendering of high-dynamic-range images: lighting
precision was not limited to just 8-bits. Although 8-bits was the minimum in applications, programmers could
choose up to a maximum of 24 bits for lighting precision. However, all calculations were still integer-based. One of
the first graphics cards to support DirectX 9.0 natively was ATI's Radeon 9700, though the effect wasn't
programmed into games for years afterwards. On August 23, 2003, Microsoft updated DirectX to DirectX 9.0b,
which enabled the Pixel Shader 2.x (Extended) profile for ATI's Radeon X series and NVIDIA's GeForce FX series
of graphics processing units.
On August 9, 2004, Microsoft updated DirectX once more to DirectX 9.0c. This also exposed the Shader Model 3.0
profile for high-level shader language (HLSL). Shader Model 3.0's lighting precision has a minimum of 32 bits as
opposed to 2.0's 8-bit minimum. Also all lighting-precision calculations are now floating-point based. NVIDIA states
that contrast ratios using Shader Model 3.0 can be as high as 65535:1 using 32-bit lighting precision. At first, HDRR
was only possible on video cards capable of Shader-Model-3.0 effects, but software developers soon added
compatibility for Shader Model 2.0. As a side note, when referred to as Shader Model 3.0 HDR, HDRR is really
done by FP16 blending. FP16 blending is not part of Shader Model 3.0, but is supported mostly by cards also
capable of Shader Model 3.0 (exceptions include the GeForce 6200 series). FP16 blending can be used as a faster
way to render HDR in video games.
Shader Model 4.0 is a feature of DirectX 10, which has been released with Windows Vista. Shader Model 4.0 will
allow for 128-bit HDR rendering, as opposed to 64-bit HDR in Shader Model 3.0 (although this is theoretically
possible under Shader Model 3.0).
Shader Model 5.0 is a feature of DirectX 11 on Windows Vista and Windows 7. It allows 6:1 compression of HDR
textures without the noticeable loss that was prevalent in the HDR texture compression techniques of previous
versions of DirectX.


Development of HDRR through OpenGL


It is possible to develop HDRR through GLSL shaders from OpenGL 1.4 onwards.

GPUs that support HDRR


This is a list of graphics processing units that may or can support HDRR. It is implied that because the minimum
requirement for HDR rendering is Shader Model 2.0 (or in this case DirectX 9), any graphics card that supports
Shader Model 2.0 can do HDR rendering. However, HDRR may greatly impact the performance of the software
using it if the device is not sufficiently powerful.
GPUs designed for games
Shader Model 2 Compliant (Includes versions 2.0, 2.0a and 2.0b)
From ATI

R300 series: 9500, 9500 Pro, 9550, 9550 SE, 9600, 9600 SE, 9600 TX, 9600 AIW, 9600 Pro, 9600 XT, 9650, 9700, 9700
AIW, 9700 Pro, 9800, 9800 SE, 9800 AIW, 9800 Pro, 9800XT, X300, X300 SE, X550, X600 AIW, X600 Pro, X600 XT
R420 series: X700, X700 Pro, X700 XT, X800, X800SE, X800 GT, X800 GTO, X800 Pro, X800 AIW, X800 XL, X800 XT,
X800 XTPE, X850 Pro, X850 XT, X850 XTPE
Radeon RS690: X1200 mobility

From
NVIDIA

GeForce FX (includes PCX versions): 5100, 5200, 5200 SE/XT, 5200 Ultra, 5300, 5500, 5600, 5600 SE/XT, 5600 Ultra,
5700, 5700 VE, 5700 LE, 5700 Ultra, 5750, 5800, 5800 Ultra, 5900 5900 ZT, 5900 SE/XT, 5900 Ultra, 5950, 5950 Ultra

From S3
Graphics

Delta Chrome: S4, S4 Pro, S8, S8 Nitro, F1, F1 Pole Gamma Chrome: S18 Pro, S18 Ultra, S25, S27

From SiS

Xabre: Xabre II

From XGI

Volari: V3 XT, V5, V5, V8, V8 Ultra, Duo V5 Ultra, Duo V8 Ultra, 8300, 8600, 8600 XT
Shader Model 3.0 Compliant

From ATI

R520 series: X1300 HyperMemory Edition, X1300, X1300 Pro, X1600 Pro, X1600 XT, X1650 Pro, X1650 XT, X1800
GTO, X1800 XL AIW, X1800 XL, X1800 XT, X1900 AIW, X1900 GT, X1900 XT, X1900 XTX, X1950 Pro, X1950 XT,
X1950 XTX, Xenos (Xbox 360)

From
NVIDIA

GeForce 6: 6100, 6150, 6200 LE, 6200, 6200 TC, 6250, 6500, 6600, 6600 LE, 6600 DDR2, 6600 GT, 6610 XL, 6700 XL,
6800, 6800 LE, 6800 XT, 6800 GS, 6800 GTO, 6800 GT, 6800 Ultra, 6800 Ultra Extreme GeForce 7: 7300 LE, 7300 GS,
7300 GT, 7600 GS, 7600 GT, 7800 GS, 7800 GT, 7800 GTX, 7800 GTX 512MB, 7900 GS, 7900 GT, 7950 GT, 7900 GTO,
7900 GTX, 7900 GX2, 7950 GX2, 7950 GT, RSX (PlayStation 3)
Shader Model 4.0/4.1* Compliant

From ATI

R600 series: HD 2900 XT, HD 2900 Pro, HD 2900 GT, HD 2600 XT, HD 2600 Pro, HD 2400 XT, HD 2400 Pro, HD 2350,
HD 3870*, HD 3850*, HD 3650*, HD 3470*, HD 3450*, HD 3870 X2* R700 series: HD 4870 X2, HD 4890, HD 4870*,
HD4850*, HD 4670*, HD 4650*

From
NVIDIA

GeForce 8: 8800 Ultra, 8800 GTX, 8800 GT, 8800 GTS, 8800GTS 512MB, 8800GS, 8600 GTS, 8600 GT, 8600M GS,
8600M GT, 8500 GT, 8400 GS, 8300 GS, 8300 GT, 8300 GeForce 9 Series: 9800 GX2, 9800 GTX (+), 9800 GT, 9600 GT,
9600 GSO, 9500 GT, 9400 GT, 9300 GT, 9300 GS, 9200 GT
GeForce 200 Series: GTX 295, GTX 285, GTX 280, GTX 275, GTX 260, GTS 250, GTS240, GT240*, GT220*
Shader Model 5.0 Compliant

From ATI

R800 Series: HD 5750, HD 5770, HD 5850, HD 5870, HD 5870 X2, HD 5970* R900 Series: HD 6990, HD 6970, HD 6950,
HD 6870, HD 6850, HD 6770, HD 6750, HD 6670, HD 6570, HD 6450

From
NVIDIA

GeForce 400 Series: GTX 480, GTX 475, GTX 470, GTX 465, GTX 460 GeForce 500 Series: GTX 590, GTX 580, GTX
570, GTX 560 Ti, GTX 550 Ti

GPUs designed for workstations

62

High dynamic range rendering

Shader Model 2 Compliant (Includes versions 2.0, 2.0a and 2.0b)


From ATI

FireGL: Z1-128, T2-128, X1-128, X2-256, X2-256t, V3100, V3200, X3-256, V5000, V5100, V7100

From NVIDIA

Quadro FX: 330, 500, 600, 700, 1000, 1100, 1300, 2000, 3000
Shader Model 3.0 Compliant
From ATI

FireGL: V7300, V7350

From NVIDIA

Quadro FX: 350, 540, 550, 560, 1400, 1500, 3400, 3450, 3500, 4000, 4400, 4500, 4500SDI, 4500 X2, 5500, 5500SDI
From 3Dlabs

Wildcat Realizm: 100, 200, 500, 800

Game engines that support HDR rendering

Unreal Engine 3
Chrome Engine 3
Source
CryEngine, CryEngine 2, CryEngine 3
Dunia Engine
Gamebryo
Unity
id Tech 5
Lithtech
Unigine
Frostbite 2
Real Virtuality 2, Real Virtuality 3, Real Virtuality 4
HPL 3


External links
NVIDIA's HDRR technical summary (http://download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_HDR.pdf) (PDF)
A HDRR Implementation with OpenGL 2.0 (http://www.gsulinux.org/~plq)
OpenGL HDRR Implementation (http://www.smetz.fr/?page_id=83)
High Dynamic Range Rendering in OpenGL (http://transporter-game.googlecode.com/files/HDRRenderingInOpenGL.pdf) (PDF)
High Dynamic Range Imaging environments for Image Based Lighting (http://www.hdrsource.com/)
Microsoft's technical brief on SM3.0 in comparison with SM2.0 (http://www.microsoft.com/whdc/winhec/partners/shadermodel30_NVIDIA.mspx)
Tom's Hardware: New Graphics Card Features of 2006 (http://www.tomshardware.com/2006/01/13/new_3d_graphics_card_features_in_2006/)
List of GPUs compiled by Chris Hare (http://users.erols.com/chare/video.htm)
techPowerUp! GPU Database (http://www.techpowerup.com/gpudb/)
Understanding Contrast Ratios in Video Display Devices (http://www.hometheaterhifi.com/volume_13_2/feature-article-contrast-ratio-5-2006-part-1.html)



Requiem by TBL, featuring real-time HDR rendering in software (http://demoscene.tv/page.php?id=172&lang=uk&vsmaction=view_prod&id_prod=12561)
List of video games supporting HDR (http://www.uvlist.net/groups/info/hdrlighting)
Examples of high dynamic range photography (http://www.hdr-photography.org/)
Examples of high dynamic range 360-degree panoramic photography (http://www.hdrsource.com/)

Image-based lighting
Image-based lighting (IBL) is a 3D rendering technique which involves capturing an omni-directional
representation of real-world light information as an image, typically using a specialised camera. This image is then
projected onto a dome or sphere analogously to environment mapping, and this is used to simulate the lighting for
the objects in the scene. This allows highly detailed real-world lighting to be used to light a scene, instead of trying
to accurately model illumination using an existing rendering technique.
Image-based lighting often uses high dynamic range imaging for greater realism, though this is not universal. Almost
all modern rendering software offers some type of image-based lighting, though the exact terminology used in the
system may vary.
Image-based lighting is also starting to show up in video games as video game consoles and personal computers start
to have the computational resources to render scenes in real time using this technique. This technique is used in
Forza Motorsport 4 and Crash Time 5: Undercover, by the Chameleon engine used in Need for Speed: Hot Pursuit,
and in the CryEngine 3 middleware.

References
Tutorial [1]

External links
Real-Time HDR Image-Based Lighting Demo [2]

References
[1] http://ict.usc.edu/pubs/Image-Based%20Lighting.pdf
[2] http://www.daionet.gr.jp/~masa/rthdribl/


Image plane
In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the
monitor. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the
image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A
rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the
mapping between pixels on the monitor and points (or rather, rays) in the 3D world.
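A minimal sketch of this pixel-to-ray mapping is given below. The camera model (a pinhole at the origin looking down the negative z-axis, with the image plane at a chosen focal distance and a given vertical field of view) and all names are illustrative assumptions, not anything prescribed by the text.

```cpp
// Illustrative sketch: mapping a viewport pixel to the ray through the
// corresponding point on the image plane, for a simple pinhole camera.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Direction of the ray from the eye (at the origin) through the centre of
// pixel (px, py) on a viewport of width x height pixels.
Vec3 pixelToRay(int px, int py, int width, int height,
                float fovYDegrees, float focal = 1.0f) {
    float aspect = float(width) / float(height);
    float halfH  = focal * std::tan(0.5f * fovYDegrees * 3.14159265f / 180.0f);
    float halfW  = halfH * aspect;
    // Pixel centre in normalized device coordinates, in [-1, 1].
    float ndcX = ((px + 0.5f) / width)  * 2.0f - 1.0f;
    float ndcY = 1.0f - ((py + 0.5f) / height) * 2.0f;   // screen y grows downwards
    // Point on the image plane, then the ray direction from the eye to it.
    return normalize({ ndcX * halfW, ndcY * halfH, -focal });
}

int main() {
    Vec3 d = pixelToRay(400, 300, 800, 600, 60.0f);
    std::printf("ray through pixel (400,300): (%f, %f, %f)\n", d.x, d.y, d.z);
    return 0;
}
```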
In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal
plane.

Irregular Z-buffer
The irregular Z-buffer is an algorithm designed to solve the visibility problem in real-time 3-d computer graphics.
It is related to the classical Z-buffer in that it maintains a depth value for each image sample and uses these to
determine which geometric elements of a scene are visible. The key difference, however, between the classical
Z-buffer and the irregular Z-buffer is that the latter allows arbitrary placement of image samples in the image plane,
whereas the former requires samples to be arranged in a regular grid.
These depth samples are explicitly stored in a two-dimensional spatial data structure. During rasterization, triangles
are projected onto the image plane as usual, and the data structure is queried to determine which samples overlap
each projected triangle. Finally, for each overlapping sample, the standard Z-compare and (conditional) frame buffer
update are performed.

Implementation
The classical rasterization algorithm projects each polygon onto the image plane, and determines which sample
points from a regularly spaced set lie inside the projected polygon. Since the locations of these samples (i.e. pixels)
are implicit, this determination can be made by testing the edges against the implicit grid of sample points. If,
however the locations of the sample points are irregularly spaced and cannot be computed from a formula, then this
approach does not work. The irregular Z-buffer solves this problem by storing sample locations explicitly in a
two-dimensional spatial data structure, and later querying this structure to determine which samples lie within a
projected triangle. This latter step is referred to as "irregular rasterization".
Although the particular data structure used may vary from implementation to implementation, the two studied
approaches are the kd-tree, and a grid of linked lists. A balanced kd-tree implementation has the advantage that it
guarantees O(log(N)) access. Its chief disadvantage is that parallel construction of the kd-tree may be difficult, and
traversal requires expensive branch instructions. The grid of lists has the advantage that it can be implemented more
effectively on GPU hardware, which is designed primarily for the classical Z-buffer.
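The sketch below illustrates the grid-of-lists idea in C++. It is not the implementation from the cited papers: the depth is taken as constant over each triangle for brevity, and the structure and names are illustrative only.

```cpp
// A minimal sketch of the "grid of lists" variant described above: irregularly
// placed image-plane samples are bucketed into a uniform grid; rasterizing a
// triangle only visits the buckets its screen bounding box overlaps and runs
// the usual Z-compare on the samples found there ("irregular rasterization").
#include <algorithm>
#include <cstdio>
#include <vector>

struct Sample { float x, y; float depth = 1e30f; };   // 1e30f acts as "far plane"

struct SampleGrid {
    int w, h;                                   // grid resolution in cells
    std::vector<std::vector<int>> cells;        // indices of samples per cell
    std::vector<Sample> samples;

    SampleGrid(int w_, int h_) : w(w_), h(h_), cells(w_ * h_) {}

    void addSample(float x, float y) {          // x, y in [0, 1) image space
        int cx = std::min(int(x * w), w - 1), cy = std::min(int(y * h), h - 1);
        cells[cy * w + cx].push_back(int(samples.size()));
        samples.push_back({x, y});
    }

    // Test every stored sample inside the triangle's bounding box against the
    // triangle and update its depth if the triangle is closer.
    void rasterizeTriangle(float ax, float ay, float bx, float by,
                           float cx, float cy, float depth) {
        int x0 = std::max(0, int(std::min({ax, bx, cx}) * w));
        int x1 = std::min(w - 1, int(std::max({ax, bx, cx}) * w));
        int y0 = std::max(0, int(std::min({ay, by, cy}) * h));
        int y1 = std::min(h - 1, int(std::max({ay, by, cy}) * h));
        for (int gy = y0; gy <= y1; ++gy)
            for (int gx = x0; gx <= x1; ++gx)
                for (int idx : cells[gy * w + gx]) {
                    Sample& s = samples[idx];
                    if (inside(ax, ay, bx, by, cx, cy, s.x, s.y) && depth < s.depth)
                        s.depth = depth;        // standard Z-compare and update
                }
    }

    static bool inside(float ax, float ay, float bx, float by,
                       float cx, float cy, float px, float py) {
        auto edge = [](float x0, float y0, float x1, float y1, float x, float y) {
            return (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0);
        };
        float e0 = edge(ax, ay, bx, by, px, py);
        float e1 = edge(bx, by, cx, cy, px, py);
        float e2 = edge(cx, cy, ax, ay, px, py);
        return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
    }
};

int main() {
    SampleGrid grid(8, 8);
    grid.addSample(0.30f, 0.30f);               // arbitrarily placed samples
    grid.addSample(0.82f, 0.15f);
    grid.rasterizeTriangle(0.1f, 0.1f, 0.6f, 0.1f, 0.1f, 0.6f, /*depth=*/0.5f);
    for (const Sample& s : grid.samples)
        std::printf("sample (%.2f, %.2f) depth %g\n", s.x, s.y, s.depth);
    return 0;
}
```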
With the appearance of CUDA, the programmability of current graphics hardware has been drastically improved.
The master's thesis "Fast Triangle Rasterization using irregular Z-buffer on CUDA" (see External links) provides a
complete description of an irregular Z-buffer based shadow mapping software implementation on CUDA. The
rendering system runs completely on GPUs and is capable of generating aliasing-free shadows at a throughput of
dozens of millions of triangles per second.


Applications
The irregular Z-buffer can be used for any application which requires visibility calculations at arbitrary locations in
the image plane. It has been shown to be particularly adept at shadow mapping, an image space algorithm for
rendering hard shadows. In addition to shadow rendering, potential applications include adaptive anti-aliasing,
jittered sampling, and environment mapping.

External links

The Irregular Z-Buffer: Hardware Acceleration for Irregular Data Structures [1]
The Irregular Z-Buffer And Its Application to Shadow Mapping [2]
Alias-Free Shadow Maps [3]
Fast Triangle Rasterization using irregular Z-buffer on CUDA [4]

References
[1] http://www.tacc.utexas.edu/~cburns/papers/izb-tog.pdf
[2] http://www.cs.utexas.edu/ftp/pub/techreports/tr04-09.pdf
[3] http://www.tml.hut.fi/~timo/publications/aila2004egsr_paper.pdf
[4] http://publications.lib.chalmers.se/records/fulltext/123790.pdf

Isosurface
An isosurface is a three-dimensional analog of an isoline. It is a
surface that represents points of a constant value (e.g. pressure,
temperature, velocity, density) within a volume of space; in other
words, it is a level set of a continuous function whose domain is
3D-space.
Isosurfaces are normally displayed using computer graphics, and are used as data visualization methods in
computational fluid dynamics (CFD), allowing engineers to study features of a fluid flow (gas or liquid) around
objects, such as aircraft wings. An isosurface may represent an individual shock wave in supersonic flight, or
several isosurfaces may be generated showing a sequence of pressure values in the air flowing around a wing.
Isosurfaces tend to be a popular form of visualization for volume datasets since they can be rendered by a simple
polygonal model, which can be drawn on the screen very quickly.

[Figure: Zirconocene with an isosurface showing areas of the molecule susceptible to electrophilic attack.]
In medical imaging, isosurfaces may be used to represent regions of a particular density in a three-dimensional CT
scan, allowing the visualization of internal organs, bones, or other structures.
Numerous other disciplines that are interested in three-dimensional data often use isosurfaces to obtain information
about pharmacology, chemistry, geophysics and meteorology.


A popular method of constructing an isosurface from a data volume is the marching cubes algorithm, and another,
very similar method is the marching tetrahedrons algorithm. Yet another is called the asymptotic decider.
Examples of isosurfaces are 'Metaballs' or 'blobby objects' used in 3D visualisation. A more general way to
construct an isosurface is to use the function representation.
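As a small illustration (not from the article), the sketch below evaluates a two-metaball scalar field and performs the corner classification that marching-cubes-style methods use to decide whether a grid cell intersects the isosurface. The field, cell and threshold values are arbitrary.

```cpp
// Illustrative sketch: a "metaball" scalar field and the cell-corner test used
// by marching-cubes-style isosurface extraction.
#include <cstdio>

// Field value: sum of inverse-square contributions from two ball centres.
float field(float x, float y, float z) {
    auto ball = [](float x, float y, float z, float cx, float cy, float cz) {
        float d2 = (x - cx) * (x - cx) + (y - cy) * (y - cy) + (z - cz) * (z - cz);
        return 1.0f / (d2 + 1e-6f);
    };
    return ball(x, y, z, -0.5f, 0, 0) + ball(x, y, z, 0.5f, 0, 0);
}

// A cell straddles the isosurface `iso` if its corners are not all on one side.
bool cellIntersectsIsosurface(float x, float y, float z, float size, float iso) {
    bool anyInside = false, anyOutside = false;
    for (int i = 0; i < 8; ++i) {
        float v = field(x + size * (i & 1), y + size * ((i >> 1) & 1),
                        z + size * ((i >> 2) & 1));
        if (v >= iso) anyInside = true; else anyOutside = true;
    }
    return anyInside && anyOutside;
}

int main() {
    // A cell near the edge of one metaball: some corners inside, some outside.
    std::printf("cell intersects isosurface: %d\n",
                cellIntersectsIsosurface(0.8f, -0.1f, -0.1f, 0.2f, 6.0f));
    return 0;
}
```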

References
Charles D. Hansen; Chris R. Johnson (2004). Visualization
Handbook [1]. Academic Press. pp.711. ISBN978-0-12-387582-2.

[Figure: Isosurface of vorticity trailed from a propeller blade.]

External links
Isosurface Polygonization [2]

References
[1] http://books.google.com/books?id=ZFrlULckWdAC&pg=PA7
[2] http://www2.imm.dtu.dk/~jab/gallery/polygonization.html

Lambert's cosine law



In optics, Lambert's cosine law says that the radiant intensity or luminous intensity observed from an ideal diffusely
reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle between the observer's
line of sight and the surface normal.[1][2] The law is also known as the cosine emission law or Lambert's emission
law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760.
A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface
has the same radiance when viewed from any angle. This means, for example, that to the human eye it has the same
apparent brightness (or luminance). It has the same radiance because, although the emitted power from a given area
element is reduced by the cosine of the emission angle, the apparent size (solid angle) of the observed area, as seen
by a viewer, is decreased by a corresponding amount. Therefore, its radiance (power per unit solid angle per unit
projected source area) is the same.

Lambertian scatterers and radiators


When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or
photons/time/area) landing on that area element will be proportional to the cosine of the angle between the
illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine
law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the
normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if
the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards
the terminator due to the increased angle at which sunlight hit the surface. The fact that it does not diminish
illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles
than would a Lambertian scatterer.
The emission of a Lambertian radiator does not depend upon the amount of incident radiation, but rather arises from
radiation originating in the emitting body itself. For example, if the sun were a Lambertian radiator, one would
expect to see a constant brightness across the entire solar disc. The fact that the sun exhibits limb darkening in the
visible region illustrates that it is not a Lambertian radiator. A black body is an example of a Lambertian radiator.

Details of equal brightness effect


The situation for a Lambertian surface (emitting or scattering) is illustrated in Figures 1 and 2. For conceptual
clarity we will think in terms of photons rather than energy or luminous energy. The wedges in the circle each
represent an equal angle dΩ, and for a Lambertian surface, the number of photons per second emitted into each
wedge is proportional to the area of the wedge.
It can be seen that the length of each wedge is the product of the diameter of the circle and cos(θ). It can also
be seen that the maximum rate of photon emission per unit solid angle is along the normal and diminishes to zero
for θ = 90°. In mathematical terms, the radiance along the normal is I photons/(s·cm²·sr) and the number of
photons per second emitted into the vertical wedge is I dΩ dA. The number of photons per second emitted into the
wedge at angle θ is I cos(θ) dΩ dA.

[Figure 1: Emission rate (photons/s) in a normal and off-normal direction. The number of photons/sec directed into
any wedge is proportional to the area of the wedge.]


Figure 2 represents what an observer sees. The observer directly above the area element will be seeing the scene
through an aperture of area dA_0, and the area element dA will subtend a (solid) angle of dΩ_0. We can assume
without loss of generality that the aperture happens to subtend solid angle dΩ when "viewed" from the emitting
area element. This normal observer will then be recording I dΩ dA photons per second and so will be measuring a
radiance of

I_0 = \frac{I \, d\Omega \, dA}{d\Omega_0 \, dA_0} photons/(s·cm²·sr).

The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA_0, and the
area element dA will subtend a (solid) angle of dΩ_0 cos(θ). This observer will be recording I cos(θ) dΩ dA
photons per second, and so will be measuring a radiance of

I_0 = \frac{I \cos\theta \, d\Omega \, dA}{d\Omega_0 \cos\theta \, dA_0} = \frac{I \, d\Omega \, dA}{d\Omega_0 \, dA_0} photons/(s·cm²·sr),

which is the same as the normal observer.

[Figure 2: Observed intensity (photons/(s·cm²·sr)) for a normal and off-normal observer; dA_0 is the area of the
observing aperture and dΩ is the solid angle subtended by the aperture from the viewpoint of the emitting area
element.]

Relating peak luminous intensity and luminous flux


In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that
distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the
Lambertian assumption holds, we can calculate the total luminous flux, F_tot, from the peak luminous intensity,
I_max, by integrating the cosine law over the hemisphere:

F_{tot} = \int_0^{2\pi} \int_0^{\pi/2} I_{max} \cos\theta \, \sin\theta \, d\theta \, d\phi
        = 2\pi \, I_{max} \int_0^{\pi/2} \cos\theta \, \sin\theta \, d\theta

and so

F_{tot} = \pi \, \mathrm{sr} \cdot I_{max},

where sin(θ) is the determinant of the Jacobian matrix for the unit sphere, and realizing that I_max is luminous flux
per steradian.[3] Similarly, the peak intensity will be 1/(π sr) of the total radiated luminous flux. For Lambertian
surfaces, the same factor of π relates luminance to luminous emittance, radiant intensity to radiant flux, and
radiance to radiant emittance. Radians and steradians are, of course, dimensionless and so "rad" and "sr" are
included only for clarity.
Example: A surface with a luminance of say 100 cd/m² (= 100 nits, typical PC monitor) will, if it is a perfect
Lambert emitter, have a luminous emittance of 100π ≈ 314 lm/m². If its area is 0.1 m² (~19" monitor) then the total
light emitted, or luminous flux, would thus be 31.4 lm.


Uses
Lambert's cosine law in its reversed form (Lambertian reflection) implies that the apparent brightness of a
Lambertian surface is proportional to the cosine of the angle between the surface normal and the direction of the
incident light.
This phenomenon is, among others, used when creating mouldings, which are a means of applying light- and
dark-shaded stripes to a structure or object without having to change the material or apply pigment. The contrast of
dark and light areas gives definition to the object. Mouldings are strips of material with various cross-sections used
to cover transitions between surfaces or for decoration.

References
[1] RCA Electro-Optics Handbook, p.18 ff
[2] Modern Optical Engineering, Warren J. Smith, McGraw-Hill, p.228, 256
[3] Incropera and DeWitt, Fundamentals of Heat and Mass Transfer, 5th ed., p.710.

Lambertian reflectance
Lambertian reflectance is the property that defines an ideal diffusely reflecting surface. The apparent brightness of
such a surface to an observer is the same regardless of the observer's angle of view. More technically, the surface's
luminance is isotropic, and the luminous intensity obeys Lambert's cosine law. Lambertian reflectance is named after
Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book Photometria.

Examples
Unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane
does not, since the glossy coating creates specular highlights. Not all rough surfaces are Lambertian reflectors, but
this is often a good approximation when the characteristics of the surface are unknown.
Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance.

Use in computer graphics


In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all
closed polygons (such as a triangle within a 3D mesh) to reflect light equally in all directions when rendered. In
effect, a point rotated around its normal vector will not change the way it reflects light. However, the point will
change the way it reflects light if it is tilted away from its initial normal vector. The reflection is calculated by
taking the dot product of the surface's normal vector, N, and a normalized light-direction vector, L, pointing from
the surface to the light source. This number is then multiplied by the color of the surface and the intensity of the
light hitting the surface:

I_D = (L · N) C I_L,

where I_D is the intensity of the diffusely reflected light (surface brightness), C is the color and I_L is the
intensity of the incoming light. Because

L · N = |L| |N| cos α = cos α,

where α is the angle between the directions of the two vectors, the intensity will be the highest if the normal
vector points in the same direction as the light vector (cos α = 1, the surface is perpendicular to the direction of
the light), and the lowest if the normal vector is perpendicular to the light vector (cos α = 0, the surface runs
parallel with the direction of the light).
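A minimal sketch of this diffuse term is shown below. The vector type and scene values are made up for the example, and the dot product is clamped to zero so that surfaces facing away from the light receive no diffuse contribution.

```cpp
// Minimal sketch of the Lambertian diffuse term described above:
// I_D = max(0, L . N) * C * I_L, with N and L normalized.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

Vec3 lambert(Vec3 N, Vec3 L, Vec3 surfaceColor, float lightIntensity) {
    float d = std::max(0.0f, dot(normalize(L), normalize(N)));
    return { surfaceColor.x * d * lightIntensity,
             surfaceColor.y * d * lightIntensity,
             surfaceColor.z * d * lightIntensity };
}

int main() {
    Vec3 N = {0, 1, 0};                          // surface facing straight up
    Vec3 L = {1, 1, 0};                          // light 45 degrees off the normal
    Vec3 c = lambert(N, L, {0.8f, 0.2f, 0.2f}, 1.0f);
    std::printf("diffuse colour: (%.3f, %.3f, %.3f)\n", c.x, c.y, c.z);  // scaled by ~cos 45
    return 0;
}
```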


Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the
surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction
of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply. This is
simulated in computer graphics with various specular reflection models such as Phong, Cook–Torrance, etc.

Other waves
While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the
reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian
reflectance.


Level of detail
In computer graphics, accounting for level of detail involves decreasing the complexity of a 3D object representation
as it moves away from the viewer, or according to other metrics such as object importance, viewpoint-relative speed or
position. Level of detail techniques increase the efficiency of rendering by decreasing the workload on graphics
pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of
the small effect on object appearance when the object is distant or moving fast.
Although most of the time LOD is applied to geometry detail only, the basic concept can be generalized. Recently,
LOD techniques also included shader management to keep control of pixel complexity. A form of level of detail
management has been applied to textures for years, under the name of mipmapping, also providing higher rendering
quality.
It is commonplace to say that "an object has been LOD'd" when the object is simplified by the underlying LOD-ing
algorithm.

Historical reference
The origin[1] of all the LoD algorithms for 3D computer graphics can be traced back to an article by James H. Clark
in the October 1976 issue of Communications of the ACM. At the time, computers were monolithic and rare, and
graphics was being driven by researchers. The hardware itself was completely different, both architecturally and
performance-wise. As such, many differences could be observed with regard to today's algorithms but also many
common points.
The original algorithm presented a much more generic approach than what will be discussed here. After introducing
some available algorithms for geometry management, it states that the most fruitful gains came from "...structuring
the environments being rendered", allowing faster transformation and clipping operations to be exploited.
The same environment structuring is now proposed as a way to control varying detail thus avoiding unnecessary
computations, yet delivering adequate visual quality:
For example, a dodecahedron looks like a sphere from a sufficiently large distance and thus can be used to model it so long as it is viewed
from that or a greater distance. However, if it must ever be viewed more closely, it will look like a dodecahedron. One solution to this is
simply to define it with the most detail that will ever be necessary. However, then it might have far more detail than is needed to represent it at
large distances, and in a complex environment with many such objects, there would be too many polygons (or other geometric primitives) for
the visible surface algorithms to efficiently handle.


The proposed algorithm envisions a tree data structure which encodes in its arcs both transformations and transitions
to more detailed objects. In this way, each node encodes an object and, according to a fast heuristic, the tree is
descended to the leaves, which provide each object with more detail. When a leaf is reached, other methods can be
used when higher detail is needed, such as Catmull's recursive subdivision[2].

The significant point, however, is that in a complex environment, the amount of information presented about the various objects in the
environment varies according to the fraction of the field of view occupied by those objects.

The paper then introduces clipping (not to be confused with culling (computer graphics), although often similar),
various considerations on the graphical working set and its impact on performance, and interactions between the
proposed algorithm and others to improve rendering speed. Interested readers are encouraged to check the
references for further details on the topic.

Well known approaches


Although the algorithm introduced above covers a whole range of level of detail management techniques, real world
applications usually employ different methods according to the information being rendered. Because of the appearance
of the considered objects, two main algorithm families are used.
The first is based on subdividing the space in a finite number of regions, each with a certain level of detail. The
result is a discrete number of detail levels, hence the name Discrete LOD (DLOD). There is no way to support a smooth
transition between LOD levels at this level, although alpha blending or morphing can be used to avoid visual
popping.
The latter considers the polygon mesh being rendered as a function which must be evaluated while avoiding excessive
errors, which are themselves a function of some heuristic (usually distance). The given "mesh" function is then
continuously evaluated and an optimized version is produced according to a tradeoff between visual quality and
performance. These kinds of algorithms are usually referred to as Continuous LOD (CLOD).

Details on Discrete LOD


The basic concept of discrete LOD (DLOD) is to provide various
models to represent the same object. Obtaining those models
requires an external algorithm which is often non-trivial and
subject of many polygon reduction techniques. Successive
LOD-ing algorithms will simply assume those models are
available.
DLOD algorithms are often used in performance-intensive
applications with small data sets which can easily fit in memory.
Although out of core algorithms could be used, the information
granularity is not well suited to this kind of application. This kind
of algorithm is usually easier to get working, providing both faster
performance and lower CPU usage because of the few operations
involved.
[Figure: An example of various DLOD ranges. Darker areas are meant to be rendered with higher detail. An additional
culling operation is run, discarding all the information outside the frustum (colored areas).]

DLOD methods are often used for "stand-alone" moving objects, possibly including complex animation methods. A
different approach is used for geomipmapping[3], a popular terrain rendering algorithm because this applies to
terrain meshes which


are both graphically and topologically different from "object" meshes. Instead of computing an error and simplifying
the mesh according to it, geomipmapping takes a fixed reduction method, evaluates the error introduced and computes
a distance at which the error is acceptable. Although straightforward, the algorithm provides decent performance.

A discrete LOD example


As a simple example, consider the following sphere. A discrete LOD approach would cache a certain number of
models to be used at different distances. Because the model can trivially be procedurally generated by its
mathematical formulation, using a different amount of sample points distributed on the surface is sufficient to
generate the various models required. This pass is not a LOD-ing algorithm.

Visual impact comparisons and measurements


Image       (the rendered spheres are omitted from this export)
Vertices    ~5500   ~2880   ~1580   ~670   140
Notes       ~5500 is the maximum detail model, used for closeups; 140 is the minimum detail model, used for very far objects.

To simulate a realistic transform bound scenario, we'll use an ad-hoc written application. We'll make sure we're not
CPU bound by using simple algorithms and minimum fragment operations. Each frame, the program will compute
each sphere's distance and choose a model from a pool according to this information. To easily show the concept, the
distance at which each model is used is hard coded in the source. A more involved method would compute adequate
models according to the usage distance chosen.
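The sketch below shows the kind of distance-based model selection described here (it is not the original test application). The vertex counts are taken from the table above, while the switch distances are arbitrary hard-coded values, as in the text.

```cpp
// Sketch of discrete LOD selection: each frame, the distance from the camera to
// each sphere picks one of the pre-built models; switch distances are hard coded.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Vertex counts of the pre-generated sphere models, from finest to coarsest.
const int kModelVertices[] = { 5500, 2880, 1580, 670, 140 };
// Model i is used while the distance is below kSwitch[i].
const float kSwitch[] = { 10.0f, 25.0f, 50.0f, 100.0f, 1e30f };

int selectLod(const Vec3& camera, const Vec3& object) {
    float dx = object.x - camera.x, dy = object.y - camera.y, dz = object.z - camera.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    int lod = 0;
    while (dist >= kSwitch[lod]) ++lod;          // never runs past the last entry
    return lod;
}

int main() {
    Vec3 camera = {0, 0, 0};
    Vec3 spheres[] = { {5, 0, 0}, {30, 0, 0}, {220, 0, 0} };
    for (const Vec3& s : spheres) {
        int lod = selectLod(camera, s);
        std::printf("sphere at x=%.0f -> LOD %d (%d vertices)\n",
                    s.x, lod, kModelVertices[lod]);
    }
    return 0;
}
```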
We use OpenGL for rendering because of its high efficiency in managing small batches, storing each model in a display
list and thus avoiding communication overheads. Additional vertex load is introduced by applying two directional
light sources ideally located infinitely far away.
The following table compares the performance of LoD aware rendering and a full detail (brute force) method.

                              Brute force   DLOD       Comparison
Rendered images               (omitted from this export)
Render time                   27.27 ms      1.29 ms    21× reduction
Scene vertices (thousands)    2328.48       109.44     21× reduction


Hierarchical LOD
Because hardware is geared towards large amounts of detail, rendering low-polygon objects may yield sub-optimal
performance. HLOD avoids the problem by grouping different objects together[4]. This allows for higher efficiency
as well as taking advantage of proximity considerations.

References
1. ^ Communications of the ACM, October 1976 Volume 19 Number 10. Pages 547-554. Hierarchical Geometric
Models for Visible Surface Algorithms by James H. Clark, University of California at Santa Cruz. Digitalized scan
is freely available at http://accad.osu.edu/~waynec/history/PDFs/clark-vis-surface.pdf.
2. ^ Catmull E., A Subdivision Algorithm for Computer Display of Curved Surfaces. Tech. Rep. UTEC-CSc-74-133,
University of Utah, Salt Lake City, Utah, Dec. 1974.
3. ^ de Boer, W.H., Fast Terrain Rendering using Geometrical Mipmapping, in flipCode featured articles, October
2000. Available at http://www.flipcode.com/tutorials/tut_geomipmaps.shtml.
4. ^ Carl Erikson's paper at http://www.cs.unc.edu/Research/ProjectSummaries/hlods.pdf provides a quick, yet
effective overview of HLOD mechanisms. A more involved description follows in his thesis, at
https://wwwx.cs.unc.edu/~geom/papers/documents/dissertations/erikson00.pdf.


Mipmap
In 3D computer graphics, mipmaps (also MIP maps) are pre-calculated, optimized collections of images that
accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts. They are widely used
in 3D computer games, flight simulators and other 3D imaging systems for texture filtering. Their use is known as
mipmapping. The letters "MIP" in the name are an acronym of the Latin phrase multum in parvo, meaning "much in
little".[1] Since mipmaps cannot be calculated in real time, additional storage space is required to take advantage of
them. They also form the basis of wavelet compression.

Basic Use
Mipmaps are used for:
-Speeding up rendering times. (Smaller textures equate to less memory usage.)
-Improving the quality. Rendering large textures where only small subsets of points are used can easily produce
moiré patterns.
-Reducing stress on GPU.


Origin
Mipmapping was invented by Lance Williams in 1983 and is described in his paper Pyramidal parametrics. From
the abstract: "This paper advances a 'pyramidal parametric' prefiltering and sampling geometry which minimizes
aliasing effects and assures continuity within and between target images." The "pyramid" can be imagined as the set
of mipmaps stacked on top of each other.

How it works
Each bitmap image of the mipmap set is a downsized duplicate of the
main texture, but at a certain reduced level of detail. Although the main
texture would still be used when the view is sufficient to render it in
full detail, the renderer will switch to a suitable mipmap image (or in
fact, interpolate between the two nearest, if trilinear filtering is
activated) when the texture is viewed from a distance or at a small size.
Rendering speed increases since the number of texture pixels ("texels")
being processed can be much lower with the simple textures. Artifacts
are reduced since the mipmap images are effectively already
anti-aliased, taking some of the burden off the real-time renderer.
Scaling down and up is made more efficient with mipmaps as well.

[Figure: An example of mipmap image storage: the principal image on the left is accompanied by filtered copies of
reduced size.]

If the texture has a basic size of 256 by 256 pixels, then the associated mipmap set may contain a series of 8 images,
each one-fourth the total area of the previous one: 128×128 pixels, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2, 1×1 (a
single pixel). If, for example, a scene is rendering this texture in a space of 40×40 pixels, then either a scaled up
version of the 32×32 (without trilinear interpolation) or an interpolation of the 64×64 and the 32×32 mipmaps (with
trilinear interpolation) would be used. The simplest way to generate these textures is by successive averaging;
however, more sophisticated algorithms (perhaps based on signal processing and Fourier transforms) can also be
used.
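A minimal sketch of this successive averaging is given below. It uses a greyscale, power-of-two image for brevity; an RGB texture would average each channel the same way, and the names are illustrative.

```cpp
// Sketch of mipmap generation by successive averaging: each level is built by
// averaging 2x2 blocks of the previous level until a single pixel remains.
#include <cstdio>
#include <vector>

using Image = std::vector<std::vector<float>>;   // [row][column], square, power of two

Image downsample(const Image& src) {
    size_t n = src.size() / 2;
    Image dst(n, std::vector<float>(n));
    for (size_t y = 0; y < n; ++y)
        for (size_t x = 0; x < n; ++x)
            dst[y][x] = 0.25f * (src[2*y][2*x]     + src[2*y][2*x + 1] +
                                 src[2*y + 1][2*x] + src[2*y + 1][2*x + 1]);
    return dst;
}

std::vector<Image> buildMipChain(Image base) {
    std::vector<Image> chain{ base };
    while (chain.back().size() > 1)
        chain.push_back(downsample(chain.back()));
    return chain;
}

int main() {
    Image base(8, std::vector<float>(8, 0.0f));
    base[3][3] = base[3][4] = base[4][3] = base[4][4] = 1.0f;  // bright centre
    auto chain = buildMipChain(base);
    for (const Image& level : chain)
        std::printf("level %zux%zu, centre value %.3f\n",
                    level.size(), level.size(), level[level.size() / 2][level.size() / 2]);
    return 0;
}
```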

[Figure: The original RGB image.]

The increase in storage space required for all of these mipmaps is a


third of the original texture, because the sum of the areas 1/4 + 1/16 +
1/64 + 1/256 + ⋯ converges to 1/3. In the case of an RGB image with
three channels stored as separate planes, the total mipmap can be
visualized as fitting neatly into a square area twice as large as the
dimensions of the original image on each side (twice as large on each
side is four times the original area - one plane of the original size for
each of red, green and blue makes three times the original area, and
then since the smaller textures take 1/3 of the original, 1/3 of three is
one, so they will take the same total space as just one of the original
red, green, or blue planes). This is the inspiration for the tag "multum
in parvo".

Anisotropic filtering
When a texture is seen at a steep angle, the filtering should not be uniform in each direction (it should be anisotropic
rather than isotropic), and a compromise resolution is used. If a higher resolution


is used, the cache coherence goes down, and the aliasing is increased in one direction, but the image tends to be
clearer. If a lower resolution is used, the cache coherence is improved, but the image is overly blurry.
Nonuniform mipmaps (also known as rip-maps) can solve this problem, although they have no direct support on modern
graphics hardware. With an 8×8 base texture map, the rip-map resolutions are 8×8, 8×4, 8×2, 8×1; 4×8, 4×4, 4×2, 4×1;
2×8, 2×4, 2×2, 2×1; 1×8, 1×4, 1×2 and 1×1. In general, for a 2^n × 2^n base texture map, the rip-map resolutions are
2^i × 2^j for i and j from 0 to n.

Summed-area tables
Summed-area tables can conserve memory and provide more
resolutions. However, they again hurt cache coherence, and need wider
types to store the partial sums than the base texture's word size. Thus,
modern graphics hardware does not support them either.

[Figure: In the case of an RGB image with three channels stored as separate planes, the total mipmap can be
visualized as fitting neatly into a square area twice as large as the dimensions of the original image on each side.
It also shows visually how using mipmaps requires 33% more memory.]

References
[1] http://staff.cs.psu.ac.th/iew/cs344-481/p1-williams.pdf

Newell's algorithm
Newell's Algorithm is a 3D computer graphics procedure for elimination of polygon cycles in the depth sorting
required in hidden surface removal. It was proposed in 1972 by brothers Martin Newell and Dick Newell, and Tom
Sancha, while all three were working at CADCentre.
In the depth sorting phase of hidden surface removal, if two polygons have no overlapping extents or extreme
minimum and maximum values in the x, y, and z directions, then they can be easily sorted. If two polygons, Q and P,
do have overlapping extents in the Z direction, then it is possible that cutting is necessary.
In that case Newell's algorithm tests the following:
1. Test for Z overlap; implied in the selection of the face Q from the sort list
2. The extreme coordinate values in X of the two faces do not overlap (minimax test in X)
3. The extreme coordinate values in Y of the two faces do not overlap (minimax test in Y)
4. All vertices of P lie deeper than the plane of Q
5. All vertices of Q lie closer to the viewpoint than the plane of P
6. The rasterisation of P and Q do not overlap

[Figure: Cyclic polygons must be eliminated to correctly sort them by depth.]

Note that the tests are given in order of increasing computational difficulty.
Note also that the polygons must be planar.
If the tests are all false, then the polygons must be split. Splitting is accomplished by selecting one polygon and
cutting it along the line of intersection with the other polygon. The above tests are again performed, and the
algorithm continues until all polygons pass the above tests.
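The sketch below illustrates the ordering of these tests for an eye at the origin. It is not the original 1972 formulation: test 1 is assumed to be handled by the caller's depth sort, test 6 is omitted, planes through the eye are not handled, and all names are illustrative.

```cpp
// Sketch of Newell's tests 2-5, cheapest first; returns true only if all fail,
// in which case the caller would have to split a polygon as described above.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
using Polygon = std::vector<Vec3>;

// Plane of a planar polygon via the Newell normal; n.v + d = 0 on the plane.
static void planeOf(const Polygon& poly, Vec3& n, double& d) {
    n = {0, 0, 0};
    for (size_t i = 0; i < poly.size(); ++i) {
        const Vec3& a = poly[i], & b = poly[(i + 1) % poly.size()];
        n.x += (a.y - b.y) * (a.z + b.z);
        n.y += (a.z - b.z) * (a.x + b.x);
        n.z += (a.x - b.x) * (a.y + b.y);
    }
    d = -(n.x * poly[0].x + n.y * poly[0].y + n.z * poly[0].z);
}

static double dist(const Vec3& n, double d, const Vec3& v) {
    return n.x * v.x + n.y * v.y + n.z * v.z + d;
}

// sameSideAsEye = +1: all vertices of `poly` on the eye's side of `ref`'s plane;
// sameSideAsEye = -1: all vertices on the far ("deeper") side.  Assumes the
// plane does not pass through the eye at the origin.
static bool allOnSide(const Polygon& poly, const Polygon& ref, int sameSideAsEye) {
    Vec3 n; double d;
    planeOf(ref, n, d);
    double eye = dist(n, d, {0, 0, 0});
    for (const Vec3& v : poly)
        if (dist(n, d, v) * eye * sameSideAsEye < -1e-12) return false;
    return true;
}

static bool disjoint(const Polygon& p, const Polygon& q, double Vec3::* axis) {
    double pMin = 1e300, pMax = -1e300, qMin = 1e300, qMax = -1e300;
    for (const Vec3& v : p) { pMin = std::fmin(pMin, v.*axis); pMax = std::fmax(pMax, v.*axis); }
    for (const Vec3& v : q) { qMin = std::fmin(qMin, v.*axis); qMax = std::fmax(qMax, v.*axis); }
    return pMax < qMin || qMax < pMin;
}

bool mayNeedSplitting(const Polygon& P, const Polygon& Q) {
    if (disjoint(P, Q, &Vec3::x)) return false;      // test 2: X extents
    if (disjoint(P, Q, &Vec3::y)) return false;      // test 3: Y extents
    if (allOnSide(P, Q, -1)) return false;           // test 4: P deeper than Q's plane
    if (allOnSide(Q, P, +1)) return false;           // test 5: Q nearer than P's plane
    return true;                                     // test 6 (rasterization) would follow
}

int main() {
    Polygon P = { {0,0,-5}, {1,0,-5}, {1,1,-5}, {0,1,-5} };   // square at z = -5
    Polygon Q = { {0,0,-3}, {1,0,-3}, {1,1,-3}, {0,1,-3} };   // square at z = -3
    std::printf("splitting possibly required: %s\n", mayNeedSplitting(P, Q) ? "yes" : "no");
    return 0;
}
```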


References
Sutherland, Ivan E.; Sproull, Robert F.; Schumacker, Robert A. (1974), "A characterization of ten hidden-surface
algorithms", Computing Surveys 6 (1): 1–55, doi:10.1145/356625.356626 [1].
Newell, M. E.; Newell, R. G.; Sancha, T. L. (1972), "A new approach to the shaded picture problem", Proc. ACM
National Conference, pp. 443–450.

References
[1] http://dx.doi.org/10.1145%2F356625.356626

Non-uniform rational B-spline


Non-uniform rational basis spline (NURBS) is a mathematical model commonly used in computer graphics for generating
and representing curves and surfaces. It offers great flexibility and precision for handling both analytic (surfaces
defined by common mathematical formulae) and modeled shapes.

History
Development of NURBS began in the 1950s by engineers who were in need of a mathematically precise representation of
freeform surfaces like those used for ship hulls, aerospace exterior surfaces, and car bodies, which could be exactly
reproduced whenever technically needed. Prior representations of this kind of surface only existed as a single
physical model created by a designer.

[Figure: Three-dimensional NURBS surfaces can have complex, organic shapes. Control points influence the directions
the surface takes. The outermost square below delineates the X/Y extents of the surface.]
[Figure: A NURBS curve. Animated version.]

The pioneers of this development were Pierre Bézier, who worked as an engineer at Renault, and Paul de Casteljau,
who worked at Citroën, both in France. Bézier worked nearly parallel to de Casteljau, neither knowing about the work
of the other. But because Bézier published the results of his work, the average computer graphics user today
recognizes splines which are represented with control points lying off the curve itself as Bézier splines, while
de Casteljau's name is only known and used for the algorithms he developed to evaluate parametric surfaces. In the
1960s it became clear that non-uniform, rational B-splines are a generalization of Bézier splines, which can be
regarded as uniform, non-rational B-splines.
At first NURBS were only used in the proprietary CAD packages of car companies. Later they became part of
standard computer graphics packages.



Real-time, interactive rendering of NURBS curves and surfaces was first made available on Silicon Graphics
workstations in 1989. In 1993, the first interactive NURBS modeller for PCs, called NöRBS, was developed by CAS
Berlin, a small startup company cooperating with the Technical University of Berlin. Today most professional
computer graphics applications available for desktop use offer NURBS technology, which is most often realized by
integrating a NURBS engine from a specialized company.

Use
NURBS are commonly used in computer-aided design
(CAD), manufacturing (CAM), and engineering (CAE)
and are part of numerous industry wide standards, such
as IGES, STEP, ACIS, and PHIGS. NURBS tools are
also found in various 3D modelling and animation
software packages.
[Figure: Motoryacht design.]

They can be efficiently handled by computer programs and yet allow for easy human interaction. NURBS surfaces are
functions of two parameters mapping to a surface in three-dimensional space. The shape of the surface is determined
by control points.
NURBS surfaces can represent simple geometrical
shapes in a compact form. T-splines and subdivision surfaces are more suitable for complex organic shapes because
they reduce the number of control points twofold in comparison with the NURBS surfaces.
In general, editing NURBS curves and surfaces is highly intuitive and predictable. Control points are always either
connected directly to the curve/surface, or act as if they were connected by a rubber band. Depending on the type of
user interface, editing can be realized via an element's control points, which are most obvious and common for
Bézier curves, or via higher level tools such as spline modeling or hierarchical editing.
A surface under construction, e.g. the hull of a motor yacht, is usually composed of several NURBS surfaces known
as patches. These patches should be fitted together in such a way that the boundaries are invisible. This is
mathematically expressed by the concept of geometric continuity.
Higher-level tools exist which benefit from the ability of NURBS to create and establish geometric continuity of
different levels:
Positional continuity (G0)
holds whenever the end positions of two curves or surfaces are coincidental. The curves or surfaces may still
meet at an angle, giving rise to a sharp corner or edge and causing broken highlights.
Tangential continuity (G1)
requires the end vectors of the curves or surfaces to be parallel and pointing the same way, ruling out sharp
edges. Because highlights falling on a tangentially continuous edge are always continuous and thus look
natural, this level of continuity can often be sufficient.
Curvature continuity (G2)
further requires the end vectors to be of the same length and rate of length change. Highlights falling on a
curvature-continuous edge do not display any change, causing the two surfaces to appear as one. This can be
visually recognized as perfectly smooth. This level of continuity is very useful in the creation of models that
require many bi-cubic patches composing one continuous surface.
Geometric continuity mainly refers to the shape of the resulting surface; since NURBS surfaces are functions, it is
also possible to discuss the derivatives of the surface with respect to the parameters. This is known as parametric


continuity. Parametric continuity of a given degree implies geometric continuity of that degree.
First- and second-level parametric continuity (C0 and C1) are for practical purposes identical to positional and
tangential (G0 and G1) continuity. Third-level parametric continuity (C2), however, differs from curvature
continuity in that its parameterization is also continuous. In practice, C2 continuity is easier to achieve if uniform
B-splines are used.
The definition of C^n continuity requires that the nth derivatives of adjacent curves/surfaces (d^n C(u)/du^n) are
equal at a joint.[1] Note that the (partial) derivatives of curves and surfaces are vectors that have a direction and
a magnitude; both should be equal.
Highlights and reflections can reveal the perfect smoothing, which is otherwise practically impossible to achieve
without NURBS surfaces that have at least G2 continuity. This same principle is used as one of the surface
evaluation methods whereby a ray-traced or reflection-mapped image of a surface with white stripes reflecting on it
will show even the smallest deviations on a surface or set of surfaces. This method is derived from car prototyping
wherein surface quality is inspected by checking the quality of reflections of a neon-light ceiling on the car surface.
This method is also known as "Zebra analysis".

Technical specifications
A NURBS curve is defined by its order, a set of weighted control points, and a knot vector. NURBS curves and
surfaces are generalizations of both B-splines and Bézier curves and surfaces, the primary difference being the
weighting of the control points, which makes NURBS curves rational (non-rational B-splines are a special case of
rational B-splines). Whereas Bézier curves evolve into only one parametric direction, usually called s or u, NURBS
surfaces evolve into two parametric directions, called s and t or u and v.
By evaluating a Bézier or a NURBS curve at various values of the parameter, the curve can be represented in Cartesian
two- or three-dimensional space. Likewise, by evaluating a NURBS surface at various values of the two parameters, the
surface can be represented in Cartesian space.
NURBS curves and surfaces are useful
for a number of reasons:
They are invariant under affine transformations:[2] operations like rotations and translations can be applied to
NURBS curves and surfaces by applying them to their control points.
They offer one common mathematical form for both standard analytical shapes (e.g., conics) and free-form shapes.
They provide the flexibility to design a large variety of shapes.
They reduce the memory consumption when storing shapes (compared to simpler methods).
They can be evaluated reasonably quickly by numerically stable and accurate algorithms.
In the next sections, NURBS is discussed in one dimension (curves). It should be noted that all of it can be
generalized to two or even more dimensions.


Control points
The control points determine the shape of the curve.[3] Typically, each point of the curve is computed by taking a
weighted sum of a number of control points. The weight of each point varies according to the governing parameter.
For a curve of degree d, the weight of any control point is only nonzero in d+1 intervals of the parameter space.
Within those intervals, the weight changes according to a polynomial function (basis functions) of degree d. At the
boundaries of the intervals, the basis functions go smoothly to zero, the smoothness being determined by the degree
of the polynomial.
As an example, the basis function of degree one is a triangle function. It rises from zero to one, then falls to zero
again. While it rises, the basis function of the previous control point falls. In that way, the curve interpolates between
the two points, and the resulting curve is a polygon, which is continuous, but not differentiable at the interval
boundaries, or knots. Higher degree polynomials have correspondingly more continuous derivatives. Note that
within the interval the polynomial nature of the basis functions and the linearity of the construction make the curve
perfectly smooth, so it is only at the knots that discontinuity can arise.
The fact that a single control point only influences those intervals where it is active is a highly desirable property,
known as local support. In modeling, it allows the changing of one part of a surface while keeping other parts equal.
Adding more control points allows better approximation to a given curve, although only a certain class of curves can
be represented exactly with a finite number of control points. NURBS curves also feature a scalar weight for each
control point. This allows for more control over the shape of the curve without unduly raising the number of control
points. In particular, it adds conic sections like circles and ellipses to the set of curves that can be represented
exactly. The term rational in NURBS refers to these weights.
The control points can have any dimensionality. One-dimensional points just define a scalar function of the
parameter. These are typically used in image processing programs to tune the brightness and color curves.
Three-dimensional control points are used abundantly in 3D modeling, where they are used in the everyday meaning
of the word 'point', a location in 3D space. Multi-dimensional points might be used to control sets of time-driven
values, e.g. the different positional and rotational settings of a robot arm. NURBS surfaces are just an application of
this. Each control 'point' is actually a full vector of control points, defining a curve. These curves share their degree
and the number of control points, and span one dimension of the parameter space. By interpolating these control
vectors over the other dimension of the parameter space, a continuous set of curves is obtained, defining the surface.

Knot vector
The knot vector is a sequence of parameter values that determines where and how the control points affect the
NURBS curve. The number of knots is always equal to the number of control points plus curve degree plus one (i.e.
number of control points plus curve order). The knot vector divides the parametric space in the intervals mentioned
before, usually referred to as knot spans. Each time the parameter value enters a new knot span, a new control point
becomes active, while an old control point is discarded. It follows that the values in the knot vector should be in
nondecreasing order, so (0, 0, 1, 2, 3, 3) is valid while (0, 0, 2, 1, 3, 3) is not.
Consecutive knots can have the same value. This then defines a knot span of zero length, which implies that two
control points are activated at the same time (and of course two control points become deactivated). This has impact
on continuity of the resulting curve or its higher derivatives; for instance, it allows the creation of corners in an
otherwise smooth NURBS curve. A number of coinciding knots is sometimes referred to as a knot with a certain
multiplicity. Knots with multiplicity two or three are known as double or triple knots. The multiplicity of a knot is
limited to the degree of the curve; since a higher multiplicity would split the curve into disjoint parts and it would
leave control points unused. For first-degree NURBS, each knot is paired with a control point.
The knot vector usually starts with a knot that has multiplicity equal to the order. This makes sense, since this
activates the control points that have influence on the first knot span. Similarly, the knot vector usually ends with a


knot of that multiplicity. Curves with such knot vectors start and end in a control point.
The individual knot values are not meaningful by themselves; only the ratios of the difference between the knot
values matter. Hence, the knot vectors (0, 0, 1, 2, 3, 3) and (0, 0, 2, 4, 6, 6) produce the same curve. The positions of
the knot values influence the mapping of parameter space to curve space. Rendering a NURBS curve is usually
done by stepping with a fixed stride through the parameter range. By changing the knot span lengths, more sample
points can be used in regions where the curvature is high. Another use is in situations where the parameter value has
some physical significance, for instance if the parameter is time and the curve describes the motion of a robot arm.
The knot span lengths then translate into velocity and acceleration, which are essential to get right to prevent damage
to the robot arm or its environment. This flexibility in the mapping is what the phrase non uniform in NURBS refers
to.
Necessary only for internal calculations, knots are usually not helpful to the users of modeling software. Therefore,
many modeling applications do not make the knots editable or even visible. It's usually possible to establish
reasonable knot vectors by looking at the variation in the control points. More recent versions of NURBS software
(e.g., Autodesk Maya and Rhinoceros 3D) allow for interactive editing of knot positions, but this is significantly less
intuitive than the editing of control points.

Comparison of Knots and Control Points


A common misconception is that each knot is paired with a control point. This is true only for degree 1 NURBS
(polylines). For higher degree NURBS, there are groups of 2 x degree knots that correspond to groups of (degree+1)
control points. For example, suppose we have a degree 3 NURBS with 7 control points and knots 0,0,0,1,2,5,8,8,8.
The first four control points are grouped with the first six knots. The second through fifth control points are grouped
with the knots 0,0,1,2,5,8. The third through sixth control points are grouped with the knots 0,1,2,5,8,8. The last four
control points are grouped with the last six knots.
Some modelers that use older algorithms for NURBS evaluation require two extra knot values for a total of
(degree+N+1) knots. When Rhino is exporting and importing NURBS geometry, it automatically adds and removes
these two superfluous knots as the situation requires.

Order
The order of a NURBS curve defines the number of nearby control points that influence any given point on the
curve. The curve is represented mathematically by a polynomial of degree one less than the order of the curve.
Hence, second-order curves (which are represented by linear polynomials) are called linear curves, third-order curves
are called quadratic curves, and fourth-order curves are called cubic curves. The number of control points must be
greater than or equal to the order of the curve.
In practice, cubic curves are the ones most commonly used. Fifth- and sixth-order curves are sometimes useful,
especially for obtaining continuous higher order derivatives, but curves of higher orders are practically never used
because they lead to internal numerical problems and tend to require disproportionately large calculation times.

Construction of the basis functions


The B-spline basis functions used in the construction of NURBS curves are usually denoted as N_{i,n}(u), in which i
corresponds to the i-th control point and n corresponds with the degree of the basis function. The parameter
dependence is frequently left out, so we can write N_{i,n}. The definition of these basis functions is recursive in n.
The degree-0 functions N_{i,0} are piecewise constant functions. They are one on the corresponding knot span and
zero everywhere else. Effectively, N_{i,n} is a linear interpolation of N_{i,n-1} and N_{i+1,n-1}. The latter two
functions are non-zero for n knot spans, overlapping for n-1 knot spans. The function N_{i,n} is computed as

N_{i,n}(u) = f_{i,n}(u) \, N_{i,n-1}(u) + g_{i+1,n}(u) \, N_{i+1,n-1}(u).

f_{i,n} rises linearly from zero to one on the interval where N_{i,n-1} is non-zero, while g_{i+1,n} falls from one
to zero on the interval where N_{i+1,n-1} is non-zero. As mentioned before, N_{i,1} is a triangular function, nonzero
over two knot spans, rising from zero to one on the first and falling to zero on the second knot span. Higher order
basis functions are non-zero over correspondingly more knot spans and have correspondingly higher degree. If u is the
parameter and k_i is the i-th knot, we can write the functions f and g as

f_{i,n}(u) = \frac{u - k_i}{k_{i+n} - k_i} \qquad and \qquad g_{i,n}(u) = \frac{k_{i+n} - u}{k_{i+n} - k_i}.

The functions f and g are positive when the corresponding lower order basis functions are non-zero. By induction on n
it follows that the basis functions are non-negative for all values of n and u. This makes the computation of the
basis functions numerically stable.
Again by induction, it can be proved that the sum of the basis functions for a particular value of the parameter is
unity. This is known as the partition of unity property of the basis functions.

[Figure: From bottom to top: linear basis functions (blue and green), their weight functions f and g, and the
resulting quadratic basis function. The knots are 0, 1, 2 and 2.5.]
[Figures: Linear basis functions; Quadratic basis functions.]

The figures show the linear and the quadratic basis functions for the knots {..., 0, 1, 2, 3, 4, 4.1, 5.1, 6.1, 7.1, ...}.
One knot span is considerably shorter than the others. On that knot span, the peak in the quadratic basis function is
more distinct, reaching almost one. Conversely, the adjoining basis functions fall to zero more quickly. In the
geometrical interpretation, this means that the curve approaches the corresponding control point closely. In case of
a double knot, the length of the knot span becomes zero and the peak reaches one exactly. The basis function is no
longer differentiable at that point. The curve will have a sharp corner if the neighbour control points are not
collinear.

General form of a NURBS curve


Using the definitions of the basis functions

from the previous paragraph, a NURBS curve takes the following

[5]

form:

In this,

is the number of control points

and

are the corresponding weights. The denominator is a

normalizing factor that evaluates to one if all weights are one. This can be seen from the partition of unity property
of the basis functions. It is customary to write this as

in which the functions

Non-uniform rational B-spline

83

are known as the rational basis functions.
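The sketch below evaluates these rational basis functions directly from the recursion above and uses them to compute points on a quadratic NURBS quarter circle. It assumes a clamped knot vector and positive weights, avoids the very end of the parameter range (where the half-open degree-0 spans would need special handling), and is meant only as an illustration; production code would normally use the more efficient de Boor algorithm.

```cpp
// Sketch: Cox-de Boor recursion for the basis functions N_{i,n}(u) and the
// rational combination C(u) = sum_i R_{i,n}(u) P_i described above.
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double x, y; };

// N_{i,n}(u) from the recursive definition of the basis functions.
double basis(int i, int n, double u, const std::vector<double>& knots) {
    if (n == 0)
        return (knots[i] <= u && u < knots[i + 1]) ? 1.0 : 0.0;
    double left = 0.0, right = 0.0;
    double denomL = knots[i + n] - knots[i];
    double denomR = knots[i + n + 1] - knots[i + 1];
    if (denomL > 0.0) left  = (u - knots[i]) / denomL * basis(i, n - 1, u, knots);
    if (denomR > 0.0) right = (knots[i + n + 1] - u) / denomR * basis(i + 1, n - 1, u, knots);
    return left + right;
}

// C(u) with R_{i,n} = N_{i,n} w_i / sum_j N_{j,n} w_j.
Point curvePoint(double u, int degree, const std::vector<Point>& ctrl,
                 const std::vector<double>& weights, const std::vector<double>& knots) {
    double denom = 0.0;
    Point p{0.0, 0.0};
    for (size_t i = 0; i < ctrl.size(); ++i) {
        double nw = basis(int(i), degree, u, knots) * weights[i];
        denom += nw;
        p.x += nw * ctrl[i].x;
        p.y += nw * ctrl[i].y;
    }
    return { p.x / denom, p.y / denom };
}

int main() {
    // Degree-2 NURBS quarter circle from (1,0) to (0,1); the middle control
    // point carries weight sqrt(2)/2, as in the circle example below.
    std::vector<Point>  ctrl    = { {1, 0}, {1, 1}, {0, 1} };
    std::vector<double> weights = { 1.0, 0.7071067811865476, 1.0 };
    std::vector<double> knots   = { 0, 0, 0, 1, 1, 1 };
    for (double u = 0.0; u < 1.0; u += 0.25) {
        Point p = curvePoint(u, 2, ctrl, weights, knots);
        std::printf("u=%.2f -> (%.4f, %.4f), radius %.6f\n",
                    u, p.x, p.y, std::sqrt(p.x * p.x + p.y * p.y));
    }
    return 0;
}
```

Every printed radius equals 1 to within floating-point precision, which is exactly the property of rational splines that the circle example further below relies on.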

General form of a NURBS surface


A NURBS surface is obtained as the tensor product of two NURBS curves, thus using two independent parameters
and (with indices and respectively):[6]

with

as rational basis functions.

Manipulating NURBS objects


A number of transformations can be applied to a NURBS object. For instance, if some curve is defined using a
certain degree and N control points, the same curve can be expressed using the same degree and N+1 control points.
In the process a number of control points change position and a knot is inserted in the knot vector. These
manipulations are used extensively during interactive design. When adding a control point, the shape of the curve
should stay the same, forming the starting point for further adjustments. A number of these operations are discussed
below.[7][8]

Knot insertion
As the term suggests, knot insertion inserts a knot into the knot vector. If the degree of the curve is n, then n - 1 control points are replaced by n new ones. The shape of the curve stays the same.

A knot can be inserted multiple times, up to the maximum multiplicity of the knot. This is sometimes referred to as
knot refinement and can be achieved by an algorithm that is more efficient than repeated knot insertion.
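The following C++ sketch shows one common way of carrying out a single knot insertion (a Boehm-style insertion applied to homogeneous control points); it is an illustration under the assumption of a clamped knot vector and a parameter value strictly inside the curve's domain, not a reproduction of any particular library routine:

#include <vector>

struct HPoint { double x, y, z, w; };   // homogeneous control point (w*x, w*y, w*z, w)

// Insert the knot value ubar once into a degree-n curve with knot vector U and
// homogeneous control points Pw (rational control points are lifted to homogeneous
// form first). Returns the new control points; the caller also inserts ubar into U.
std::vector<HPoint> insertKnot(double ubar, int n,
                               const std::vector<double>& U,
                               const std::vector<HPoint>& Pw)
{
    // Find the knot span k with U[k] <= ubar < U[k+1].
    int k = 0;
    while (k + 1 < static_cast<int>(U.size()) && U[k + 1] <= ubar)
        ++k;

    std::vector<HPoint> Q(Pw.size() + 1);
    for (int i = 0; i < static_cast<int>(Q.size()); ++i) {
        if (i <= k - n) {
            Q[i] = Pw[i];                   // control points before the affected range
        } else if (i >= k + 1) {
            Q[i] = Pw[i - 1];               // control points after it, shifted by one
        } else {
            // The n - 1 old control points in between are replaced by n new ones,
            // each a blend of two old neighbours.
            double alpha = (ubar - U[i]) / (U[i + n] - U[i]);
            Q[i].x = alpha * Pw[i].x + (1.0 - alpha) * Pw[i - 1].x;
            Q[i].y = alpha * Pw[i].y + (1.0 - alpha) * Pw[i - 1].y;
            Q[i].z = alpha * Pw[i].z + (1.0 - alpha) * Pw[i - 1].z;
            Q[i].w = alpha * Pw[i].w + (1.0 - alpha) * Pw[i - 1].w;
        }
    }
    return Q;
}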

Knot removal
Knot removal is the reverse of knot insertion. Its purpose is to remove knots and the associated control points in
order to get a more compact representation. Obviously, this is not always possible while retaining the exact shape of
the curve. In practice, a tolerance in the accuracy is used to determine whether a knot can be removed. The process is
used to clean up after an interactive session in which control points may have been added manually, or after
importing a curve from a different representation, where a straightforward conversion process leads to redundant
control points.


Degree elevation
A NURBS curve of a particular degree can always be represented by a NURBS curve of higher degree. This is
frequently used when combining separate NURBS curves, e.g. when creating a NURBS surface interpolating
between a set of NURBS curves or when unifying adjacent curves. In the process, the different curves should be
brought to the same degree, usually the maximum degree of the set of curves. The process is known as degree
elevation.

Curvature
The most important property in differential geometry is the curvature κ. It describes the local properties (edges, corners, etc.) and relations between the first and second derivative, and thus, the precise curve shape. Having determined the derivatives it is easy to compute

    κ = |C'(u) × C''(u)| / |C'(u)|³

or, approximated as the arclength from the second derivative, κ = |C''(s₀)|. The direct computation of the curvature κ with these equations is the big advantage of parameterized curves against their polygonal representations.

Example: a circle
Non-rational splines or Bézier curves may approximate a circle, but they cannot represent it exactly. Rational splines can represent any conic section, including the circle, exactly. This representation is not unique, but one possibility appears below:

    x    y    z    weight
    1    0    0    1
    1    1    0    √2/2
    0    1    0    1
   -1    1    0    √2/2
   -1    0    0    1
   -1   -1    0    √2/2
    0   -1    0    1
    1   -1    0    √2/2
    1    0    0    1

The order is three, since a circle is a quadratic curve and the spline's order is one more than the degree of its piecewise polynomial segments. The knot vector is {0, 0, 0, π/2, π/2, π, π, 3π/2, 3π/2, 2π, 2π, 2π}. The
circle is composed of four quarter circles, tied together with double knots. Although double knots in a third order
NURBS curve would normally result in loss of continuity in the first derivative, the control points are positioned in
such a way that the first derivative is continuous. In fact, the curve is infinitely differentiable everywhere, as it must
be if it exactly represents a circle.
The curve represents a circle exactly, but it is not exactly parametrized in the circle's arc length. This means, for example, that the point at t does not lie at (sin(t), cos(t)) (except for the start, middle and end point of each quarter circle, since the representation is symmetrical). This would be impossible, since the x coordinate of the circle would provide an exact rational polynomial expression for cos(t), which is impossible. The circle does make one full revolution as its parameter t goes from 0 to 2π, but this is only because the knot vector was arbitrarily chosen as multiples of π/2.
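As an illustrative check of this representation (assuming the Point3 type and the evaluateNurbs sketch given earlier in this article), sampling the curve at many parameter values should always yield points at distance one from the origin:

#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const double r2 = std::sqrt(2.0) / 2.0;
    const double pi = 3.14159265358979323846;
    std::vector<Point3> P = { {1,0,0}, {1,1,0}, {0,1,0}, {-1,1,0}, {-1,0,0},
                              {-1,-1,0}, {0,-1,0}, {1,-1,0}, {1,0,0} };
    std::vector<double> w = { 1, r2, 1, r2, 1, r2, 1, r2, 1 };
    std::vector<double> knots = { 0, 0, 0, pi/2, pi/2, pi, pi,
                                  3*pi/2, 3*pi/2, 2*pi, 2*pi, 2*pi };
    for (double u = 0.0; u < 2*pi; u += 0.1) {
        Point3 c = evaluateNurbs(u, 2, P, w, knots);   // degree 2 = order 3
        std::printf("u = %.2f   |C(u)| = %.6f\n", u, std::hypot(c.x, c.y));
    }
    return 0;
}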


References
Les Piegl & Wayne Tiller: The NURBS Book, Springer-Verlag 19951997 (2nd ed.). The main reference for
Bzier, B-Spline and NURBS; chapters on mathematical representation and construction of curves and surfaces,
interpolation, shape modification, programming concepts.
Dr. Thomas Sederberg, BYU NURBS, http://cagd.cs.byu.edu/~557/text/ch6.pdf
Dr. Lyle Ramshaw. Blossoming: A connect-the-dots approach to splines, Research Report 19, Compaq Systems
Research Center, Palo Alto, CA, June 1987
David F. Rogers: An Introduction to NURBS with Historical Perspective, Morgan Kaufmann Publishers 2001.
Good elementary book for NURBS and related issues.
Gershenfeld, Neil A. The nature of mathematical modeling. Cambridge university press, 1999.

Notes
[1]
[2]
[3]
[4]
[5]
[6]

Foley, van Dam, Feiner & Hughes: Computer Graphics: Principles and Practice, section 11.2, Addison-Wesley 1996 (2nd ed.).
David F. Rogers: An Introduction to NURBS with Historical Perspective, section 7.1
Gershenfeld: The Nature of Mathematical Modeling, page 141, Cambridge-University-Press 1999
Les Piegl & Wayne Tiller: The NURBS Book, chapter 2, sec. 2
Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 2
Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 4

[7] Les Piegl & Wayne Tiller: The NURBS Book, chapter 5
[8] L. Piegl, Modifying the shape of rational B-splines. Part 1: curves, Computer-Aided Design, Volume 21, Issue 8, October 1989, Pages
509-518, ISSN 0010-4485, http://dx.doi.org/10.1016/0010-4485(89)90059-6.

External links
Clear explanation of NURBS for non-experts (http://www.rw-designer.com/NURBS)
Interactive NURBS demo (http://geometrie.foretnik.net/files/NURBS-en.swf)
About Nonuniform Rational B-Splines - NURBS (http://www.cs.wpi.edu/~matt/courses/cs563/talks/nurbs.
html)
An Interactive Introduction to Splines (http://ibiblio.org/e-notes/Splines/Intro.htm)
http://www.cs.bris.ac.uk/Teaching/Resources/COMS30115/all.pdf
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/DONAVANIK/bezier.html
http://mathcs.holycross.edu/~croyden/csci343/notes.html (Lecture 33: Bézier Curves, Splines)
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/notes.html
A free software package for handling NURBS curves, surfaces and volumes (http://octave.sourceforge.net/
nurbs) in Octave and Matlab


Normal
In geometry, a normal is an object such as a line or vector that is
perpendicular to a given object. For example, in the two-dimensional
case, the normal line to a curve at a given point is the line
perpendicular to the tangent line to the curve at the point.
In the three-dimensional case a surface normal, or simply normal, to
a surface at a point P is a vector that is perpendicular to the tangent
plane to that surface at P. The word "normal" is also used as an
adjective: a line normal to a plane, the normal component of a force,
the normal vector, etc. The concept of normality generalizes to
orthogonality.
The concept has been generalized to differentiable manifolds of
arbitrary dimension embedded in a Euclidean space. The normal
vector space or normal space of a manifold at a point P is the set of
the vectors which are orthogonal to the tangent space at P. In the case
of differential curves, the curvature vector is a normal vector of special
interest.

A polygon and two of its normal vectors

The normal is often used in computer graphics to determine a surface's


orientation toward a light source for flat shading, or the orientation of
each of the corners (vertices) to mimic a curved surface with Phong
shading.

Normal to surfaces in 3D space


Calculating a surface normal
For a convex polygon (such as a triangle), a surface normal can be
calculated as the vector cross product of two (non-parallel) edges of the
polygon.
For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal.

A normal to a surface at a point is the same as a normal to the tangent plane to that surface at that point.

For a plane given by the equation r(α, β) = a + αb + βc, i.e., a is a point on the plane and b and c are (non-parallel) vectors lying on the plane, the normal to the plane is a vector normal to both b and c, which can be found as the cross product n = b × c.
For a hyperplane in n+1 dimensions, given by the equation r = a_0 + α_1 a_1 + ... + α_n a_n, where a_0 is a point on the hyperplane and a_i for i = 1, ..., n are non-parallel vectors lying on the hyperplane, a normal to the hyperplane is any vector in the null space of the matrix A whose rows are the vectors a_i, i.e. A = [a_1, ..., a_n].
That is, any vector orthogonal to all in-plane vectors is by definition a surface normal.
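A minimal C++ sketch of the cross-product construction for a triangle (types and names are illustrative):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Surface normal of the triangle (a, b, c): the cross product of two of its edges,
// normalized to unit length. The winding order a -> b -> c decides which side it
// points to. Assumes a non-degenerate triangle.
Vec3 triangleNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n = cross(e1, e2);
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}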


If a (possibly non-flat) surface S is parameterized by a system of curvilinear coordinates x(s, t), with s and t real variables, then a normal is given by the cross product of the partial derivatives

    n = ∂x/∂s × ∂x/∂t.

If a surface S is given implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point (x, y, z) on the surface is given by the gradient

    n = ∇F(x, y, z),

since the gradient at any point is perpendicular to the level set, and F(x, y, z) = 0 (the surface) is a level set of F.
For a surface S given explicitly as a function f(x, y) of the independent variables x, y (e.g., z = f(x, y)), its normal can be found in at least two equivalent ways. The first one is obtaining its implicit form F(x, y, z) = z - f(x, y) = 0, from which the normal follows readily as the gradient

    ∇F(x, y, z) = (-∂f/∂x, -∂f/∂y, 1).

(Notice that the implicit form could be defined alternatively as F(x, y, z) = f(x, y) - z; these two forms correspond to the interpretation of the surface being oriented upwards or downwards, respectively, as a consequence of the difference in the sign of the partial derivative ∂F/∂z.) The second way of obtaining the normal follows directly from the gradient of the explicit form, ∇f(x, y); by inspection,

    n = k - ∇f(x, y),

where k is the upward unit vector. Note that this is equal to n = -(∂f/∂x) i - (∂f/∂y) j + k, where i and j are the x and y unit vectors.
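A short C++ sketch of the upward-oriented normal of an explicit surface z = f(x, y), reusing the Vec3 type from the sketch above; the partial derivatives are estimated with central differences when analytic ones are not available, and the step size h is an arbitrary illustrative choice:

#include <functional>

// Upward-oriented (unnormalized) normal of the explicit surface z = f(x, y):
//   n = (-df/dx, -df/dy, 1).
Vec3 explicitSurfaceNormal(const std::function<double(double, double)>& f,
                           double x, double y, double h = 1e-4)
{
    double dfdx = (f(x + h, y) - f(x - h, y)) / (2.0 * h);
    double dfdy = (f(x, y + h) - f(x, y - h)) / (2.0 * h);
    return { -dfdx, -dfdy, 1.0 };
}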
If a surface does not have a tangent plane at a point, it does not have a normal at that point either. For example, a
cone does not have a normal at its tip nor does it have a normal along the edge of its base. However, the normal to
the cone is defined almost everywhere. In general, it is possible to define a normal almost everywhere for a surface
that is Lipschitz continuous.

Uniqueness of the normal


A normal to a surface does not have a unique direction; the vector pointing in the opposite direction of a surface normal is also a surface normal. For a surface which is the topological boundary of a set in three dimensions, one can distinguish between the inward-pointing normal and outer-pointing normal, which can help define the normal in a unique way. For an oriented surface, the surface normal is usually determined by the right-hand rule. If the normal is constructed as the cross product of tangent vectors (as described in the text above), it is a pseudovector.

A vector field of normals to a surface


Transforming normals
When applying a transform to a surface it is sometimes convenient to derive normals for the resulting surface from the original normals. All points P on the tangent plane are transformed to P'. We want to find the normal n' perpendicular to the transformed tangent plane. Let t be a vector on the tangent plane and Ml be the upper 3x3 matrix (the translation part of the transformation does not apply to normal or tangent vectors). The transformed tangent vector is Ml t; for the transformed normal W n to remain perpendicular to it for every tangent vector t we need

    (W n) · (Ml t) = nᵀ Wᵀ Ml t = 0,

which holds when Wᵀ Ml = I, that is W = (Ml⁻¹)ᵀ.

So use the inverse transpose of the linear transformation (the upper 3x3 matrix) when transforming surface normals.
Also note that the inverse transpose is equal to the original matrix if the matrix is orthonormal, i.e. purely rotational
with no scaling or shearing.
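A C++ sketch of this rule, reusing the Vec3 type from the earlier sketch; the 3x3 inverse is formed from cofactors, M is indexed as M[row][col], and all names are illustrative:

// Transform a surface normal n by the inverse transpose of the upper 3x3 matrix M,
// as derived above. Re-normalize afterwards if a unit normal is required.
Vec3 transformNormal(const double M[3][3], const Vec3& n)
{
    // Inverse of M from its cofactors, divided by the determinant.
    double det = M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
               - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
               + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
    double inv[3][3];
    inv[0][0] =  (M[1][1] * M[2][2] - M[1][2] * M[2][1]) / det;
    inv[0][1] = -(M[0][1] * M[2][2] - M[0][2] * M[2][1]) / det;
    inv[0][2] =  (M[0][1] * M[1][2] - M[0][2] * M[1][1]) / det;
    inv[1][0] = -(M[1][0] * M[2][2] - M[1][2] * M[2][0]) / det;
    inv[1][1] =  (M[0][0] * M[2][2] - M[0][2] * M[2][0]) / det;
    inv[1][2] = -(M[0][0] * M[1][2] - M[0][2] * M[1][0]) / det;
    inv[2][0] =  (M[1][0] * M[2][1] - M[1][1] * M[2][0]) / det;
    inv[2][1] = -(M[0][0] * M[2][1] - M[0][1] * M[2][0]) / det;
    inv[2][2] =  (M[0][0] * M[1][1] - M[0][1] * M[1][0]) / det;

    // n' = (M^-1)^T n, i.e. multiply n by the columns of the inverse.
    return { inv[0][0] * n.x + inv[1][0] * n.y + inv[2][0] * n.z,
             inv[0][1] * n.x + inv[1][1] * n.y + inv[2][1] * n.z,
             inv[0][2] * n.x + inv[1][2] * n.y + inv[2][2] * n.z };
}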

Hypersurfaces in n-dimensional space


The definition of a normal to a surface in three-dimensional space can be extended to (n-1)-dimensional hypersurfaces in an n-dimensional space. A hypersurface may be locally defined implicitly as the set of points (x_1, x_2, ..., x_n) satisfying an equation F(x_1, x_2, ..., x_n) = 0, where F is a given scalar function. If F is continuously differentiable then the hypersurface is a differentiable manifold in the neighbourhood of the points where the gradient is not null. At these points the normal vector space has dimension one and is generated by the gradient

    ∇F(x_1, x_2, ..., x_n) = (∂F/∂x_1, ∂F/∂x_2, ..., ∂F/∂x_n).

The normal line at a point of the hypersurface is defined only if the gradient is not null. It is the line passing through the point and having the gradient as direction.

Varieties defined by implicit equations in n-dimensional space


A differential variety defined by implicit equations in the n-dimensional space is the set of the common zeros of a finite set of differentiable functions in n variables

    f_1(x_1, ..., x_n), ..., f_k(x_1, ..., x_n).

The Jacobian matrix of the variety is the k×n matrix whose i-th row is the gradient of f_i. By the implicit function theorem, the variety is a manifold in the neighborhood of a point of it where the Jacobian matrix has rank k. At such a point P, the normal vector space is the vector space generated by the values at P of the gradient vectors of the f_i.
In other words, a variety is defined as the intersection of k hypersurfaces, and the normal vector space at a point is
the vector space generated by the normal vectors of the hypersurfaces at the point.
The normal (affine) space at a point P of the variety is the affine subspace passing through P and generated by the
normal vector space at P.
These definitions may be extended verbatim to the points where the variety is not a manifold.


Example
Let V be the variety defined in the 3-dimensional space by the equations

    x y = 0,    z = 0.

This variety is the union of the x-axis and the y-axis.

At a point (a, 0, 0) where a ≠ 0, the rows of the Jacobian matrix are (0, 0, 1) and (0, a, 0). Thus the normal affine space is the plane of equation x = a. Similarly, if b ≠ 0, the normal plane at (0, b, 0) is the plane of equation y = b.
At the point (0, 0, 0) the rows of the Jacobian matrix are (0, 0, 1) and (0, 0, 0). Thus the normal vector space and the normal affine space have dimension 1 and the normal affine space is the z-axis.

Uses

Surface normals are essential in defining surface integrals of vector fields.


Surface normals are commonly used in 3D computer graphics for lighting calculations; see Lambert's cosine law.
Surface normals are often adjusted in 3D computer graphics by normal mapping.
Render layers containing surface normal information may be used in Digital compositing to change the apparent
lighting of rendered elements.

Normal in geometric optics


The normal is the line perpendicular to the surface of an optical
medium. In reflection of light, the angle of incidence and the angle of
reflection are respectively the angle between the normal and the
incident ray and the angle between the normal and the reflected ray.

References
External links
An explanation of normal vectors (http://msdn.microsoft.com/
en-us/library/bb324491(VS.85).aspx) from Microsoft's MSDN
Clear pseudocode for calculating a surface normal (http://www.
opengl.org/wiki/Calculating_a_Surface_Normal) from either a
triangle or polygon.

Diagram of specular reflection


Normal mapping
In 3D computer graphics, normal
mapping, or "Dot3 bump mapping", is
a technique used for faking the lighting
of bumps and dents - an
implementation of Bump mapping. It
is used to add details without using
more polygons. A common use of this
technique is to greatly enhance the
appearance and details of a low
polygon model by generating a normal
map from a high polygon model or
height map.

Normal mapping used to re-detail simplified meshes.

Normal maps are commonly stored as


regular RGB images where the RGB components corresponds to the X, Y, and Z coordinates, respectively, of the
surface normal.

History
The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to
Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996,[1] where this approach was used for
creating displacement maps over nurbs. In 1998, two papers were presented with key ideas for transferring details
with normal maps from high to low polygon meshes: "Appearance Preserving Simplification", by Cohen et al.
SIGGRAPH 1998,[2] and "A general method for preserving attribute values on simplified meshes" by Cignoni et al.
IEEE Visualization '98.[3] The former introduced the idea of storing surface normals directly in a texture, rather than
displacements, though it required the low-detail model to be generated by a particular constrained simplification
algorithm. The latter presented a simpler approach that decouples the high and low polygonal mesh and allows the
recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that is
not dependent on how the low-detail model was created. The combination of storing normals in a texture, with the
more general creation process is still used by most currently available tools.

How it works
To calculate the Lambertian (diffuse)
lighting of a surface, the unit vector
from the shading point to the light
source is dotted with the unit vector
normal to that surface, and the result is
the intensity of the light on that
surface. Imagine a polygonal model of a sphere - you can only approximate the shape of the surface. By using a 3-channel bitmap textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.

Example of a normal map (center) with the scene it was calculated from (left) and the result when applied to a flat surface (right).
Since a normal will be used in the dot product calculation for the diffuse lighting computation, a unit normal with components in the range [-1, 1] is remapped to the [0, 255] range of an RGB image as (n + {1, 1, 1})/2 * 255. The straight-up normal {0, 0, 1} is therefore stored as {128, 128, 255}, which gives the characteristic light blue colour seen in tangent-space normal maps: the blue (Z) channel carries the out-of-surface component, while the red and green (X and Y) channels carry the components lying flat in the surface. For example, the normal {0.3, 0.4, 0.866} is remapped to ({0.3, 0.4, 0.866}/2 + {0.5, 0.5, 0.5}) * 255 = {0.65, 0.7, 0.933} * 255 ≈ {166, 179, 238}. Conventions differ between tools, however: some flip the sign of one axis (for example the Y axis between OpenGL- and DirectX-style maps) so that the stored normal points consistently towards the viewer, and the light or view vector must be brought into the same space and convention before the dot product is taken.
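A minimal C++ sketch of this encoding and its inverse (standalone and illustrative; real pipelines differ in channel order, precision and sign conventions):

#include <cstdint>

struct UnitNormal { float x, y, z; };   // components in [-1, 1]

// Store a unit normal in an 8-bit RGB texel: rgb = (n + 1) / 2 * 255.
// The flat normal {0, 0, 1} becomes {128, 128, 255}, the familiar light blue
// of tangent-space normal maps.
void encodeNormal(const UnitNormal& n, std::uint8_t rgb[3])
{
    rgb[0] = static_cast<std::uint8_t>((n.x * 0.5f + 0.5f) * 255.0f + 0.5f);
    rgb[1] = static_cast<std::uint8_t>((n.y * 0.5f + 0.5f) * 255.0f + 0.5f);
    rgb[2] = static_cast<std::uint8_t>((n.z * 0.5f + 0.5f) * 255.0f + 0.5f);
}

// Recover the normal from a texel. Some tools flip the Y (green) channel, so a
// renderer may have to negate it depending on the convention the map was baked with.
UnitNormal decodeNormal(const std::uint8_t rgb[3])
{
    return { rgb[0] / 255.0f * 2.0f - 1.0f,
             rgb[1] / 255.0f * 2.0f - 1.0f,
             rgb[2] / 255.0f * 2.0f - 1.0f };
}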

Calculating tangent space


In order to find the perturbation in the normal, the tangent space must be correctly calculated.[4] Most often the normal is perturbed in a fragment shader after applying the model and view matrices. Typically the geometry provides a normal and tangent. The tangent is part of the tangent plane and can be transformed simply with the linear part of the matrix (the upper 3x3). However, the normal needs to be transformed by the inverse transpose. Most applications will want the cotangent to match the transformed geometry (and the associated UVs). So instead of enforcing the cotangent to be perpendicular to the tangent, it is generally preferable to transform the cotangent just like the tangent. Let t be the tangent, b the cotangent, n the normal, M3x3 the linear part of the model matrix, and V3x3 the linear part of the view matrix; then

    t' = V3x3 M3x3 t
    b' = V3x3 M3x3 b
    n' = ((V3x3 M3x3)⁻¹)ᵀ n

Normal mapping in video games


Interactive normal map rendering was originally only possible on PixelFlow, a parallel rendering machine built at the
University of North Carolina at Chapel Hill.[citation needed] It was later possible to perform normal mapping on
high-end SGI workstations using multi-pass rendering and framebuffer operations[5] or on low end PC hardware with
some tricks using paletted textures. However, with the advent of shaders in personal computers and game consoles,
normal mapping became widely used in commercial video games starting in late 2003. Normal mapping's popularity
for real-time rendering is due to its good quality to processing requirements ratio versus other methods of producing
similar effects. Much of this efficiency is made possible by distance-indexed detail scaling, a technique which
selectively decreases the detail of the normal map of a given texture (cf. mipmapping), meaning that more distant
surfaces require less complex lighting simulation.
Basic normal mapping can be implemented in any hardware that supports palettized textures. The first game console
to have specialized normal mapping hardware was the Sega Dreamcast. However, Microsoft's Xbox was the first
console to widely use the effect in retail games. Out of the sixth generation consoles, only the PlayStation 2's GPU
lacks built-in normal mapping support. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal
mapping and are beginning to implement parallax mapping. The Nintendo 3DS has been shown to support normal
mapping, as demonstrated by Resident Evil Revelations and Metal Gear Solid: Snake Eater.


References
[1] Krishnamurthy and Levoy, Fitting Smooth Surfaces to Dense Polygon Meshes (http://www-graphics.stanford.edu/papers/surfacefitting/), SIGGRAPH 1996
[2] Cohen et al., Appearance-Preserving Simplification (http://www.cs.unc.edu/~geom/APS/APS.pdf), SIGGRAPH 1998 (PDF)
[3] Cignoni et al., A general method for preserving attribute values on simplified meshes (http://vcg.isti.cnr.it/publications/papers/rocchini.pdf), IEEE Visualization 1998 (PDF)
[4] Mikkelsen, Simulation of Wrinkled Surfaces Revisited (http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf), 2008 (PDF)
[5] Heidrich and Seidel, Realistic, Hardware-accelerated Shading and Lighting (http://www.cs.ubc.ca/~heidrich/Papers/Siggraph.99.pdf), SIGGRAPH 1999 (PDF)

External links
Introduction to Normal Mapping (http://www.game-artist.net/forums/vbarticles.php?do=article&
articleid=16)
Blender Normal Mapping (http://wiki.blender.org/index.php/Manual/Bump_and_Normal_Maps)
Normal Mapping with paletted textures (http://vcg.isti.cnr.it/activities/geometryegraphics/bumpmapping.
html) using old OpenGL extensions.
Normal Map Photography (http://zarria.net/nrmphoto/nrmphoto.html) Creating normal maps manually by
layering digital photographs
Normal Mapping Explained (http://www.3dkingdoms.com/tutorial.htm)
Simple Normal Mapper (http://sourceforge.net/projects/simplenormalmapper) Open Source normal map
generator

Oren-Nayar reflectance model

The Oren-Nayar reflectance model,[1] developed by Michael Oren and Shree K. Nayar, is a reflectivity model for
diffuse reflection from rough surfaces. It has been shown to accurately predict the appearance of a wide range of
natural surfaces, such as concrete, plaster, sand, etc.

Introduction
Reflectance is a physical property of a material that
describes how it reflects incident light. The appearance
of various materials are determined to a large extent by
their reflectance properties. Most reflectance models
can be broadly classified into two categories: diffuse
and specular. In computer vision and computer
graphics, the diffuse component is often assumed to be
Lambertian. A surface that obeys Lambert's Law
appears equally bright from all viewing directions. This
model for diffuse reflection was proposed by Johann Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision and graphics. For a large number of real-world surfaces, such as concrete, plaster, sand, etc., however, the Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the Lambertian model does not take the roughness of the surface into account.

Comparison of a matte vase with the rendering based on the Lambertian model. Illumination is from the viewing direction.
Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch.
Since photo receptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic (much larger than the wavelength of incident light) surface roughness is often projected onto a single detection element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert's law may hold well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to violate Lambert's law. The primary reason for this is that the foreshortened facet areas will change for different viewing directions, and thus the surface appearance will be view-dependent.
Analysis of this phenomenon has a long history and can
be traced back almost a century. Past work has resulted
in empirical models designed to fit experimental data as
well as theoretical results derived from first principles.
Much of this work was motivated by the
non-Lambertian reflectance of the moon.
The Oren-Nayar reflectance model, developed by Michael Oren and Shree K. Nayar in 1993, predicts reflectance from rough diffuse surfaces for the entire hemisphere of source and sensor directions. The model takes into account complex physical phenomena such as masking, shadowing and interreflections between points on the surface facets. It can be viewed as a generalization of Lambert's law. Today, it is widely
used in computer graphics and animation for rendering
rough surfaces.[citation needed] It also has important
implications for human vision and computer vision
problems, such as shape from shading, photometric stereo, etc.

Aggregation of the reflection from rough surfaces

Formulation
The surface roughness model used in the
derivation of the Oren-Nayar model is the
microfacet model, proposed by Torrance
and Sparrow,[2] which assumes the surface
to be composed of long symmetric
V-cavities. Each cavity consists of two
planar facets. The roughness of the surface
is specified using a probability function for
the distribution of facet slopes. In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, σ², is a measure of the roughness of the surfaces. The standard deviation of the facet slopes (gradient of the surface elevation), σ, ranges in [0, ∞).

Diagram of surface reflection

In the Oren-Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. As shown in the image at right, given the radiance of the incoming light E_0, the radiance of the reflected light L_r, according to the Oren-Nayar model, is

    L_r = (ρ/π) · E_0 · cos θ_i · ( A + B · max(0, cos(φ_i - φ_r)) · sin α · tan β )

where

    A = 1 - 0.5 σ² / (σ² + 0.33),
    B = 0.45 σ² / (σ² + 0.09),
    α = max(θ_i, θ_r),
    β = min(θ_i, θ_r),

ρ is the albedo of the surface, and σ is the roughness of the surface. In the case of σ = 0 (i.e., all facets in the same plane), we have A = 1 and B = 0, and thus the Oren-Nayar model simplifies to the Lambertian model:

    L_r = (ρ/π) · E_0 · cos θ_i
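A C++ sketch of the model as written above (illustrative only; production renderers usually work with direction vectors rather than explicit angles and fold the albedo and irradiance in elsewhere). Angles are in radians: θ_i and θ_r are the inclinations of the light and view directions from the surface normal, φ_i and φ_r their azimuths:

#include <algorithm>
#include <cmath>

// Returns the reflected radiance L_r for roughness sigma, albedo rho and
// incoming radiance E0, following the formula given above.
double orenNayar(double theta_i, double phi_i,
                 double theta_r, double phi_r,
                 double sigma, double rho, double E0)
{
    const double pi = 3.14159265358979323846;
    double s2    = sigma * sigma;
    double A     = 1.0 - 0.5 * s2 / (s2 + 0.33);
    double B     = 0.45 * s2 / (s2 + 0.09);
    double alpha = std::max(theta_i, theta_r);
    double beta  = std::min(theta_i, theta_r);
    double c     = std::max(0.0, std::cos(phi_i - phi_r));

    return (rho / pi) * E0 * std::cos(theta_i)
           * (A + B * c * std::sin(alpha) * std::tan(beta));
}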

Results
Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the
Lambertian and Oren-Nayar models. It shows that the Oren-Nayar model predicts the diffuse reflectance for rough
surfaces more accurately than the Lambertian model.
Here are rendered images of a sphere using the Oren-Nayar model, corresponding to different surface roughnesses (i.e. different values of σ):

Plot of the brightness of the rendered images, compared with the measurements on a cross section of the real vase.


Connection with other microfacet reflectance models


Oren-Nayar model: rough opaque diffuse surfaces; each facet is Lambertian (diffuse).
Torrance-Sparrow model: rough opaque specular surfaces (glossy surfaces); each facet is a mirror (specular).
Microfacet model for refraction:[3] rough transparent surfaces; each facet is made of glass (transparent).

References
[1] M. Oren and S.K. Nayar, "Generalization of Lambert's Reflectance Model (http://www1.cs.columbia.edu/CAVE/publications/pdfs/Oren_SIGGRAPH94.pdf)". SIGGRAPH. pp. 239-246, Jul, 1994
[2] Torrance, K. E. and Sparrow, E. M. Theory for off-specular reflection from roughened surfaces (http://www.graphics.cornell.edu/~westin/pubs/TorranceSparrowJOSA1967.pdf). J. Opt. Soc. Am. 57, 9 (Sep 1967) 1105-1114
[3] B. Walter, et al. "Microfacet Models for Refraction through Rough Surfaces (http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.html)". EGSR 2007.

External links
The official project page for the Oren-Nayar model (http://www1.cs.columbia.edu/CAVE/projects/oren/) at
Shree Nayar's CAVE research group webpage (http://www.cs.columbia.edu/CAVE/)

Painter's algorithm
The painter's algorithm, also known as a priority fill, is one of the simplest solutions to the visibility problem in
3D computer graphics. When projecting a 3D scene onto a 2D plane, it is necessary at some point to decide which
polygons are visible, and which are hidden.
The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene
before parts which are nearer thereby covering some areas of distant parts. The painter's algorithm sorts all the
polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts
that are normally not visible thus solving the visibility problem at the cost of having painted invisible areas of
distant objects.
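A minimal C++ sketch of this sort-and-paint idea (the Polygon type and drawPolygon stand-in are illustrative, not any particular renderer's API):

#include <algorithm>
#include <vector>

struct Polygon {
    double depth;            // e.g. distance of the polygon's centroid from the viewer
    // vertices, colour, ...
};

void drawPolygon(const Polygon& /*p*/) { /* rasterize, overwriting what is already there */ }

void paintersAlgorithm(std::vector<Polygon> scene)
{
    // Sort by depth, farthest first, then paint in that order so nearer polygons
    // overwrite more distant ones.
    std::sort(scene.begin(), scene.end(),
              [](const Polygon& a, const Polygon& b) { return a.depth > b.depth; });
    for (const Polygon& p : scene)
        drawPolygon(p);
}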

The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the trees, are painted.


The algorithm can fail in some cases, including cyclic overlap or


piercing polygons. In the case of cyclic overlap, as shown in the figure
to the right, Polygons A, B, and C overlap each other in such a way
that it is impossible to determine which polygon is above the others. In
this case, the offending polygons must be cut to allow sorting. Newell's
algorithm, proposed in 1972, provides a method for cutting such
polygons. Numerous methods have also been proposed in the field of
computational geometry.
The case of piercing polygons arises when one polygon intersects
another. As with cyclic overlap, this problem may be resolved by
cutting the offending polygons.
Overlapping polygons can cause the algorithm to fail

In basic implementations, the painter's algorithm can be inefficient. It forces the system to render each point on every polygon in the visible set, even if that polygon is occluded in the finished scene. This means that, for detailed scenes, the painter's algorithm can overly tax the computer hardware.
A reverse painter's algorithm is sometimes used, in which objects nearest to the viewer are painted first with
the rule that paint must never be applied to parts of the image that are already painted. In a computer graphic system,
this can be very efficient, since it is not necessary to calculate the colors (using lighting, texturing and such) for parts
of the more distant scene that are hidden by nearby objects. However, the reverse algorithm suffers from many of the
same problems as the standard version.
These and other flaws with the algorithm led to the development of Z-buffer techniques, which can be viewed as a
development of the painter's algorithm, by resolving depth conflicts on a pixel-by-pixel basis, reducing the need for a
depth-based rendering order. Even in such systems, a variant of the painter's algorithm is sometimes employed. As
Z-buffer implementations generally rely on fixed-precision depth-buffer registers implemented in hardware, there is
scope for visibility problems due to rounding error. These are overlaps or gaps at joins between polygons. To avoid
this, some graphics engine implementations "overrender"[citation needed], drawing the affected edges of both polygons
in the order given by painter's algorithm. This means that some pixels are actually drawn twice (as in the full
painter's algorithm) but this happens on only small parts of the image and has a negligible performance effect.

References
Foley, James; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1990). Computer Graphics: Principles and
Practice. Reading, MA, USA: Addison-Wesley. p.1174. ISBN0-201-12110-7.


Parallax mapping
Parallax mapping (also called offset mapping or virtual displacement mapping) is an enhancement of the bump
mapping or normal mapping techniques applied to textures in 3D rendering applications such as video games. To the
end user, this means that textures such as stone walls will have more apparent depth and thus greater realism with
less of an influence on the performance of the simulation. Parallax mapping was introduced by Tomomichi Kaneko
et al., in 2001.[1]
Parallax mapping is implemented by displacing the texture coordinates at a point on the rendered polygon by a
function of the view angle in tangent space (the angle relative to the surface normal) and the value of the height map
at that point. At steeper view-angles, the texture coordinates are displaced more, giving the illusion of depth due to
parallax effects as the view changes.
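A C++ sketch of this basic offset computation (the sampleHeight callback, scale and bias are illustrative stand-ins; in practice this runs in a pixel shader with a texture lookup):

#include <functional>

struct Vec2 { float x, y; };
struct Vec3f { float x, y, z; };

// Shift the texture coordinates along the tangent-space view direction by an amount
// proportional to the height sampled at the original coordinates.
Vec2 parallaxOffset(Vec2 uv, Vec3f viewTS,
                    const std::function<float(Vec2)>& sampleHeight,
                    float scale, float bias)
{
    float h = sampleHeight(uv) * scale + bias;   // height at the unshifted texel
    // Dividing by the z component makes the shift stronger at grazing view angles,
    // which is where the parallax effect is most visible.
    return { uv.x + viewTS.x / viewTS.z * h,
             uv.y + viewTS.y / viewTS.z * h };
}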
Parallax mapping described by Kaneko is a single step process that does not account for occlusion. Subsequent
enhancements have been made to the algorithm incorporating iterative approaches to allow for occlusion and
accurate silhouette rendering.[2]

Steep parallax mapping


Steep parallax mapping is one name for the class of algorithms that trace rays against heightfields. The idea is to
walk along a ray that has entered the heightfield's volume, finding the intersection point of the ray with the
heightfield. This closest intersection is what part of the heightfield is truly visible. Relief mapping and parallax
occlusion mapping are other common names for these techniques.
Interval mapping improves on the usual binary search done in relief mapping by creating a line between known
inside and outside points and choosing the next sample point by intersecting this line with a ray, rather than using the
midpoint as in a traditional binary search.
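A C++ sketch of the marching loop behind these techniques, reusing the Vec2/Vec3f types from the sketch above (numLayers, depthScale and the height-as-depth convention are illustrative choices, and real implementations add the refinement steps described here):

#include <functional>

// Step through the height field along the tangent-space view ray in fixed increments
// until the ray drops below the sampled surface, then use that layer's coordinates.
Vec2 steepParallax(Vec2 uv, Vec3f viewTS,
                   const std::function<float(Vec2)>& sampleHeight,
                   int numLayers, float depthScale)
{
    Vec2 step{ viewTS.x / viewTS.z * depthScale / numLayers,
               viewTS.y / viewTS.z * depthScale / numLayers };
    float layerDepth = 1.0f / numLayers;

    Vec2 cur = uv;
    float rayDepth = 0.0f;
    float surfaceDepth = 1.0f - sampleHeight(cur);   // treat the height map as a depth map
    while (rayDepth < surfaceDepth && rayDepth < 1.0f) {
        cur.x -= step.x;                             // march into the surface
        cur.y -= step.y;
        rayDepth += layerDepth;
        surfaceDepth = 1.0f - sampleHeight(cur);
    }
    return cur;                                      // first sample below the height field
}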

References
[1] Kaneko, T., et al., 2001. Detailed Shape Representation with Parallax Mapping (http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=4E89F25A3EF72D1AA25CEA9767873DE5?doi=10.1.1.115.1050&rep=rep1&type=pdf). In Proceedings of ICAT 2001, pp. 205-208.
[2] Tatarchuk, N., 2005. Practical Dynamic Parallax Occlusion Mapping (http://ati.amd.com/developer/SIGGRAPH05/Tatarchuk-ParallaxOcclusionMapping-Sketch-print.pdf) Siggraph presentation

External links
Comparison from the Irrlicht Engine: With Parallax mapping (http://www.irrlicht3d.org/images/
parallaxmapping.jpg) vs. Without Parallax mapping (http://www.irrlicht3d.org/images/noparallaxmapping.
jpg)
Parallax mapping implementation in DirectX, forum topic (http://www.gamedev.net/community/forums/
topic.asp?topic_id=387447)
Parallax Mapped Bullet Holes (http://cowboyprogramming.com/2007/01/05/parallax-mapped-bullet-holes/) Details the algorithm used for F.E.A.R. style bullet holes.
Interval Mapping (http://graphics.cs.ucf.edu/IntervalMapping/)
Parallax Mapping with Offset Limiting (http://jerome.jouvie.free.fr/OpenGl/Projects/Shaders.php)
Steep Parallax Mapping (http://graphics.cs.brown.edu/games/SteepParallax/index.html)


Particle system
The term particle system refers to a computer graphics technique that
uses a large number of very small sprites or other graphic objects to
simulate certain kinds of "fuzzy" phenomena, which are otherwise very
hard to reproduce with conventional rendering techniques - usually
highly chaotic systems, natural phenomena, and/or processes caused by
chemical reactions.
Examples of such phenomena which are commonly replicated using
particle systems include fire, explosions, smoke, moving water (such
as a waterfall), sparks, falling leaves, clouds, fog, snow, dust, meteor
tails, stars and galaxies, or abstract visual effects like glowing trails,
magic spells, etc. - these use particles that fade out quickly and are then
re-emitted from the effect's source. Another technique can be used for
things that contain many strands, such as fur, hair, and grass, involving rendering an entire particle's lifetime at once, which can then
be drawn and manipulated as a single strand of the material in
question.

A particle system used to simulate a fire, created in 3dengfx.

Particle systems may be two-dimensional or three-dimensional.

Typical implementation
Typically a particle system's position and motion in 3D space are controlled by what is referred to as an emitter. The emitter acts as the source of the particles, and its location in 3D space determines where they are generated and whence they proceed. A regular 3D mesh object, such as a cube or a plane, can be used as an emitter. The emitter has attached to it a set of particle behavior parameters. These parameters can include the spawning rate (how many particles are generated per unit of time), the particles' initial velocity vector (the direction they are emitted upon creation), particle lifetime (the length of time each individual particle exists before disappearing), particle color, and many more. It is common for all or most of these parameters to be "fuzzy": instead of a precise numeric value, the artist specifies a central value and the degree of randomness allowable on either side of the center (i.e. the average particle's lifetime might be 50 frames ± 20%). When using a mesh object as an emitter, the initial velocity vector is often set to be normal to the individual face(s) of the object, making the particles appear to "spray" directly from each face.

Ad hoc particle system used to simulate a galaxy, created in 3dengfx.

A particle system used to simulate a bomb explosion, created in particleIllusion.
A typical particle system's update loop (which is performed for each frame of animation) can be separated into two
distinct stages, the parameter update/simulation stage and the rendering stage.


Simulation stage
During the simulation stage, the number of new particles that must be created is calculated based on spawning rates
and the interval between updates, and each of them is spawned in a specific position in 3D space based on the
emitter's position and the spawning area specified. Each of the particle's parameters (i.e. velocity, color, etc.) is
initialized according to the emitter's parameters. At each update, all existing particles are checked to see if they have
exceeded their lifetime, in which case they are removed from the simulation. Otherwise, the particles' position and
other characteristics are advanced based on a physical simulation, which can be as simple as translating their current
position, or as complicated as performing physically accurate trajectory calculations which take into account external
forces (gravity, friction, wind, etc.). It is common to perform collision detection between particles and specified 3D
objects in the scene to make the particles bounce off of or otherwise interact with obstacles in the environment.
Collisions between particles are rarely used, as they are computationally expensive and not visually relevant for most
simulations.
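A minimal C++ sketch of such an update loop (all names, rates and forces are illustrative):

#include <cstdlib>
#include <vector>

// Spawn particles at the emitter, age and remove expired ones, and integrate
// velocity and gravity once per frame.
struct Particle {
    float px, py, pz;     // position
    float vx, vy, vz;     // velocity
    float life;           // remaining lifetime in seconds
};

float frand(float lo, float hi) { return lo + (hi - lo) * std::rand() / float(RAND_MAX); }

void updateParticles(std::vector<Particle>& particles, float dt,
                     float spawnRate, float emitterX, float emitterY, float emitterZ)
{
    // Spawn new particles according to the emitter's spawning rate.
    int toSpawn = static_cast<int>(spawnRate * dt);
    for (int i = 0; i < toSpawn; ++i) {
        Particle p;
        p.px = emitterX; p.py = emitterY; p.pz = emitterZ;
        p.vx = frand(-1.f, 1.f); p.vy = frand(2.f, 4.f); p.vz = frand(-1.f, 1.f);
        p.life = frand(1.5f, 2.5f);          // "fuzzy" lifetime around a central value
        particles.push_back(p);
    }

    // Age, kill and integrate the existing particles.
    for (std::size_t i = 0; i < particles.size(); ) {
        Particle& p = particles[i];
        p.life -= dt;
        if (p.life <= 0.f) {                 // lifetime exceeded: remove from the simulation
            particles[i] = particles.back();
            particles.pop_back();
            continue;
        }
        p.vy -= 9.81f * dt;                  // external force (gravity)
        p.px += p.vx * dt; p.py += p.vy * dt; p.pz += p.vz * dt;
        ++i;
    }
}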

Rendering stage
After the update is complete, each particle is rendered, usually in the form of a textured billboarded quad (i.e. a
quadrilateral that is always facing the viewer). However, this is not necessary; a particle may be rendered as a single
pixel in small resolution/limited processing power environments. Particles can be rendered as Metaballs in off-line
rendering; isosurfaces computed from particle-metaballs make quite convincing liquids. Finally, 3D mesh objects
can "stand in" for the particles a snowstorm might consist of a single 3D snowflake mesh being duplicated and
rotated to match the positions of thousands or millions of particles.

"Snowflakes" versus "Hair"


Particle systems can be either animated or static; that is, the lifetime of each particle can either be distributed over
time or rendered all at once. The consequence of this distinction is similar to the difference between snowflakes and
hair - animated particles are akin to snowflakes, which move around as distinct points in space, and static particles
are akin to hair, which consists of a distinct number of curves.
The term "particle system" itself often brings to mind only the animated aspect, which is commonly used to create
moving particulate simulations sparks, rain, fire, etc. In these implementations, each frame of the animation
contains each particle at a specific position in its life cycle, and each particle occupies a single point position in
space. For effects such as fire or smoke that dissipate, each particle is given a fade out time or fixed lifetime; effects
such as snowstorms or rain instead usually terminate the lifetime of the particle once it passes out of a particular field
of view.
However, if the entire life cycle of each particle is rendered simultaneously, the result is static particles strands of
material that show the particles' overall trajectory, rather than point particles. These strands can be used to simulate
hair, fur, grass, and similar materials. The strands can be controlled with the same velocity vectors, force fields,
spawning rates, and deflection parameters that animated particles obey. In addition, the rendered thickness of the
strands can be controlled and in some implementations may be varied along the length of the strand. Different
combinations of parameters can impart stiffness, limpness, heaviness, bristliness, or any number of other properties.
The strands may also use texture mapping to vary the strands' color, length, or other properties across the emitter
surface.


A cube emitting 5000 animated particles, obeying a "gravitational" force in the negative Y direction.

The same cube emitter rendered using static particles, or strands.

Artist-friendly particle system tools


Particle systems can be created and modified natively in many 3D modeling and rendering packages including
Cinema 4D, Lightwave, Houdini, Maya, XSI, 3D Studio Max and Blender. These editing programs allow artists to
have instant feedback on how a particle system will look with properties and constraints that they specify. There is
also plug-in software available that provides enhanced particle effects.

Developer-friendly particle system tools


Particle systems code that can be included in game engines, digital content creation systems, and effects applications
can be written from scratch or downloaded. Havok provides multiple particle system APIs. Their Havok FX API
focuses especially on particle system effects. Ageia - now a subsidiary of Nvidia - provides a particle system and
other game physics API that is used in many games, including Unreal Engine 3 games. Game Maker provides a
two-dimensional particle system often used by indie, hobbyist, or student game developers, though it cannot be
imported into other engines. Many other solutions also exist, and particle systems are frequently written from scratch
if non-standard effects or behaviors are desired.

External links
Particle Systems: A Technique for Modeling a Class of Fuzzy Objects [1] William T. Reeves (ACM
Transactions on Graphics, April 1983)
The Particle Systems API [2] - David K. McAllister
The ocean spray in your face. [3] Jeff Lander (Graphic Content, July 1998)
Building an Advanced Particle System [4] John van der Burg (Gamasutra, June 2000)
Particle Engine Using Triangle Strips [5] Jeff Molofee (NeHe)
Designing an Extensible Particle System using C++ and Templates [6] Kent Lai (GameDev.net)
repository of public 3D particle scripts in LSL Second Life format [7] - Ferd Frederix
GPU-Particlesystems using WebGL [8] - Particle effects directly in the browser using WebGL for calculations.


References
[1] http://portal.acm.org/citation.cfm?id=357320
[2] http://particlesystems.org/
[3] https://www.lri.fr/~mbl/ENS/IG2/devoir2/files/docs/particles.pdf
[4] http://www.gamasutra.com/view/feature/3157/building_an_advanced_particle_.php
[5] http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19
[6] http://archive.gamedev.net/archive/reference/articles/article1982.html
[7] http://secondlife.mitsi.com/cgi/llscript.plx?Category=Particles
[8] http://www.gpu-particlesystems.de

Path tracing
Path tracing is a computer graphics method of rendering images of three dimensional scenes such that the global
illumination is faithful to reality. Fundamentally, the algorithm is integrating over all the illuminance arriving to a
single point on the surface of an object. This illuminance is then reduced by a surface reflectance function to
determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every
pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light
sources (light bulbs), and optically-correct cameras, path tracing can produce still images that are indistinguishable
from photographs.
Path tracing naturally simulates many effects that have to be specifically added to other methods (conventional ray
tracing or scanline rendering), such as soft shadows, depth of field, motion blur, caustics, ambient occlusion, and
indirect lighting. Implementation of a renderer including these effects is correspondingly simpler.
Due to its accuracy and unbiased nature, path tracing is used to generate reference images when testing the quality of
other rendering algorithms. In order to get high quality images from path tracing, a large number of rays must be
traced to avoid visible noisy artifacts.

History
The rendering equation and its use in computer graphics was presented by James Kajiya in 1986.[1] Path Tracing
was introduced then as an algorithm to find a numerical solution to the integral of the rendering equation. A decade
later, Lafortune suggested many refinements, including bidirectional path tracing.[2]
Metropolis light transport, a method of perturbing previously found paths in order to increase performance for
difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.
More recently, CPUs and GPUs have become powerful enough to render images more quickly, causing more
widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on
a GPU in 2002.[3] In February 2009 Austin Robison of Nvidia demonstrated the first commercial implementation of
a path tracer running on a GPU [4], and other implementations have followed, such as that of Vladimir Koylazov in
August 2009. [5] This was aided by the maturing of GPGPU programming toolkits such as CUDA and OpenCL and
GPU ray tracing SDKs such as OptiX.


Description
Kajiya's rendering equation adheres to three particular principles of optics; the Principle of global illumination, the
Principle of Equivalence (reflected light is equivalent to emitted light), and the Principle of Direction (reflected light
and scattered light have a direction).
In the real world, objects and surfaces are visible due to the fact that they are reflecting light. This reflected light then
illuminates other objects in turn. From that simple observation, two principles follow.
I. For a given indoor scene, every object in the room must contribute illumination to every other object.
II. Second, there is no distinction to be made between illumination emitted from a light source and illumination
reflected from a surface.
Invented in 1984, a rather different method called radiosity was faithful to both principles. However, radiosity relates
the total illuminance falling on a surface with a uniform luminance that leaves the surface. This forced all surfaces to
be Lambertian, or "perfectly diffuse". While radiosity received a lot of attention at its invocation, perfectly diffuse
surfaces do not exist in the real world. The realization that scattering from a surface depends on both incoming and
outgoing directions is the key principle behind the Bidirectional Reflectance Distribution Function (BRDF). This
direction dependence was a focus of research throughout the 1990s, since accounting for direction always exacted a
price of steep increases in calculation times on desktop computers. Principle III follows.
III. The illumination coming from surfaces must scatter in a particular direction that is some function of the
incoming direction of the arriving illumination, and the outgoing direction being sampled.
Kajiya's equation is a complete summary of these three principles, and path tracing, which approximates a solution to
the equation, remains faithful to them in its implementation. There are other principles of optics which are not the
focus of Kajiya's equation, and therefore are often difficult or incorrectly simulated by the algorithm. Path Tracing is
confounded by optical phenomena not contained in the three principles. For example,
Bright, sharp caustics; radiance scales by the density of illuminance in space.
Subsurface scattering; a violation of principle III above.
Chromatic aberration, fluorescence, iridescence; light is a spectrum of frequencies.

Bidirectional path tracing


Sampling the integral for a point can be done by solely gathering from the surface, or by solely shooting rays from
light sources. (1) Shooting rays from the light sources and creating paths in the scene. The path is cut off at a
random number of bouncing steps and the resulting light is sent through the projected pixel on the output image.
During rendering, billions of paths are created, and the output image is the mean of every pixel that received some
contribution. (2) Gathering rays from a point on a surface. A ray is projected from the surface to the scene in a
bouncing path that terminates when a light source is intersected. The light is then sent backwards through the path
and to the output pixel. The creation of a single path is called a "sample". For a single point on a surface,
approximately 800 samples (up to as many as 3 thousand samples) are taken. The final output of the pixel is the
arithmetic mean of all those samples, not the sum.
Bidirectional Path Tracing combines both Shooting and Gathering in the same algorithm to obtain faster
convergence of the integral. A shooting path and a gathering path are traced independently, and then the head of the
shooting path is connected to the tail of the gathering path. The light is then attenuated at every bounce and back out
into the pixel. This technique at first seems paradoxically slower, since for every gathering sample we additionally
trace a whole shooting path. In practice however, the extra speed of convergence far outweighs any performance loss
from the extra ray casts on the shooting side.
The following pseudocode is a procedure for performing naive path tracing. This function calculates a single sample
of a pixel, where only the Gathering Path is considered.


Color TracePath(Ray r, int depth) {
  if (depth == MaxDepth) {
    return Black;  // Bounced enough times.
  }

  r.FindNearestObject();
  if (r.hitSomething == false) {
    return Black;  // Nothing was hit.
  }

  Material m = r.thingHit->material;
  Color emittance = m.emittance;

  // Pick a random direction from here and keep going.
  Ray newRay;
  newRay.origin = r.pointWhereObjWasHit;
  newRay.direction = RandomUnitVectorInHemisphereOf(r.normalWhereObjWasHit);  // This is NOT a cosine-weighted distribution!

  // Compute the BRDF for this ray (assuming Lambertian reflection).
  float cos_theta = DotProduct(newRay.direction, r.normalWhereObjWasHit);
  Color BRDF = 2 * m.reflectance * cos_theta;
  Color reflected = TracePath(newRay, depth + 1);

  // Apply the Rendering Equation here.
  return emittance + (BRDF * reflected);
}

All these samples must then be averaged to obtain the output color. Note this method of always sampling a random
ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has
to use importance-sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance,
a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being
the correct reflected ray - which is the only ray through which any radiance will be reflected - is zero. In these
situations, one must divide the reflectance by the probability density function of the sampling scheme, as per
Monte-Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be
1).
There are other considerations to take into account to ensure conservation of energy. In particular, in the naive case, the reflectance of a diffuse BRDF must not exceed 1 or the object will reflect more light than it receives (this however depends on the sampling scheme used, and can be difficult to get right).
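For the Lambertian case, this importance sampling is usually done with a cosine-weighted hemisphere distribution, as in the following C++ sketch (illustrative; the basis construction and choice of random number generator are arbitrary). Because the sampling density is cos(θ)/π, the weight BRDF · cos(θ) / pdf reduces to the plain reflectance, so the cosine no longer appears explicitly in the estimator:

#include <cmath>
#include <random>

struct V3 { double x, y, z; };

// Sample a direction from a cosine-weighted distribution over the hemisphere
// around the unit surface normal n (Malley's method: sample a disc, project up).
V3 cosineSampleHemisphere(const V3& n, std::mt19937& rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u1 = uni(rng), u2 = uni(rng);
    double r = std::sqrt(u1);                                 // radius on the unit disc
    double phi = 2.0 * 3.14159265358979323846 * u2;
    double lx = r * std::cos(phi), ly = r * std::sin(phi);
    double lz = std::sqrt(std::max(0.0, 1.0 - u1));           // lift onto the hemisphere

    // Build an orthonormal basis (t, b, n) around the surface normal.
    V3 t = std::fabs(n.x) > 0.1 ? V3{ n.z, 0.0, -n.x } : V3{ 0.0, -n.z, n.y };
    double tl = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = { t.x / tl, t.y / tl, t.z / tl };
    V3 b = { n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x };

    return { lx * t.x + ly * b.x + lz * n.x,
             lx * t.y + ly * b.y + lz * n.y,
             lx * t.z + ly * b.z + lz * n.z };
}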


Performance
A path tracer continuously samples pixels of an image. The image starts to become recognisable after only a few
samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually
takes around 5000 samples for most images, and many more for pathological cases. Noise is particularly a problem
for animations, giving them a normally-unwanted "film-grain" quality of random speckling.
The central performance bottleneck in Path Tracing is the complex geometrical calculation of casting a ray.
Importance Sampling is a technique which is motivated to cast less rays through the scene while still converging
correctly to outgoing luminance on the surface point. This is done by casting more rays in directions in which the
luminance would have been greater anyway. If the density of rays cast in certain directions matches the strength of
contributions in those directions, the result is identical, but far fewer rays were actually cast. Importance Sampling is
used to match ray density to Lambert's Cosine law, and also used to match BRDFs.
Metropolis light transport can result in a lower-noise image with fewer samples. This algorithm was created in order
to get faster convergence in scenes in which the light must pass through odd corridors or small holes in order to
reach the part of the scene that the camera is viewing. It has also shown promise in correctly rendering pathological
situations with caustics. Instead of generating random paths, new sampling paths are created as slight mutations of
existing ones. In this sense, the algorithm "remembers" the successful paths from light sources to the camera.

Scattering distribution functions


The reflective properties (amount, direction and colour) of surfaces are
modelled using BRDFs. The equivalent for transmitted light (light that
goes through the object) are BSDFs. A path tracer can take full
advantage of complex, carefully modelled or measured distribution
functions, which controls the appearance ("material", "texture" or
"shading" in computer graphics terms) of an object.

In real time
An example of an advanced path-tracing engine capable of real-time graphics is Brigade [6] by Jacco Bikker. The first version of this highly optimized, game-oriented engine was released on January 26th, 2012. It is the successor of the Arauna real-time ray-tracing engine, made by the same author, and it requires the CUDA architecture (by Nvidia) to run.

Scattering distribution functions

Notes
[6] http://igad.nhtv.nl/~bikker/

1. ^ Kajiya, J. T. (1986). "The rendering equation". Proceedings of the 13th annual conference on Computer
graphics and interactive techniques. ACM. CiteSeerX: 10.1.1.63.1402 (http://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.63.1402).
2. ^ Lafortune, E, Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering (http://
www.graphics.cornell.edu/~eric/thesis/index.html), (PhD thesis), 1996.

3. ^ Purcell, T J; Buck, I; Mark, W; and Hanrahan, P, "Ray Tracing on Programmable Graphics Hardware", Proc.
SIGGRAPH 2002, 703 - 712. See also Purcell, T, Ray tracing on a stream processor (http://graphics.stanford.
edu/papers/tpurcell_thesis/) (PhD thesis), 2004.
4. ^ Robison, Austin, "Interactive Ray Tracing on the GPU and NVIRT Overview" (http://realtimerendering.com/
downloads/NVIRT-Overview.pdf), slide 37, I3D 2009.
5. ^ Vray demo (http://www.youtube.com/watch?v=eRoSFNRQETg); Other examples include Octane Render,
Arion, and Luxrender.
6. ^ Veach, E., and Guibas, L. J. Metropolis light transport (http://graphics.stanford.edu/papers/metro/metro.pdf). In SIGGRAPH '97 (August 1997), pp. 65-76.
7. This "Introduction to Global Illumination" (http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm)
has some good example images, demonstrating the image noise, caustics and indirect lighting properties of
images rendered with path tracing methods. It also discusses possible performance improvements in some detail.
8. SmallPt (http://www.kevinbeason.com/smallpt/) is an educational path tracer by Kevin Beason. It uses 99
lines of C++ (including scene description). This page has a good set of examples of noise resulting from this
technique.

Per-pixel lighting
In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates
illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as
vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values
over the model's faces to calculate the final per-pixel color values.
Per-pixel lighting is commonly used with techniques like normal mapping, bump mapping, specularity, and shadow
volumes. Each of these techniques provides some additional data about the surface being lit or the scene and light
sources that contributes to the final look and feel of the surface.
Most modern video game engines implement lighting using per-pixel techniques instead of vertex lighting to achieve
increased detail and realism. The id Tech 4 engine, used to develop such games as Brink and Doom 3, was one of the
first game engines to implement a completely per-pixel shading engine. All versions of the CryENGINE, Frostbite
Engine, and Unreal Engine, among others, also implement per-pixel shading techniques.
Deferred shading is a recent development in per-pixel lighting notable for its use in the Frostbite Engine and
Battlefield 3. Deferred shading techniques are capable of rendering potentially large numbers of small lights
inexpensively (other per-pixel lighting approaches require full-screen calculations for each light in a scene,
regardless of size).

History
While only recently have personal computers and video hardware become powerful enough to perform full per-pixel
shading in real-time applications such as games, many of the core concepts used in per-pixel lighting models have
existed for decades.
Frank Crow published a paper describing the theory of shadow volumes in 1977.[1] This technique uses the stencil
buffer to specify areas of the screen that correspond to surfaces that lie in a "shadow volume", or a shape
representing a volume of space eclipsed from a light source by some object. These shadowed areas are typically
shaded after the scene is rendered to buffers by storing shadowed areas with the stencil buffer.
Jim Blinn first introduced the idea of normal mapping in a 1978 SIGGRAPH paper.[2] Blinn pointed out that the
earlier idea of unlit texture mapping proposed by Edwin Catmull was unrealistic for simulating rough surfaces.
Instead of mapping a texture onto an object to simulate roughness, Blinn proposed a method of calculating the
degree of lighting a point on a surface should receive based on an established "perturbation" of the normals across
the surface.

Implementations
Hardware Rendering
Real-time applications, such as computer games, usually implement per-pixel lighting through the use of pixel
shaders, allowing the GPU hardware to process the effect. The scene to be rendered is first rasterized onto a number
of buffers storing different types of data to be used in rendering the scene, such as depth, normal direction, and
diffuse color. Then, the data is passed into a shader and used to compute the final appearance of the scene,
pixel-by-pixel.
Deferred shading is a per-pixel shading technique that has recently become feasible for games.[3] With deferred
shading, a "g-buffer" is used to store all terms needed to shade a final scene on the pixel level. The format of this
data varies from application to application depending on the desired effect, and can include normal data, positional
data, specular data, diffuse data, emissive maps and albedo, among others. Using multiple render targets, all of this
data can be rendered to the g-buffer with a single pass, and a shader can calculate the final color of each pixel based
on the data from the g-buffer in a final "deferred pass".
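As a rough, CPU-side illustration of such a deferred pass (a real implementation runs in a pixel shader; the g-buffer layout and dictionary keys below are only an assumed example, with one position, normal and diffuse albedo per pixel and a simple diffuse light sum):

    def deferred_pass(gbuffer, lights):
        # gbuffer: 2D list of dicts with 'position', 'normal' and 'albedo' per pixel
        # lights:  list of dicts with 'position', 'color' and 'intensity'
        image = []
        for row in gbuffer:
            out_row = []
            for px in row:
                color = (0.0, 0.0, 0.0)
                n, p = px['normal'], px['position']
                for light in lights:
                    # Direction and distance from the surface point to the light.
                    lx, ly, lz = (light['position'][i] - p[i] for i in range(3))
                    dist = max(1e-6, (lx * lx + ly * ly + lz * lz) ** 0.5)
                    l = (lx / dist, ly / dist, lz / dist)
                    ndotl = max(0.0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2])
                    atten = light['intensity'] / (dist * dist)   # simple falloff
                    color = tuple(color[i] + px['albedo'][i] * light['color'][i]
                                  * ndotl * atten for i in range(3))
                out_row.append(color)
            image.append(out_row)
        return image

Because the loop only touches pixels and lights, its cost scales with the pixels each light actually affects rather than with scene complexity, which is the property that makes many small lights comparatively cheap.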

Software Rendering
Per-pixel lighting is also performed in software on many high-end commercial rendering applications which
typically do not render at interactive framerates. This is called offline rendering or software rendering. Nvidia's
mental ray rendering software, which is integrated with suites such as Autodesk's Softimage, is a well-known
example.

Notes
[1] Crow, Franklin C: "Shadow Algorithms for Computer Graphics", Computer Graphics (SIGGRAPH '77 Proceedings), vol. 11, no. 2, 242–248.
[2] Blinn, James F. "Simulation of Wrinkled Surfaces", Computer Graphics (SIGGRAPH '78 Proceedings), vol. 12, no. 3, 286–292.
[3] Hargreaves, Shawn and Mark Harris: "6800 Leagues Under the Sea: Deferred Shading". NVidia Developer Assets.


Phong reflection model


The Phong reflection model (also called Phong illumination or Phong lighting) is an empirical model of the local
illumination of points on a surface. In 3D computer graphics, it is sometimes ambiguously referred to as Phong
shading, in particular if the model is used in combination with the interpolation method of the same name and in the
context of pixel shaders or other places where a lighting calculation can be referred to as shading.

History
The Phong reflection model was developed by Bui Tuong Phong at the University of Utah, who published it in his
1973 Ph.D. dissertation.[1][2] It was published in conjunction with a method for interpolating the calculation for each
individual pixel that is rasterized from a polygonal surface model; the interpolation technique is known as Phong
shading, even when it is used with a reflection model other than Phong's. Phong's methods were considered radical at
the time of their introduction, but have evolved into a baseline shading method for many rendering applications.
Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel.

Description
Phong reflection is an empirical model of local illumination. It describes the way a surface reflects light as a
combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on
Bui Tuong Phong's informal observation that shiny surfaces have small intense specular highlights, while dull
surfaces have large highlights that fall off more gradually. The model also includes an ambient term to account for
the small amount of light that is scattered about the entire scene.

Visual illustration of the Phong equation: here the light is white, the ambient and diffuse colors are both blue, and the specular color is white,
reflecting a small part of the light hitting the surface, but only in very narrow highlights. The intensity of the diffuse component varies with the
direction of the surface, and the ambient component is uniform (independent of direction).

For each light source in the scene, components $i_s$ and $i_d$ are defined as the intensities (often as RGB values) of the
specular and diffuse components of the light sources, respectively. A single term $i_a$ controls the ambient lighting; it
is sometimes computed as a sum of contributions from all light sources.

For each material in the scene, the following parameters are defined:
$k_s$, which is a specular reflection constant, the ratio of reflection of the specular term of incoming light,
$k_d$, which is a diffuse reflection constant, the ratio of reflection of the diffuse term of incoming light
(Lambertian reflectance),
$k_a$, which is an ambient reflection constant, the ratio of reflection of the ambient term present in all points in
the scene rendered, and
$\alpha$, which is a shininess constant for this material, which is larger for surfaces that are smoother and more
mirror-like. When this constant is large the specular highlight is small.

Furthermore, we have
$\mathrm{lights}$, which is the set of all light sources,
$\hat{L}_m$, which is the direction vector from the point on the surface toward each light source ($m$ specifies the
light source),
$\hat{N}$, which is the normal at this point on the surface,
$\hat{R}_m$, which is the direction that a perfectly reflected ray of light would take from this point on the surface, and
$\hat{V}$, which is the direction pointing towards the viewer (such as a virtual camera).

Then the Phong reflection model provides an equation for computing the illumination of each surface point $I_p$:

$I_p = k_a i_a + \sum_{m \in \mathrm{lights}} \left( k_d (\hat{L}_m \cdot \hat{N}) i_{m,d} + k_s (\hat{R}_m \cdot \hat{V})^{\alpha} i_{m,s} \right)$

where the direction vector $\hat{R}_m$ is calculated as the reflection of $\hat{L}_m$ on the surface characterized by the surface
normal $\hat{N}$ using

$\hat{R}_m = 2 (\hat{L}_m \cdot \hat{N}) \hat{N} - \hat{L}_m$

and the hats indicate that the vectors are normalized. The diffuse term is not affected by the viewer direction ($\hat{V}$).
The specular term is large only when the viewer direction ($\hat{V}$) is aligned with the reflection direction ($\hat{R}_m$). Their
alignment is measured by the $\alpha$ power of the cosine of the angle between them. The cosine of the angle between the
normalized vectors $\hat{R}_m$ and $\hat{V}$ is equal to their dot product. When $\alpha$ is large, in the case of a nearly mirror-like
reflection, the specular highlight will be small, because any viewpoint not aligned with the reflection will have a
cosine less than one which rapidly approaches zero when raised to a high power.

Although the above formulation is the common way of presenting the Phong reflection model, each term should only
be included if the term's dot product is positive. (Additionally, the specular term should only be included if the dot
product of the diffuse term is positive.)

When the colors are represented as RGB values, as often is the case in computer graphics, this equation is typically
modeled separately for R, G and B intensities, allowing different reflection constants $k_a$, $k_d$ and $k_s$ for the
different color channels.
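As a concrete illustration, the sketch below evaluates the model for a single surface point and a single colour channel; the vector helpers are written out by hand and the parameter names simply follow the definitions above:

    def normalize(v):
        n = (v[0]*v[0] + v[1]*v[1] + v[2]*v[2]) ** 0.5
        return (v[0]/n, v[1]/n, v[2]/n)

    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def phong(N, V, lights, ka, kd, ks, alpha, ia):
        # N: surface normal, V: direction to the viewer (both normalized below)
        # lights: list of (L, i_d, i_s) with L the direction toward the light source
        N, V = normalize(N), normalize(V)
        Ip = ka * ia                                   # ambient term
        for L, i_d, i_s in lights:
            L = normalize(L)
            ndotl = dot(N, L)
            if ndotl > 0.0:                            # include terms only when positive
                Ip += kd * ndotl * i_d                 # diffuse term
                # R is the reflection of L about the normal N
                R = tuple(2.0 * ndotl * N[i] - L[i] for i in range(3))
                rdotv = dot(R, V)
                if rdotv > 0.0:
                    Ip += ks * (rdotv ** alpha) * i_s  # specular term
        return Ip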

Computationally more efficient alterations

When implementing the Phong reflection model, there are a number of methods for approximating the model, rather
than implementing the exact formulas, which can speed up the calculation; for example, the Blinn–Phong reflection
model is a modification of the Phong reflection model, which is more efficient if the viewer and the light source are
treated to be at infinity.

Another approximation also addresses the computation of the specular term, since the calculation of the power term
may be computationally expensive. Considering that the specular term should be taken into account only if its dot
product is positive, it can be approximated by realizing that

$(\hat{R}_m \cdot \hat{V})^{\alpha} = (1 - \lambda)^{\alpha} \approx (1 - \beta \lambda)^{\gamma}$, with $\lambda = 1 - \hat{R}_m \cdot \hat{V}$,

for a sufficiently large, fixed integer $\gamma$ (typically 4 will be enough), where $\beta = \alpha / \gamma$ is a real number (not
necessarily an integer). The value $\lambda$ can be further approximated as
$\lambda \approx \tfrac{1}{2}\,(\hat{R}_m - \hat{V}) \cdot (\hat{R}_m - \hat{V})$; this squared distance between the vectors $\hat{R}_m$ and $\hat{V}$ is much less sensitive to
normalization errors in those vectors than is Phong's dot-product-based $\lambda = 1 - \hat{R}_m \cdot \hat{V}$.

The $\gamma$ value can be chosen to be a fixed power of 2, $\gamma = 2^n$, where $n$ is a small integer; then the expression
$(1 - \beta \lambda)^{\gamma}$ can be efficiently calculated by squaring $(1 - \beta \lambda)$ $n$ times. Here the shininess parameter is $\beta$,
proportional to the original parameter $\alpha$.


This method substitutes a few multiplications for a variable exponentiation, and removes the need for an accurate
reciprocal-square-root-based vector normalization.
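A sketch of that scheme, assuming γ is chosen as a power of two so the exponentiation can be replaced by repeated squaring (the numbers in the comparison are arbitrary):

    def approx_specular(rdotv, beta, n):
        # Approximates (R.V)^alpha by (1 - beta*lambda)^gamma with gamma = 2**n
        # and lambda = 1 - R.V, where beta = alpha / gamma is the rescaled
        # shininess; the power of two lets us exponentiate by squaring n times.
        lam = 1.0 - rdotv
        value = max(0.0, 1.0 - beta * lam)
        for _ in range(n):
            value *= value          # squaring n times raises to the power 2**n
        return value

    # Rough comparison against the exact power for alpha = 32 (gamma = 8, beta = 4):
    print(approx_specular(0.95, 4.0, 3), 0.95 ** 32)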

Inverse Phong reflection model


The Phong reflection model in combination with Phong shading is an approximation of shading of objects in real
life. This means that the Phong equation can relate the shading seen in a photograph with the surface normals of the
visible object. Inverse refers to the wish to estimate the surface normals given a rendered image, natural or
computer-made.
The Phong reflection model contains many parameters, such as the surface diffuse reflection parameter (albedo),
which may vary within the object. Thus the normals of an object in a photograph can only be determined by
introducing additional information such as the number of lights, light directions and reflection parameters.

For example, suppose we have a cylindrical object, for instance a finger, and wish to compute the normal $N = [N_x, N_z]$
on a line on the object. We assume only one light, no specular reflection, and uniform known (approximated) reflection
parameters. We can then simplify the Phong equation to:

$I_p(x) = C_a + C_d (L(x) \cdot N(x))$

with $C_a$ a constant equal to the ambient light and $C_d$ a constant equal to the diffuse reflection. We can re-write
the equation to:

$(I_p(x) - C_a) / C_d = L(x) \cdot N(x)$

which can be rewritten for a line through the cylindrical object as:

$(I_p - C_a) / C_d = L_x N_x + L_z N_z$

For instance, if the light direction is 45 degrees above the object, $L = [0.71, 0.71]$, we get two equations with two
unknowns.

Because of the powers of two in the equation there are two possible solutions for the normal direction. Thus some
prior information of the geometry is needed to define the correct normal direction. The normals are directly related to
angles of inclination of the line on the object surface. Thus the normals allow the calculation of the relative surface
heights of the line on the object using a line integral, if we assume a continuous surface.
If the object is not cylindrical, we have three unknown normal values $N = [N_x, N_y, N_z]$. Then the two equations
still allow the normal to rotate around the view vector, thus additional constraints are needed from prior geometric
information. For instance in face recognition those geometric constraints can be obtained using principal component
analysis (PCA) on a database of depth-maps of faces, allowing only surface normals solutions which are found in a
normal population.
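Returning to the cylindrical example, the two equations are the simplified Phong equation along the line and the unit-length constraint on the normal. A small numeric sketch, with made-up values for Ca, Cd and the measured intensity, yields the two candidate normals:

    import math

    Ca, Cd = 0.1, 0.8               # assumed ambient and diffuse constants
    Lx, Lz = 0.71, 0.71             # light 45 degrees above the object
    I = 0.55                        # assumed measured intensity at one point

    # Equation 1: (I - Ca) / Cd = Lx*Nx + Lz*Nz
    # Equation 2: Nx**2 + Nz**2 = 1  (the normal is a unit vector)
    s = (I - Ca) / Cd               # = Lx*Nx + Lz*Nz
    # Substitute Nz = (s - Lx*Nx) / Lz into equation 2 and solve the quadratic:
    a = 1.0 + (Lx / Lz) ** 2
    b = -2.0 * s * Lx / Lz ** 2
    c = (s / Lz) ** 2 - 1.0
    disc = b * b - 4.0 * a * c
    for sign in (+1.0, -1.0):       # the squares give two possible normal directions
        Nx = (-b + sign * math.sqrt(disc)) / (2.0 * a)
        Nz = (s - Lx * Nx) / Lz
        print(Nx, Nz)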


Applications
As already implied, the Phong reflection model is often used together with Phong shading to shade surfaces in 3D
computer graphics software. Apart from this, it may also be used for other purposes. For example, it has been used to
model the reflection of thermal radiation from the Pioneer probes in an attempt to explain the Pioneer anomaly.

External links
Phong reflection model in Matlab [3]

References
[1] Bui Tuong Phong, Illumination for computer generated pictures (http://www.cs.northwestern.edu/~ago820/cs395/Papers/Phong_1975.pdf), Communications of ACM 18 (1975), no. 6, 311–317.
[2] University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref
[3] http://michal.is/projects/phong-reflection-model-matlab/

Phong shading
Phong shading refers to an interpolation technique for surface shading in 3D computer graphics. It is also called
Phong interpolation or normal-vector interpolation shading. Specifically, it interpolates surface normals across
rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong
shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.

History
Phong shading and the Phong reflection model were developed at the University of Utah by Bui Tuong Phong, who
published them in his 1973 Ph.D. dissertation.[1][2] Phong's methods were considered radical at the time of their
introduction, but have evolved into a baseline shading method for many rendering applications. Phong's methods
have proven popular due to their generally efficient use of computation time per rendered pixel.

Phong interpolation
Phong shading improves upon
Gouraud shading and provides a better
approximation of the shading of a
smooth surface. Phong shading
assumes a smoothly varying surface
normal vector. The Phong interpolation
method works better than Gouraud
shading when applied to a reflection
model that has small specular
highlights such as the Phong reflection
model.

Phong shading interpolation example

The most serious problem with Gouraud shading occurs when specular highlights are found in the middle of a large
polygon. Since these specular highlights are absent from the polygon's vertices and Gouraud shading interpolates
based on the vertex colors, the specular highlight will be missing from the polygon's interior. This problem is fixed
by Phong shading.

Unlike Gouraud shading, which interpolates colors across polygons, in Phong shading a normal vector is linearly
interpolated across the surface of the polygon from the polygon's vertex normals. The surface normal is interpolated
and normalized at each pixel and then used in a reflection model, e.g. the Phong reflection model, to obtain the final
pixel color. Phong shading is more computationally expensive than Gouraud shading since the reflection model must
be computed at each pixel instead of at each vertex.
In modern graphics hardware, variants of this algorithm are implemented using pixel or fragment shaders.
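In outline, the per-pixel step looks like the sketch below; the barycentric weights would come from the rasterizer, and reflection_model stands for any reflection model, for example a Phong routine (the names are illustrative):

    def shade_pixel(bary, vertex_normals, reflection_model, **params):
        # bary: barycentric weights (w0, w1, w2) of the pixel inside the triangle
        # vertex_normals: the three vertex normals n0, n1, n2
        w0, w1, w2 = bary
        n0, n1, n2 = vertex_normals
        # Linearly interpolate the normal across the polygon...
        n = tuple(w0 * n0[i] + w1 * n1[i] + w2 * n2[i] for i in range(3))
        # ...and renormalize it, since the interpolated vector is generally
        # shorter than unit length.
        length = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]) ** 0.5
        n = (n[0] / length, n[1] / length, n[2] / length)
        # Evaluate the reflection model with the per-pixel normal.
        return reflection_model(n, **params)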

Phong reflection model


Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model,
which is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the
diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Bui Tuong Phong's
informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large
highlights that fall off more gradually. The reflection model also includes an ambient term to account for the small
amount of light that is scattered about the entire scene.

Visual illustration of the Phong equation: here the light is white, the ambient and diffuse colors are both blue, and the specular color is white,
reflecting a small part of the light hitting the surface, but only in very narrow highlights. The intensity of the diffuse component varies with the
direction of the surface, and the ambient component is uniform (independent of direction).

References
[1] B. T. Phong, Illumination for computer generated pictures, Communications of ACM 18 (1975), no. 6, 311–317.
[2] University of Utah School of Computing, http:/ / www. cs. utah. edu/ school/ history/ #phong-ref


Photon mapping
In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann
Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are
traced independently until some termination criterion is met, then they are connected in a second step to produce a
radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is
capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse
interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of
the effects caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate
simulations of light such as spectral rendering.
Unlike path tracing, bidirectional path tracing and Metropolis light transport, photon mapping is a "biased" rendering
algorithm, which means that averaging many renders using this method does not converge to a correct solution to the
rendering equation. However, since it is a consistent method, a correct solution can be achieved by increasing the
number of photons.

Effects
Caustics
Light refracted or reflected causes patterns called caustics, usually
visible as concentrated patches of light on nearby surfaces. For
example, as light rays pass through a wine glass sitting on a table, they
are refracted and patterns of light are visible on the table. Photon
mapping can trace the paths of individual photons to model where
these concentrated patches of light will appear.

A model of a wine glass ray traced with photon mapping to show caustics.

Diffuse interreflection
Diffuse interreflection is apparent when light from one diffuse object is reflected onto another. Photon mapping is
particularly adept at handling this effect because the algorithm reflects photons from one surface to another based on
that surface's bidirectional reflectance distribution function (BRDF), and thus light from one object striking another
is a natural result of the method. Diffuse interreflection was first modeled using radiosity solutions. Photon mapping
differs though in that it separates the light transport from the nature of the geometry in the scene. Color bleed is an
example of diffuse interreflection.


Subsurface scattering
Subsurface scattering is the effect evident when light enters a material and is scattered before being absorbed or
reflected in a different direction. Subsurface scattering can accurately be modeled using photon mapping. This was
the original way Jensen implemented it; however, the method becomes slow for highly scattering materials, and
bidirectional surface scattering reflectance distribution functions (BSSRDFs) are more efficient in these situations.

Usage
Construction of the photon map (1st pass)
With photon mapping, light packets called photons are sent out into the scene from the light sources. Whenever a
photon intersects with a surface, the intersection point and incoming direction are stored in a cache called the photon
map. Typically, two photon maps are created for a scene: one especially for caustics and a global one for other light.
After intersecting the surface, a probability for either reflecting, absorbing, or transmitting/refracting is given by the
material. A Monte Carlo method called Russian roulette is used to choose one of these actions. If the photon is
absorbed, no new direction is given, and tracing for that photon ends. If the photon reflects, the surface's
bidirectional reflectance distribution function is used to determine the ratio of reflected radiance. Finally, if the
photon is transmitting, a function for its direction is given depending upon the nature of the transmission.
Once the photon map is constructed (or during construction), it is typically arranged in a manner that is optimal for
the k-nearest neighbor algorithm, as photon look-up time depends on the spatial distribution of the photons. Jensen
advocates the usage of kd-trees. The photon map is then stored on disk or in memory for later usage.
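In outline, the first pass can be sketched as follows; emit(), trace(), the material probabilities and the sampling helpers are placeholders, and a production implementation (such as Jensen's) differs in many details, for example by keeping separate caustic and global maps:

    import random

    def build_photon_map(lights, scene, photons_per_light):
        photon_map = []                               # later arranged in a kd-tree
        for light in lights:
            for _ in range(photons_per_light):
                pos, direction, power = light.emit()          # placeholder emitter
                while True:
                    hit = scene.trace(pos, direction)         # placeholder ray cast
                    if hit is None:
                        break                                 # photon left the scene
                    # Store the intersection point and incoming direction.
                    photon_map.append((hit.point, direction, power))
                    # Russian roulette picks reflection, transmission or absorption
                    # with probabilities given by the material.
                    r = random.random()
                    if r < hit.material.p_reflect:
                        direction = hit.material.sample_brdf(direction, hit.normal)
                    elif r < hit.material.p_reflect + hit.material.p_transmit:
                        direction = hit.material.sample_transmission(direction, hit.normal)
                    else:
                        break                                 # absorbed
                    pos = hit.point
        return photon_map             # typically sorted into a kd-tree at this point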

Rendering (2nd pass)


In this step of the algorithm, the photon map created in the first pass is used to estimate the radiance of every pixel of
the output image. For each pixel, the scene is ray traced until the closest surface of intersection is found.
At this point, the rendering equation is used to calculate the surface radiance leaving the point of intersection in the
direction of the ray that struck it. To facilitate efficiency, the equation is decomposed into four separate factors:
direct illumination, specular reflection, caustics, and soft indirect illumination.
For an accurate estimate of direct illumination, a ray is traced from the point of intersection to each light source. As
long as a ray does not intersect another object, the light source is used to calculate the direct illumination. For an
approximate estimate of indirect illumination, the photon map is used to calculate the radiance contribution.
Specular reflection can be, in most cases, calculated using ray tracing procedures (as it handles reflections well).
The contribution to the surface radiance from caustics is calculated using the caustics photon map directly. The
number of photons in this map must be sufficiently large, as the map is the only source for caustics information in
the scene.
For soft indirect illumination, radiance is calculated using the photon map directly. This contribution, however, does
not need to be as accurate as the caustics contribution and thus uses the global photon map.

Calculating radiance using the photon map
In order to calculate surface radiance at an intersection point, one of the cached photon maps is used. The steps are:
1. Gather the N nearest photons using the nearest neighbor search function on the photon map.
2. Let S be the sphere that contains these N photons.
3. For each photon, divide the amount of flux (real photons) that the photon represents by the area of S and multiply
by the BRDF applied to that photon.
4. The sum of those results for each photon represents total surface radiance returned by the surface intersection in
the direction of the ray that struck it.
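A direct transcription of these four steps, assuming a nearest_photons() query on the photon map and a brdf() method on the surface (both placeholders, with photon power and the BRDF treated as RGB triples):

    import math

    def estimate_radiance(photon_map, point, outgoing_dir, surface, N=100):
        # Step 1: gather the N photons nearest to the intersection point.
        photons = photon_map.nearest_photons(point, N)         # placeholder query
        # Step 2: S is the sphere containing them; its radius is the distance to
        # the farthest gathered photon.
        radius = max(distance(point, p.position) for p in photons)
        area = math.pi * radius * radius           # projected area of the sphere S
        radiance = (0.0, 0.0, 0.0)
        for p in photons:
            # Step 3: divide each photon's flux by the area, weighted by the BRDF.
            f = surface.brdf(p.incoming_dir, outgoing_dir)
            radiance = tuple(radiance[i] + f[i] * p.power[i] / area
                             for i in range(3))
        # Step 4: the sum is the radiance leaving the surface along the ray.
        return radiance

    def distance(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5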

Optimizations
To avoid emitting unneeded photons, the initial direction of the outgoing photons is often constrained. Instead of
simply sending out photons in random directions, they are sent in the direction of a known object that is a desired
photon manipulator to either focus or diffuse the light. There are many other refinements that can be made to the
algorithm: for example, choosing the number of photons to send, and where and in what pattern to send them. It
would seem that emitting more photons in a specific direction would cause a higher density of photons to be
stored in the photon map around the position where the photons hit, and thus measuring this density would give
an inaccurate value for irradiance. This is true; however, the algorithm used to compute radiance does not depend
on irradiance estimates.
For soft indirect illumination, if the surface is Lambertian, then a technique known as irradiance caching may be
used to interpolate values from previous calculations.
To avoid unnecessary collision testing in direct illumination, shadow photons can be used. During the photon
mapping process, when a photon strikes a surface, in addition to the usual operations performed, a shadow photon
is emitted in the same direction the original photon came from that goes all the way through the object. The next
object it collides with causes a shadow photon to be stored in the photon map. Then during the direct illumination
calculation, instead of sending out a ray from the surface to the light that tests collisions with objects, the photon
map is queried for shadow photons. If none are present, then the object has a clear line of sight to the light source
and additional calculations can be avoided.
To optimize image quality, particularly of caustics, Jensen recommends use of a cone filter. Essentially, the filter
gives weight to photons' contributions to radiance depending on how far they are from ray-surface intersections.
This can produce sharper images.
Image space photon mapping [1] achieves real-time performance by computing the first and last scattering using a
GPU rasterizer.

Variations
Although photon mapping was designed to work primarily with ray tracers, it can also be extended for use with
scanline renderers.

External links

Global Illumination using Photon Maps [2]


Realistic Image Synthesis Using Photon Mapping [3] ISBN 1-56881-147-0
Photon mapping introduction [4] from Worcester Polytechnic Institute
Bias in Rendering [5]
Siggraph Paper [6]


References
[1] http://research.nvidia.com/publication/hardware-accelerated-global-illumination-image-space-photon-mapping
[2] http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf
[3] http://graphics.ucsd.edu/~henrik/papers/book/
[4] http://www.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html
[5] http://web.archive.org/web/20120607035534/http://www.cgafaq.info/wiki/Bias_in_rendering
[6] http://www.cs.princeton.edu/courses/archive/fall02/cs526/papers/course43sig02.pdf

Polygon
Polygons are used in computer graphics to compose images that are three-dimensional in appearance. Usually (but
not always) triangular, polygons arise when an object's surface is modeled, vertices are selected, and the object is
rendered in a wire frame model. This is quicker to display than a shaded model; thus the polygons are a stage in
computer animation. The polygon count refers to the number of polygons being rendered per frame.

Competing methods for rendering polygons that avoid seams

Point:
Floating Point
Fixed-Point
Because of rounding, every scanline has its own direction in space and may show its front or back side to the viewer.
Fraction (mathematics)
Bresenham's line algorithm

Polygon:
Polygons have to be split into triangles
The whole triangle shows the same side to the viewer
The point numbers from the Transform and lighting stage have to be converted to Fraction (mathematics)
Barycentric coordinates (mathematics)
Used in raytracing


Potentially visible set


Potentially Visible Sets are used to accelerate the rendering of 3D environments. This is a form of occlusion culling,
whereby a candidate set of potentially visible polygons is pre-computed, then indexed at run-time in order to
quickly obtain an estimate of the visible geometry. The term PVS is sometimes used to refer to any occlusion culling
algorithm (since in effect, this is what all occlusion algorithms compute), although in almost all the literature, it is
used to refer specifically to occlusion culling algorithms that pre-compute visible sets and associate these sets with
regions in space. In order to make this association, the camera view-space (the set of points from which the camera
can render an image) is typically subdivided into (usually convex) regions and a PVS is computed for each region.

Benefits vs. Cost


The benefits of offloading visibility as a pre-process are:
The application just has to look up the pre-computed set given its view position. This set may be further reduced
via frustum culling. Computationally, this is far cheaper than computing occlusion-based visibility every frame (a
sketch of this run-time lookup is given after the lists below).
Within a frame, time is limited. Only 1/60th of a second (assuming a 60 Hz frame rate) is available for visibility
determination, rendering preparation (assuming graphics hardware), AI, physics, or whatever other app-specific
code is required. In contrast, the offline pre-processing of a potentially visible set can take as long as required in
order to compute accurate visibility.
The disadvantages are:

There are additional storage requirements for the PVS data.


Preprocessing times may be long or inconvenient.
Can't be used for completely dynamic scenes.
The visible set for a region can in some cases be much larger than for a point.
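As the first benefit above suggests, the run-time side of a PVS system can be very small. A sketch (the cell layout, the pre-computed table mapping each view cell to object indices, and the frustum test are all assumed for illustration):

    def find_containing_cell(cells, position):
        # cells: list of (cell_id, (min_corner, max_corner)) axis-aligned boxes
        for cell_id, (lo, hi) in cells:
            if all(lo[i] <= position[i] <= hi[i] for i in range(3)):
                return cell_id
        return None

    def visible_objects(camera, cells, pvs_table, objects):
        # The expensive visibility work was done offline; at run time it is a lookup.
        cell_id = find_containing_cell(cells, camera.position)
        candidates = pvs_table.get(cell_id, [])
        # The candidate set may still be reduced by ordinary frustum culling;
        # camera.frustum_contains() is a placeholder for that test.
        return [objects[i] for i in candidates
                if camera.frustum_contains(objects[i].bounds)]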

Primary Problem
The primary problem in PVS computation then becomes: Compute the set of polygons that can be visible from
anywhere inside each region of a set of polyhedral regions.
There are various classifications of PVS algorithms with respect to the type of visibility set they compute.[1]

Conservative algorithms
These overestimate visibility consistently, such that no triangle that is visible may be omitted. The net result is that
no image error is possible, however, it is possible to greatly overestimate visibility, leading to inefficient rendering
(due to the rendering of invisible geometry). The focus on conservative algorithm research is maximizing occluder
fusion in order to reduce this overestimation. The list of publications on this type of algorithm is extensive - good
surveys on this topic include Cohen-Or et al. and Durand.[2]


Aggressive algorithms
These underestimate visibility consistently, such that no redundant (invisible) polygons exist in the PVS set,
although it may be possible to miss a polygon that is actually visible leading to image errors. The focus on
aggressive algorithm research is to reduce the potential error.[3]

Approximate algorithms
These can result in both redundancy and image error.

Exact algorithms
These provide optimal visibility sets, where there is no image error and no redundancy. They are, however, complex
to implement and typically run a lot slower than other PVS based visibility algorithms. Teller computed exact
visibility for a scene subdivided into cells and portals[4] (see also portal rendering).
The first general tractable 3D solutions were presented in 2002 by Nirenstein et al. and Bittner.[5] Haumont et al.
improve on the performance of these techniques significantly. Bittner et al. solve the problem for 2.5D urban scenes.
Although not quite related to PVS computation, the work on the 3D Visibility Complex and 3D Visibility Skeleton
by Durand provides an excellent theoretical background on analytic visibility.
Visibility in 3D is inherently a 4-dimensional problem. To tackle this, solutions are often performed using Plücker
coordinates, which effectively linearize the problem in a 5D projective space. Ultimately, these problems are solved
with higher dimensional constructive solid geometry.

Secondary Problems
Some interesting secondary problems include:
Compute an optimal sub-division in order to maximize visibility culling.
Compress the visible set data in order to minimize storage overhead.

Implementation Variants
It is often undesirable or inefficient to simply compute triangle level visibility. Graphics hardware prefers objects
to be static and remain in video memory. Therefore, it is generally better to compute visibility on a per-object
basis and to sub-divide any objects that may be too large individually. This adds conservativity, but the benefit is
better hardware utilization and compression (since visibility data is now per-object, rather than per-triangle).
Computing cell or sector visibility is also advantageous, since by determining visible regions of space, rather than
visible objects, it is possible to not only cull out static objects in those regions, but dynamic objects as well.

References
[1] S. Nirenstein, E. Blake, and J. Gain. Exact from-region visibility culling (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.7204), In Proceedings of the 13th workshop on Rendering, pages 191–202. Eurographics Association, June 2002.
[2] 3D Visibility: Analytical study and Applications (http://people.csail.mit.edu/fredo/THESE/), Frédo Durand, PhD thesis, Université Joseph Fourier, Grenoble, France, July 1999. Is strongly related to exact visibility computations.
[3] Shaun Nirenstein and Edwin Blake, Hardware Accelerated Visibility Preprocessing using Adaptive Sampling (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.3231), Rendering Techniques 2004: Proceedings of the 15th Eurographics Symposium on Rendering, 207–216, Norrköping, Sweden, June 2004.
[4] Seth Teller, Visibility Computations in Densely Occluded Polyhedral Environments (http://www.eecs.berkeley.edu/Pubs/TechRpts/1992/CSD-92-708.pdf) (Ph.D. dissertation, Berkeley, 1992)
[5] Jiri Bittner. Hierarchical Techniques for Visibility Computations (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.2.9886), PhD Dissertation. Department of Computer Science and Engineering. Czech Technical University in Prague. Submitted October 2002, defended March 2003.


External links
Cited author's pages (including publications):

Jiri Bittner (http://www.cgg.cvut.cz/~bittner/)


Daniel Cohen-Or (http://www.math.tau.ac.il/~dcor/)
Fredo Durand (http://people.csail.mit.edu/fredo/)
Denis Haumont (http://www.ulb.ac.be/polytech/sln/team/dhaumont/dhaumont.html)
Shaun Nirenstein (http://www.nirenstein.com)
Seth Teller (http://people.csail.mit.edu/seth/)
Peter Wonka (http://www.public.asu.edu/~pwonka/)

Other links:
Selected publications on visibility (http://artis.imag.fr/~Xavier.Decoret/bib/visibility/)

Precomputed Radiance Transfer


Precomputed Radiance Transfer (PRT) is a computer graphics technique used to render a scene in real time with
complex light interactions being precomputed to save time. Radiosity methods can be used to determine the diffuse
lighting of the scene; however, PRT offers a method to dynamically change the lighting environment.
In essence, PRT computes the illumination of a point as a linear combination of incident irradiance. An efficient
method must be used to encode this data, such as spherical harmonics.
When spherical harmonics are used to approximate the light transport function, only low-frequency effects can be
handled with a reasonable number of parameters. Ren Ng extended this work to handle higher-frequency shadows by
replacing spherical harmonics with non-linear wavelets.
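For the diffuse, low-frequency case this amounts to a dot product per vertex between two coefficient vectors in the same spherical-harmonic basis: one precomputed transfer vector and one projection of the current lighting environment. A minimal sketch (the coefficient counts and values are purely illustrative):

    def prt_shade_vertex(transfer_coeffs, light_coeffs, albedo):
        # transfer_coeffs: precomputed SH coefficients describing how this vertex
        #                  responds to light from every direction (occlusion and
        #                  interreflection are baked in offline).
        # light_coeffs:    SH projection of the current environment lighting; it
        #                  can change every frame, which keeps the lighting dynamic.
        radiance = sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))
        return tuple(albedo[i] * radiance for i in range(3))

    # Example with a 9-coefficient (3-band) spherical-harmonic basis:
    transfer = [0.8, 0.1, -0.05, 0.02, 0.0, 0.01, -0.02, 0.0, 0.03]
    light = [1.0, 0.3, 0.2, -0.1, 0.0, 0.05, 0.0, 0.0, 0.0]
    print(prt_shade_vertex(transfer, light, (0.9, 0.7, 0.6)))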
Teemu Mäki-Patola gives a clear introduction to the topic based on the work of Peter-Pike Sloan et al. At
SIGGRAPH 2005, a detailed course on PRT was given.

References
Peter-Pike Sloan, Jan Kautz, and John Snyder. "Precomputed Radiance Transfer for Real-time rendering in
Dynamic, Low-Frequency Lighting Environments". ACM Transactions on Graphics, Proceedings of the 29th
Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 527-536. New York,
NY: ACM Press, 2002. (http://www.mpi-inf.mpg.de/~jnkautz/projects/prt/prtSIG02.pdf)
NG, R., RAMAMOORTHI, R., AND HANRAHAN, P. 2003. All-Frequency Shadows Using Non-Linear
Wavelet Lighting Approximation. ACM Transactions on Graphics 22, 3, 376–381. (http://graphics.stanford.
edu/papers/allfreq/allfreq.press.pdf)


Procedural generation
Procedural generation is a widely used term in the production of media; it refers to content generated
algorithmically rather than manually. Often, this means creating content on the fly rather than prior to distribution.
This is often related to computer graphics applications and video game level design.

Overview
The term procedural refers to the process that computes a particular function. Fractals, an example of procedural
generation, dramatically express this concept, around which a whole body of mathematics (fractal geometry) has
evolved. Commonplace procedural content includes textures and meshes. Sound is often procedurally generated as
well and has applications in both speech synthesis as well as music. It has been used to create compositions in
various genres of electronic music by artists such as Brian Eno who popularized the term "generative music".
While software developers have applied procedural generation techniques for years, few products have employed
this approach extensively. Procedurally generated elements have appeared in earlier video games: The Elder Scrolls
II: Daggerfall takes place on a mostly procedurally generated world, giving a world roughly twice the actual size of
the British Isles. Soldier of Fortune from Raven Software uses simple routines to detail enemy models. Avalanche
Studios employed procedural generation to create a large and varied group of tropical islands in great detail for Just
Cause. See also No Man's Sky, a game in development by the studio Hello Games that is based almost entirely upon
procedurally generated elements.
The modern demoscene uses procedural generation to package a great deal of audiovisual content into relatively
small programs. Farbrausch is a team famous for such achievements, although many similar techniques were already
implemented by The Black Lotus in the 1990s.
In recent years, there has been an increasing interest in procedural content generation within the academic game
research community, especially among researchers interested in applying artificial intelligence methods to the
problems of PCG. New methods and applications are presented annually in conferences such as the IEEE
Conference on Computational Intelligence and Games and Artificial Intelligence and Interactive Digital
Entertainment. In particular, progress has been made in using evolutionary computation and related techniques to
generate content such as levels and game rules, an approach called search-based procedural content generation (see
[Search-based procedural content generation: a taxonomy and survey [1]] for more information). In addition, the
Experience-driven Procedural Content Generation [2] framework couples player experience models and search for
the generation of personalised content for the player.

Contemporary application
Video games
The earliest computer games were severely limited by memory constraints. This forced content, such as maps, to be
generated algorithmically on the fly: there simply wasn't enough space to store a large amount of pre-made levels
and artwork. Pseudorandom number generators were often used with predefined seed values in order to create very
large game worlds that appeared premade. For example, The Sentinel supposedly had 10,000 different levels stored
in only 48 and 64 kilobytes. An extreme case was Elite, which was originally planned to contain a total of 2^48
(approximately 282 trillion) galaxies with 256 solar systems each. The publisher, however, was afraid that such a
gigantic universe would cause disbelief in players, and eight of these galaxies were chosen for the final version.
Other notable early examples include the 1985 game Rescue on Fractalus that used fractals to procedurally create in
real time the craggy mountains of an alien planet and River Raid, the 1982 Activision game that used a
pseudorandom number sequence generated by a linear feedback shift register in order to generate a scrolling maze of
obstacles.
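A toy illustration of the seed idea: the same seed always reproduces the same map, so an enormous world can be regenerated on the fly from a single number instead of being stored (the generator below is deliberately trivial):

    import random

    def generate_sector(seed, width=8, height=4):
        rng = random.Random(seed)      # a fixed seed makes the output repeatable
        tiles = ".~^#"                 # plains, water, hills, mountains
        return ["".join(rng.choice(tiles) for _ in range(width))
                for _ in range(height)]

    # Calling this twice with seed 1984 prints the same "terrain" both times.
    for row in generate_sector(seed=1984):
        print(row)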
Today, most games include thousands of times as much data in terms of memory as algorithmic mechanics. For
example, all of the buildings in the large game worlds of the Grand Theft Auto games have been individually
designed and placed by artists. In a typical modern video game, game content such as textures and character and
environment models are created by artists beforehand, then rendered in the game engine. As the technical
capabilities of computers and video game consoles increases, the amount of work required by artists also greatly
increases. First, gaming PCs, previous-generation game consoles like the Xbox 360 and PlayStation 3, and
current-generation game consoles such as the Wii U, PlayStation 4, and Xbox One are capable of rendering scenes
containing many very detailed objects with high-resolution textures in high-definition. This means that artists must
invest a great deal more time in creating a single character, vehicle, building, or texture, since players will tend to
expect ever-increasingly detailed environments.
Furthermore, the number of unique objects displayed in a video game is increasing. In addition to highly detailed
models, players expect a variety of models that appear substantially different from one another. In older games, a
single character or object model might have been used over and over again throughout a game. With the increased
visual fidelity of modern games, however, it is very jarring (and threatens the suspension of disbelief) to see many
copies of a single object, while the real world contains far more variety. Again, artists would be required to complete
exponentially more work in order to create many different varieties of a particular object. The need to hire larger art
staffs is one of the reasons for the rapid increase in game development costs.
Some initial approaches to procedural synthesis attempted to solve these problems by shifting the burden of content
generation from the artists to programmers who can create code which automatically generates different meshes
according to input parameters. Although sometimes this still happens, what has been recognized is that applying a
purely procedural model is often hard at best, requiring huge amounts of time to evolve into a functional, usable and
realistic-looking method. Instead of writing a procedure that completely builds content procedurally, it has been
proven to be much cheaper and more effective to rely on artist created content for some details. For example,
SpeedTree is middleware used to generate a large variety of trees procedurally, yet its leaf textures can be fetched
from regular files, often representing digitally acquired real foliage. Other effective methods to generate hybrid
content are to procedurally merge different pre-made assets or to procedurally apply some distortions to them.
Supposing, however, a single algorithm can be envisioned to generate a realistic-looking tree, the algorithm could be
called to generate random trees, thus filling a whole forest at runtime, instead of storing all the vertices required by
the various models. This would save storage media space and reduce the burden on artists, while providing a richer
experience. The same method would require far more processing power. Since CPUs are constantly increasing in
speed, however, the latter is becoming less of a hurdle.
A different problem is that it is not easy to develop a good algorithm for a single tree, let alone for a variety of
species (compare sumac, birch, maple). An additional caveat is that assembling a realistic-looking forest could not be
done by simply assembling trees because in the real world there are interactions between the various trees which can
dramatically change their appearance and distribution.
In 2004, a PC first-person shooter called .kkrieger was released that made heavy use of procedural synthesis: while
quite short and very simple, the advanced video effects were packed into just 96 Kilobytes. In contrast, many modern
games have to be released on DVDs, often exceeding 2 gigabytes in size, more than 20,000 times larger. Naked
Sky's RoboBlitz used procedural generation to maximize content in a less than 50MB downloadable file for Xbox
Live Arcade. Will Wright's Spore also makes use of procedural synthesis.
In 2008, Valve Software released Left 4 Dead, a first-person shooter based on the Source engine that utilized
procedural generation as a major game mechanic. The game featured a built-in artificial intelligence structure,
dubbed the "Director," which analyzed player statistics and game states on the fly to provide dynamic experiences on
each and every playthrough. Based on different player variables, such as remaining health, ammo, and number of
players, the A.I. Director could potentially create or remove enemies and items so that any given match maintained
an exciting and breakneck pace. Left 4 Dead 2, released in November 2009, expanded on this concept, introducing
even more advanced mechanics to the A.I. Director, such as the ability to generate new paths for players to follow
according to their individual statuses.
One indie game that makes extensive use of procedural generation is Minecraft. In the game the initial state of the
world is mostly random (with guidelines in order to generate Earth-like terrain), and new areas are generated
whenever the player moves towards the edges of the world. This has the benefit that every time a new game is made,
the world is completely different and will need a different method to be successful, adding replay value.
Another indie game that relies heavily on procedural generation is Dwarf Fortress. Before the player starts a game a
whole fantasy world is generated, complete with its terrain, history, notable characters, and monsters.

Film
As in video games, procedural generation is often used in film to rapidly create visually interesting and accurate
spaces. This comes in a wide variety of applications.
One application is known as an "imperfect factory," where artists can rapidly generate a large number of similar
objects. This accounts for the fact that, in real life, no two objects are ever exactly alike. For instance, an artist could
model a product for a grocery store shelf, and then create an imperfect factory that would generate a large number of
similar objects to populate the shelf.
Noise is extremely important to procedural workflow in film, the most prolific of which is Perlin noise. Noise refers
to an algorithm that generates a patterned sequence of pseudorandom numbers.

Software examples
Middleware

Acropora, a procedural 3D modeling software utilizing voxels to create organic objects and terrain.
Art of Illusion, an open source and free 3D modeler, has an internal node-based procedural texture editor.
CityEngine, a procedural 3D modeling software, specialized in city modeling.
Filter Forge, an Adobe Photoshop plugin for designing procedural textures using node-based editing.
Grome, popular terrain and outdoor scenes modeler for games and simulation software.
Houdini, a procedural 3D animation package. A free version of the software is available.
Softimage, a 3D computer graphics application that allows node-based procedural creation and deformation of
geometry.
SpeedTree, a middleware product for procedurally generating trees.
Terragen, a landscape generation software. Terragen 2 permits procedural generation of an entire world.
World Machine, a powerful node-based procedurally generated terrain software with a plugin system to write
new, complex nodes. Exports to Terragen, among other formats, for rendering, as well as having internal texture
generation tools.


Games with procedural levels


Arcade games
The Sentinel (1986) - Used procedural generation to create 10,000 unique levels.
Darwinia (2005) - Has procedural landscapes that allowed for greatly reduced game development time.[citation
needed]

Space simulations with procedural worlds and universes


Elite (1984) - Everything about the universe, planet positions, names, politics and general descriptions, is
generated procedurally; Ian Bell has released the algorithms in C as Text Elite.[3]
Starflight (1986)[citation needed]
Exile (1988) - Game levels were created in a pseudorandom fashion, as areas important to gameplay were
generated.[citation needed]
Frontier: Elite II (1993) - Much as the game Elite had a procedural universe, so did its sequel.[4]
Frontier: First Encounters (1995)
Mankind (1998) - a MMORTS where everything about the galaxy, systems names, planets, maps and resources is
generated procedurally from a simple tga image.

Noctis (2002)[citation needed]


Infinity: The Quest for Earth (In development, not yet released)
Vega Strike - An open source game very similar to Elite.
Limit Theory (Also in development)
Space Engine
No Man's Sky (In development)
Elite: Dangerous

Racing games
Fuel (2009) - Generates an open world through procedural techniques
Gran Turismo 5 (2010) - Features randomly generated rally stages
GRID 2 (2013) - Features a system dubbed "LiveRoutes" that dynamically changes the track by changing the
corner layout on a given circuit lap-by-lap.
Role-playing games
Cube World - Often compared to Minecraft for procedurally generated voxel worlds explored by players,
containing randomly generated dungeons, including underground caverns and over-world castles, as well as
separate biomes such as grasslands, snowlands, deserts, and oceans.[5]
Captive (1990) - Generates (theoretically up to 65,535) game levels procedurally[6]
Virtual Hydlide (1995)
Shin Megami Tensei: Persona 3 (2006) - Features procedurally generated dungeons.
The Elder Scrolls II: Daggerfall (1996)
Diablo (1996) and Diablo II (2000) - Both use procedural generation for level design.[7]
Torchlight - A Diablo clone differing mostly by art style and feel.
DynGen Hunter - A mobile RPG with a procedurally generated, persistent world
Dwarf Fortress - Procedurally generates a large game world, including civilization structures, a large world
history, interactive geography including erosion and magma flows, ecosystems which react with each other and
the game world. The process of initially generating the world can take up to half an hour even on a modern PC,
and is then stored in text files reaching over 100MB to be reloaded whenever the game is played.
Dark Cloud and Dark Cloud 2 - Both generate game levels procedurally.


Hellgate: London (2007)


The Disgaea series of games use procedural level generation for the "Item World".
Realm of the Mad God
Path of Exile
Nearly all roguelikes use this technique.

Strategy games

Majesty: The Fantasy Kingdom Sim (2000) - Uses procedural generation for all levels and scenarios.
Seven Kingdoms (1997) - Uses procedural generation for levels.[citation needed]
Xconq, an open source strategy game and game engine.
Frozen Synapse - Levels in single player are mostly randomly generated, with bits and pieces that are constant in
every generation. Multiplayer maps are randomly generated. Skirmish maps are randomly generated, and allow
the player to change the algorithm used.
Atom Zombie Smasher - Levels are generated randomly.
Freeciv - Uses procedural generation for levels.
Third-person shooters
Inside a Star-Filled Sky - All levels are procedurally generated and unlimited in visual detail.
Sandbox games
Subversion (TBA) - Uses procedural generation to create cities on a given terrain.
Minecraft - The game world is procedurally generated as the player explores it, with the full size possible
stretching out to be nearly eight times the surface area of the Earth before running into technical limits.[8]
Sir, You Are Being Hunted - As of August 22, 2013, the game has three biomes: rural, mountain, and fens.
Starbound - The game's procedural generation will allow players to explore over four hundred quadrillion unique
planets, as well as find procedurally generated weapons and armor.[9]
Terraria - Worlds are procedurally generated, so that each world is different from another one.
Almost entirely procedural games
SCP Containment Breach - The map is divided into three zones, each with its own set of procedurally generated rooms.
Noctis (2000)
.kkrieger (2004)
Synth (video game) (2009) - 100% procedural graphics and levels
Games with miscellaneous procedural effects
ToeJam & Earl (1991) - The random levels were procedurally generated.
The Elder Scrolls III: Morrowind (2002) - Water effects are generated on the fly with procedural animation by the
technique demonstrated in NVIDIA's "Water Interaction" demo.
RoboBlitz (2006) for XBox360 live arcade and PC
Spore (2008)
Left 4 Dead (2008) - Certain events, item locations, and number of enemies are procedurally generated according
to player statistics.
Left 4 Dead 2 (2009) - Certain areas of maps are randomly generated and weather effects are dynamically altered
based on current situation.

Borderlands (2009) - The weapons, items and some levels are procedurally generated based on individual players'
current level.
Star Trek Online (2010) - Star Trek Online procedurally generates new races, new objects, star systems and
planets for exploration. The player can save the coordinates of a system they find, so that they can return or let
other players find the system.
Galactic Arms Race (2010) - Evolves unique particle system weapons based on past player choices using a
custom version of the NEAT evolutionary algorithm.[10]
Terraria (2011) - Terraria procedurally generates a 2D landscape for the player to explore.
Black Mesa (2012) - A common use of procedural generation in video games is to make a few basic character
models, then run them through procedural generation software to make diverse character models, thus enabling
every character to have his, her, or its own model without the development team having to expend too many
resources. An example of a game that does this is the Half-Life 1 total conversion modification Black Mesa,
which has a built-in "Face Creation System" exclusively for that purpose.

References
[1] http://julian.togelius.com/Togelius2011Searchbased.pdf
[2] https://www.itu.dk/~yannakakis/EDPCG.pdf
[3] Ian Bell's Text Elite Page (http://www.iancgbell.clara.net/elite/text/index.htm)
[4] The Frontier Galaxy (http://www.jongware.com/galaxy1.html)
[5] Hernandez, Patricia (July 18, 2013). "I Can't Stop Playing Cube World". Kotaku. Retrieved July 19, 2013.
[6] http://captive.atari.org/Technical/MapGen/Introduction.php
[7] http://diablo.gamepedia.com/Procedural_Generation
[8] http://notch.tumblr.com/post/458869117/how-saving-and-loading-will-work-once-infinite-is-in
[9] http://community.playstarbound.com/index.php?threads/a-little-perspective-on-the-size-of-starbound.27178/#post-1063769
[10] "Galactic Arms Race: Evolving the Shooter", Game Developer Magazine, April 2010.

External links
The Future Of Content (http://www.gamasutra.com/php-bin/news_index.php?story=5570) - Will Wright
keynote on Spore & procedural generation at the Game Developers Conference 2005. (registration required to
view video).
Procedural Graphics - an introduction by in4k (http://in4k.untergrund.net/index.
php?title=Procedural_Graphics_-_an_introduction)
Texturing & Modeling: A Procedural Approach (http://cobweb.ecn.purdue.edu/~ebertd/book2e.html)
Ken Perlin's Discussion of Perlin Noise (http://www.noisemachine.com/talk1/)
Procedural Content Generation Wiki (http://pcg.wikidot.com/): a community dedicated to documenting,
analyzing, and discussing all forms of procedural content generation.
Procedural Trees and Procedural Fire in a Virtual World (http://software.intel.com/en-us/articles/
procedural-trees-and-procedural-fire-in-a-virtual-world/): A white paper on creating procedural trees and
procedural fire using the Intel Smoke framework
A Real-Time Procedural Universe (http://www.gamasutra.com/view/feature/3098/
a_realtime_procedural_universe_.php) a tutorial on generating procedural planets in real-time
Search-based procedural content generation: a taxonomy and survey (http://julian.togelius.com/Togelius2011Searchbased.pdf)
PCG Google Group (https://groups.google.com/forum/#!forum/proceduralcontent)

Procedural texture
A procedural texture is a computer-generated image
created using an algorithm intended to create a realistic
representation of natural elements such as wood, marble,
granite, metal, stone, and others.
Usually, the natural look of the rendered result is achieved by
the usage of fractal noise and turbulence functions. These
functions are used as a numerical representation of the
randomness found in nature.

Solid texturing
Solid texturing is a process in which the texture-generating function is evaluated over ℝ³ at each visible surface point of the model. Traditionally these functions use Perlin noise as their basis function, but some simple functions may use more trivial methods such as the sum of sinusoidal functions. Solid textures are an alternative to the traditional 2D texture images which are applied to the surfaces of a model. It is a difficult and tedious task to get multiple 2D textures to form a consistent visual appearance on a model without it looking obviously tiled. Solid textures were created specifically to solve this problem.

[Figure: A procedural floor grate texture generated with the texture editor Genetica.[1]]
Instead of editing images to fit a model, a function is used to evaluate the colour of the point being textured. Points are evaluated based on their 3D position, not their 2D surface position. Consequently, solid textures are unaffected by distortions of the surface parameter space, such as one might see near the poles of a sphere. Continuity between the surface parameterizations of adjacent patches is likewise not a concern. Solid textures remain consistent and have features of constant size regardless of distortions in the surface coordinate system.[2]
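As a minimal sketch of the idea (added here for illustration; the noise3 helper is a hypothetical stand-in, a crude sum of sinusoids rather than true Perlin noise), a solid texture can be evaluated directly from a point's 3D position:

import math

def noise3(x, y, z):
    """Hypothetical 3D noise stand-in: a cheap sum of sinusoids.
    A real solid texture would typically call Perlin noise here."""
    return (math.sin(x * 12.9898 + y * 78.233 + z * 37.719) +
            math.sin(x * 3.1 + y * 1.7 + z * 2.3)) * 0.5

def marble_color(p, scale=1.0):
    """Evaluate a marble-like solid texture at a 3D point p = (x, y, z).
    The 3D position alone drives the colour, so no 2D UV mapping is needed."""
    x, y, z = (c * scale for c in p)
    # Sum several octaves of noise to get a turbulence-like value.
    turbulence = sum(abs(noise3(x * f, y * f, z * f)) / f for f in (1, 2, 4, 8))
    t = 0.5 + 0.5 * math.sin(x + 4.0 * turbulence)   # veining along x
    dark, light = (0.2, 0.2, 0.3), (0.8, 0.8, 0.9)
    return tuple(d + (l - d) * t for d, l in zip(dark, light))

print(marble_color((0.3, 1.2, -0.7)))

Because only the 3D position is used, the same function shades every surface of a model consistently, with no visible seams or tiling.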

Cellular texturing
Cellular texturing differs from the majority of other procedural texture generating techniques as it does not depend
on noise functions as its basis, although it is often used to complement the technique. Cellular textures are based on
feature points which are scattered over a three dimensional space. These points are then used to split up the space
into small, randomly tiled regions called cells. These cells often look like lizard scales, pebbles, or flagstones.
Even though these regions are discrete, the cellular basis function itself is continuous and can be evaluated anywhere
in space.[3]
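A sketch of the core of such a basis function (illustrative only; the feature-point layout and count are arbitrary choices made for this example):

import math
import random

def make_feature_points(n=64, seed=1):
    """Scatter n feature points in the unit cube (assumed layout for this sketch)."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def cellular_basis(p, feature_points):
    """Worley-style basis: distance from p to the nearest feature point.
    The basis is continuous everywhere; colouring instead by *which* point is
    nearest produces the discrete-looking cells."""
    return min(math.dist(p, f) for f in feature_points)

points = make_feature_points()
print(cellular_basis((0.4, 0.1, 0.8), points))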

Genetic textures
Genetic texture generation is a highly experimental approach to generating textures. It is a highly automated process in which a human is used only to moderate the eventual outcome. The flow of control usually has a computer generate a set of texture candidates, from which a user picks a selection. The computer then generates another set of textures by mutating and crossing over elements of the user-selected textures.[4] For more information on exactly how this mutation and crossover generation method is achieved, see Genetic algorithm. The process continues until a texture suitable to the user is generated. This is not a commonly used method of generating textures, as it is very difficult to control and direct the eventual outcome. Because of this, it is typically used only for experimentation or abstract textures.
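A toy version of that loop might look like the following sketch, in which the "user selection" step is simulated by preferring candidates close to a fixed target and the texture genome is just a handful of sinusoid parameters (both are assumptions made purely for illustration):

import random

def random_genome(rng):
    """A texture 'genome': a few frequencies/phases for a simple sinusoidal pattern."""
    return [rng.uniform(0.0, 10.0) for _ in range(4)]

def mutate(genome, rng, rate=0.3):
    return [g + rng.gauss(0, 1) if rng.random() < rate else g for g in genome]

def crossover(a, b, rng):
    return [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]

def evolve_textures(generations=10, population=12, keep=4, seed=0):
    """Generate candidates, 'select' some, then mutate and cross them over."""
    rng = random.Random(seed)
    target = [5.0, 2.0, 7.0, 1.0]          # stand-in for the user's taste

    def preference(g):                      # simulated user selection
        return sum((gi - ti) ** 2 for gi, ti in zip(g, target))

    pool = [random_genome(rng) for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=preference)
        selected = pool[:keep]              # the "picked" candidates
        # Breed a new candidate set from the selection.
        pool = [mutate(crossover(rng.choice(selected), rng.choice(selected), rng), rng)
                for _ in range(population)]
    return min(pool, key=preference)

print(evolve_textures())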


Self-organizing textures
Starting from simple white noise, self-organization processes lead to structured patterns while retaining a degree of randomness. Reaction-diffusion systems are a good example of how to generate such textures.
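One commonly used reaction-diffusion model is the Gray–Scott system; the sketch below (the parameter values are illustrative choices, not taken from this article) starts from near-uniform noise and lets two chemical fields self-organize into a pattern that can be mapped to grey levels as a texture:

import numpy as np

def gray_scott(size=128, steps=5000, F=0.037, k=0.060, Du=0.16, Dv=0.08):
    """Gray-Scott reaction-diffusion on a toroidal grid (illustrative parameters)."""
    U = np.ones((size, size))
    V = np.zeros((size, size))
    # Seed a small square of the second chemical plus a little white noise.
    s = size // 2
    V[s-5:s+5, s-5:s+5] = 0.5
    U += 0.02 * np.random.random((size, size))
    V += 0.02 * np.random.random((size, size))

    def laplacian(Z):
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return V  # map V to grey levels to obtain the texture

texture = gray_scott(size=64, steps=2000)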

Example of a procedural marble texture


(Taken from The Renderman Companion Book, by Steve Upstill)
/* Copyrighted Pixar 1988 */
/* From the RenderMan Companion p.355 */
/* Listing 16.19  Blue marble surface shader */

/*
 * blue_marble(): a marble stone texture in shades of blue
 */
surface
blue_marble(
        float   Ks            = .4,
                Kd            = .6,
                Ka            = .1,
                roughness     = .1,
                txtscale      = 1;
        color   specularcolor = 1)
{
        point   PP;                     /* scaled point in shader space */
        float   csp;                    /* color spline parameter */
        point   Nf;                     /* forward-facing normal */
        point   V;                      /* for specular() */
        float   pixelsize, twice, scale, weight, turbulence;

        /* Obtain a forward-facing normal for lighting calculations. */
        Nf = faceforward(normalize(N), I);
        V  = normalize(-I);

        /*
         * Compute "turbulence" a la [PERLIN85]. Turbulence is a sum of
         * "noise" components with a "fractal" 1/f power spectrum. It gives the
         * visual impression of turbulent fluid flow (for example, as in the
         * formation of blue_marble from molten color splines!). Use the
         * surface element area in texture space to control the number of
         * noise components so that the frequency content is appropriate
         * to the scale. This prevents aliasing of the texture.
         */
        PP = transform("shader", P) * txtscale;
        pixelsize = sqrt(area(PP));
        twice = 2 * pixelsize;
        turbulence = 0;
        for (scale = 1; scale > twice; scale /= 2)
                turbulence += scale * noise(PP/scale);

        /* Gradual fade out of highest-frequency component near limit */
        if (scale > pixelsize) {
                weight = (scale / pixelsize) - 1;
                weight = clamp(weight, 0, 1);
                turbulence += weight * scale * noise(PP/scale);
        }

        /*
         * Magnify the upper part of the turbulence range 0.75:1
         * to fill the range 0:1 and use it as the parameter of
         * a color spline through various shades of blue.
         */
        csp = clamp(4 * turbulence - 3, 0, 1);
        Ci = color spline(csp,
                color (0.25, 0.25, 0.35),       /* pale blue        */
                color (0.25, 0.25, 0.35),       /* pale blue        */
                color (0.20, 0.20, 0.30),       /* medium blue      */
                color (0.20, 0.20, 0.30),       /* medium blue      */
                color (0.20, 0.20, 0.30),       /* medium blue      */
                color (0.25, 0.25, 0.35),       /* pale blue        */
                color (0.25, 0.25, 0.35),       /* pale blue        */
                color (0.15, 0.15, 0.26),       /* medium dark blue */
                color (0.15, 0.15, 0.26),       /* medium dark blue */
                color (0.10, 0.10, 0.20),       /* dark blue        */
                color (0.10, 0.10, 0.20),       /* dark blue        */
                color (0.25, 0.25, 0.35),       /* pale blue        */
                color (0.10, 0.10, 0.20)        /* dark blue        */
        );

        /* Multiply this color by the diffusely reflected light. */
        Ci *= Ka*ambient() + Kd*diffuse(Nf);

        /* Adjust for opacity. */
        Oi = Os;
        Ci = Ci * Oi;

        /* Add in specular highlights. */
        Ci += specularcolor * Ks * specular(Nf, V, roughness);
}
This article was taken from The Photoshop Roadmap[5] with written authorization.


References
[1] http://www.spiralgraphics.biz/gallery.htm
[2] Ebert et al: Texturing and Modeling: A Procedural Approach, page 10. Morgan Kaufmann, 2003.
[3] Ebert et al: Texturing and Modeling: A Procedural Approach, page 135. Morgan Kaufmann, 2003.
[4] Ebert et al: Texturing and Modeling: A Procedural Approach, page 547. Morgan Kaufmann, 2003.
[5] http://www.photoshoproadmap.com

Some programs for creating textures using procedural texturing

Allegorithmic Substance Designer
Filter Forge
Genetica (program) (http://www.spiralgraphics.biz/genetica.htm)
DarkTree (http://www.darksim.com/html/dt25_description.html)
Context Free Art (http://www.contextfreeart.org/index.html)
TexRD (http://www.texrd.com) (based on reaction-diffusion: self-organizing textures)
Texture Garden (http://texturegarden.com)
Enhance Textures (http://www.shaders.co.uk)

3D projection

3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current
methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection
is widespread, especially in computer graphics, engineering and drafting.

Orthographic projection
When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic
projection ignores this effect to allow the creation of to-scale drawings for construction and engineering.
Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a
three dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and
elevation.
If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point $(a_x, a_y, a_z)$ onto the 2D point $(b_x, b_y)$ using an orthographic projection parallel to the y axis (profile view), the following equations can be used:

$b_x = s_x a_x + c_x$
$b_y = s_z a_z + c_z$


where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:

$\begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & 0 & s_z \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} + \begin{bmatrix} c_x \\ c_z \end{bmatrix}.$
While orthographically projected images represent the three dimensional nature of the object projected, they do not
represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In
particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of
whether they are far away or near to the virtual viewer. As a result, lengths near to the viewer are not foreshortened
as they would be in a perspective projection.
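In code, the orthographic projection above amounts to dropping the depth coordinate and applying the optional scale and offset (a sketch, assuming the profile view parallel to the y axis):

def project_orthographic(point, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection parallel to the y axis (profile view).
    point = (ax, ay, az); s and c are the optional scale and offset above."""
    ax, ay, az = point
    bx = s[0] * ax + c[0]   # ay is simply dropped: no foreshortening
    by = s[1] * az + c[1]
    return bx, by

print(project_orthographic((2.0, 5.0, 1.0)))   # ay has no effect on the result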

Weak perspective projection


A "weak" perspective projection uses the same principles of an orthographic projection, but requires the scaling
factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice-versa. It can be seen
as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths $z_i$ replaced by an average constant depth $Z_{\text{ave}}$, or simply as an orthographic projection plus a scaling.
The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic projection. It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. Under these conditions it can be assumed that all points on the 3D object lie at the same distance $Z_{\text{ave}}$ from the camera without significant error in the projection (compared to the full perspective model).

Perspective projection
When the human eye views a scene, objects in the distance appear smaller than objects close by - this is known as
perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.
The perspective projection requires a more involved definition as compared to orthographic projections. A
conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the
object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control
the behavior of the projection transformation. The following variables are defined to describe this transformation:

$\mathbf{a}_{x,y,z}$ - the 3D position of a point A that is to be projected.
$\mathbf{c}_{x,y,z}$ - the 3D position of a point C representing the camera.
$\mathbf{\theta}_{x,y,z}$ - the orientation of the camera (represented, for instance, by Tait–Bryan angles).
$\mathbf{e}_{x,y,z}$ - the viewer's position relative to the display surface.

Which results in:

$\mathbf{b}_{x,y}$ - the 2D projection of $\mathbf{a}$.

When $\mathbf{c}_{x,y,z} = \langle 0,0,0 \rangle$ and $\mathbf{\theta}_{x,y,z} = \langle 0,0,0 \rangle$, the 3D vector $\langle 1,2,0 \rangle$ is projected to the 2D vector $\langle 1,2 \rangle$.

Otherwise, to compute $\mathbf{b}_{x,y}$ we first define a vector $\mathbf{d}_{x,y,z}$ as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by $\mathbf{\theta}$ with respect to the initial coordinate system. This is achieved by subtracting $\mathbf{c}$ from $\mathbf{a}$ and then applying a rotation by $-\mathbf{\theta}$ to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):

This representation corresponds to rotating by three Euler angles (more properly, TaitBryan angles), using the xyz
convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x
(reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading
left-to-right)". Note that if the camera is not rotated ($\mathbf{\theta}_{x,y,z} = \langle 0,0,0 \rangle$), then the matrices drop out (as identities), and this reduces to simply a shift: $\mathbf{d} = \mathbf{a} - \mathbf{c}$.
Alternatively, without using matrices (let's replace (ax-cx) with x and so on, and abbreviate cos to c and sin to s):

This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection
plane; literature also may use x/z):

Or, in matrix form using homogeneous coordinates, the system

in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving

The distance of the viewer from the display surface, $\mathbf{e}_z$, directly relates to the field of view: $\alpha = 2 \cdot \arctan(1/\mathbf{e}_z)$ is the viewed angle. (Note: this assumes that the points (-1,-1) and (1,1) are mapped to the corners of the viewing surface.)

The above equations can also be rewritten in terms of the display size, the recording surface size (CCD or film), the distance from the recording surface to the entrance pupil (camera center), and the distance from the 3D point being projected to the entrance pupil.


Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.


Diagram

To determine which screen x-coordinate corresponds to a point at $(A_x, A_z)$, multiply the point coordinates by:

$B_x = A_x \frac{B_z}{A_z}$

where
$B_x$ is the screen x coordinate,
$A_x$ is the model x coordinate,
$B_z$ is the focal length (the axial distance from the camera center to the image plane),
$A_z$ is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above diagram
and equation.
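A rough sketch of the whole pipeline, combining a camera transform with the pinhole division described above (the rotation order, handedness and parameter names here are assumptions of this illustration, not the article's exact conventions):

import math

def rotate_xyz(p, theta):
    """Apply rotations about the x, y and z axes in turn (one possible convention;
    the article's left-handed setup may differ in signs)."""
    x, y, z = p
    cx, sx = math.cos(theta[0]), math.sin(theta[0])
    cy, sy = math.cos(theta[1]), math.sin(theta[1])
    cz, sz = math.cos(theta[2]), math.sin(theta[2])
    y, z = cx * y + sx * z, -sx * y + cx * z      # about x
    x, z = cy * x - sy * z, sy * x + cy * z       # about y
    x, y = cz * x + sz * y, -sz * x + cz * y      # about z
    return x, y, z

def project_perspective(a, c=(0, 0, 0), theta=(0, 0, 0), e_z=1.0):
    """Camera transform (shift by -c, rotate by -theta) followed by the pinhole
    divide; e_z plays the role of the viewer distance / focal length."""
    d = rotate_xyz(tuple(ai - ci for ai, ci in zip(a, c)),
                   tuple(-t for t in theta))
    dx, dy, dz = d
    if dz == 0:
        raise ValueError("point lies in the camera plane")
    return (e_z * dx / dz, e_z * dy / dz)

print(project_perspective((1.0, 2.0, 5.0)))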


External links
A case study in camera projection (http://nccasymposium.bmth.ac.uk/2007/muhittin_bilginer/index.html)
Creating 3D Environments from Digital Photographs (http://nccasymposium.bmth.ac.uk/2009/
McLaughlin_Chris/McLaughlin_C_WebBasedNotes.pdf)

Further reading
Kenneth C. Finney (2004). 3D Game Programming All in One (http://books.google.com/?id=cknGqaHwPFkC&pg=PA93&dq="3D+projection"). Thomson Course. p. 93. ISBN 978-1-59200-136-1.
Koehler, Dr. Ralph. 2D/3D Graphics and Splines with Source Code. ISBN 0759611874.


Quaternions and spatial rotation


Unit quaternions, also known as versors, provide a convenient mathematical notation for representing orientations
and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the
problem of gimbal lock. Compared to rotation matrices they are more numerically stable and may be more efficient.
Quaternions have found their way into applications in computer graphics, computer vision, robotics, navigation,
molecular dynamics, flight dynamics,[1] and orbital mechanics of satellites.[2]
When used to represent rotation, unit quaternions are also called rotation quaternions. When used to represent an
orientation (rotation relative to a reference position), they are called orientation quaternions or attitude
quaternions.

Using quaternion rotations


According to Euler's rotation theorem, any rotation or sequence of rotations of a rigid body or coordinate system about a fixed point is equivalent to a single rotation by a given angle θ about a fixed axis (called the Euler axis) that runs through the fixed point. The Euler axis is typically represented by a unit vector u. Therefore, any rotation in three dimensions can be represented as a combination of a vector u and a scalar θ. Quaternions give a simple way to encode this axis–angle representation in four numbers, and to apply the corresponding rotation to a position vector representing a point relative to the origin in R³.
A Euclidean vector such as (2, 3, 4) or (ax, ay, az) can be rewritten as 2 i + 3 j + 4 k or ax i + ay j + az k, where i, j, k
are unit vectors representing the three Cartesian axes. A rotation through an angle of θ around the axis defined by a unit vector

$\mathbf{u} = (u_x, u_y, u_z) = u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k}$

is represented by a quaternion using an extension of Euler's formula:

$\mathbf{q} = e^{\frac{\theta}{2}(u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})} = \cos\frac{\theta}{2} + (u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})\sin\frac{\theta}{2}$
The rotation is clockwise if our line of sight points in the same direction as u.
It can be shown that this rotation can be applied to an ordinary vector $\mathbf{p} = (p_x, p_y, p_z) = p_x\mathbf{i} + p_y\mathbf{j} + p_z\mathbf{k}$ in 3-dimensional space, considered as a quaternion with a real coordinate equal to zero, by evaluating the conjugation of p by q:

$\mathbf{p'} = \mathbf{q}\,\mathbf{p}\,\mathbf{q}^{-1}$

using the Hamilton product, where p′ = (p′x, p′y, p′z) is the new position vector of the point after the rotation.
In this instance, q is a unit quaternion and

$\mathbf{q}^{-1} = e^{-\frac{\theta}{2}(u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})} = \cos\frac{\theta}{2} - (u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})\sin\frac{\theta}{2}.$
It follows that conjugation by the product of two quaternions is the composition of conjugations by these
quaternions. If p and q are unit quaternions, then rotation (conjugation) by pq is

$\mathbf{p}\mathbf{q}\,\mathbf{v}\,(\mathbf{p}\mathbf{q})^{-1} = \mathbf{p}\,(\mathbf{q}\,\mathbf{v}\,\mathbf{q}^{-1})\,\mathbf{p}^{-1},$

which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily zero.
The quaternion inverse of a rotation is the opposite rotation, since $\mathbf{q}^{-1}(\mathbf{q}\,\mathbf{p}\,\mathbf{q}^{-1})\mathbf{q} = \mathbf{p}$. The square of a quaternion rotation is a rotation by twice the angle around the same axis. More generally, $\mathbf{q}^n$ is a rotation by n times the angle around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial orientations; see Slerp.
Two rotation quaternions can be combined into one equivalent quaternion by the relation:

$\mathbf{q'} = \mathbf{q}_2\mathbf{q}_1$
in which q′ corresponds to the rotation q1 followed by the rotation q2. (Note that quaternion multiplication is not
commutative.) Thus, an arbitrary number of rotations can be composed together and then applied as a single rotation.
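For illustration, the Hamilton product, conjugation and composition described above can be written directly in code (the (w, x, y, z) component order and the helper names are choices made for this sketch, not part of the article):

import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    ux, uy, uz = axis
    n = math.sqrt(ux*ux + uy*uy + uz*uz)
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), ux * s, uy * s, uz * s)

def quat_mul(q, r):
    """Hamilton product q * r."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate_vector(v, q):
    """Rotate 3-vector v by unit quaternion q via the conjugation q v q^-1."""
    p = (0.0, *v)                         # embed v as a pure quaternion
    w, x, y, z = quat_mul(quat_mul(q, p), quat_conjugate(q))
    return (x, y, z)

# A 90-degree rotation about the z axis takes (1, 0, 0) to (0, 1, 0).
q = quat_from_axis_angle((0, 0, 1), math.pi / 2)
print(rotate_vector((1.0, 0.0, 0.0), q))

Composing two rotations is then just quat_mul(q2, q1), applied afterwards as a single conjugation.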

Example
The conjugation operation

Conjugating p by q refers to the operation p ↦ q p q⁻¹.

Consider the rotation f around the axis $\vec{v} = \mathbf{i} + \mathbf{j} + \mathbf{k}$, with a rotation angle of 120°, or 2π/3 radians.

[Figure: A rotation of 120° around the first diagonal permutes i, j, and k cyclically.]

The length of $\vec{v}$ is √3, the half angle is π/3 (60°) with cosine 1/2 (cos 60° = 0.5) and sine √3/2 (sin 60° ≈ 0.866). We are therefore dealing with a conjugation by the unit quaternion

$q = \cos 60° + \sin 60° \cdot \frac{\vec{v}}{\|\vec{v}\|} = \frac{1 + \mathbf{i} + \mathbf{j} + \mathbf{k}}{2}.$

[Figure: p ↦ q p q⁻¹ for q = (1 + i + j + k)/2 on the unit 3-sphere. Note that this one-sided (namely, left) multiplication yields a 60° rotation of quaternions.]
If f is the rotation function,

$f(a\mathbf{i} + b\mathbf{j} + c\mathbf{k}) = q\,(a\mathbf{i} + b\mathbf{j} + c\mathbf{k})\,q^{-1}$

It can be proved that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components. As a consequence,

$q^{-1} = \frac{1 - \mathbf{i} - \mathbf{j} - \mathbf{k}}{2}$

and

$f(a\mathbf{i} + b\mathbf{j} + c\mathbf{k}) = \frac{1 + \mathbf{i} + \mathbf{j} + \mathbf{k}}{2}\,(a\mathbf{i} + b\mathbf{j} + c\mathbf{k})\,\frac{1 - \mathbf{i} - \mathbf{j} - \mathbf{k}}{2}$

This can be simplified, using the ordinary rules for quaternion arithmetic, to

$f(a\mathbf{i} + b\mathbf{j} + c\mathbf{k}) = c\mathbf{i} + a\mathbf{j} + b\mathbf{k}$

As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long diagonal through the fixed point (observe how the three axes are permuted cyclically).
Quaternion arithmetic in practice
Let's show how we reached the previous result. Let's develop the expression of f (in two stages), and apply the rules

It gives us:

which is the expected result. As we can see, such computations are relatively long and tedious if done manually;
however, in a computer program, this amounts to calling the quaternion multiplication routine twice.


Quaternion-derived rotation matrix


A quaternion rotation can be algebraically manipulated into a quaternion-derived rotation matrix. By simplifying the quaternion multiplications q p q*, they can be rewritten as a rotation matrix given an axis–angle representation:

$R = \begin{bmatrix} c + u_x^2(1-c) & u_x u_y(1-c) - u_z s & u_x u_z(1-c) + u_y s \\ u_y u_x(1-c) + u_z s & c + u_y^2(1-c) & u_y u_z(1-c) - u_x s \\ u_z u_x(1-c) - u_y s & u_z u_y(1-c) + u_x s & c + u_z^2(1-c) \end{bmatrix}$

where s and c are shorthand for sin θ and cos θ, respectively. Although care should be taken (due to degeneracy as
the quaternion approaches the identity quaternion(1) or the sine of the angle approaches zero) the axis and angle can
be extracted via:

Note that the equality holds only when the square root of the sum of the squared imaginary terms takes the same
sign as qr.
As with other schemes to apply rotations, the centre of rotation must be translated to the origin before the rotation is
applied and translated back to its original position afterwards.

Explanation
Quaternions briefly
The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra
and additionally the rule i2 = 1. This is sufficient to reproduce all of the rules of complex number arithmetic: for
example:
.
In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules i2 = j2
= k2 = i j k = 1 and the usual algebraic rules except the commutative law of multiplication (a familiar example of
such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic
follow: for example, one can show that:

.
The imaginary part $b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ of a quaternion behaves like a vector $\vec{v} = (b, c, d)$ in three-dimensional vector space, and the real part a behaves like a scalar in ℝ. When quaternions are used in geometry, it is more convenient to define them as a scalar plus a vector:

$q = a + \vec{v}.$
Those who have studied vectors at school might find it strange to add a number to a vector, as they are objects of
very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one
remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In
other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and
another one with zero scalar/real part:
.
We can express quaternion multiplication in the modern language of vector cross and dot products (which were
actually inspired by the quaternions in the first place [citation needed]). In place of the rules i2 = j2 = k2 = ijk = 1 we

Quaternions and spatial rotation

136

have the quaternion multiplication rule:

$q_1 q_2 = (a_1 a_2 - \vec{v}_1 \cdot \vec{v}_2) + (a_1\vec{v}_2 + a_2\vec{v}_1 + \vec{v}_1 \times \vec{v}_2)$

where:
$q_1 q_2$ is the resulting quaternion,
$\vec{v}_1 \times \vec{v}_2$ is the vector cross product (a vector),
$\vec{v}_1 \cdot \vec{v}_2$ is the vector scalar product (a scalar).

Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while
scalar–scalar and scalar–vector multiplications commute. From these rules it follows immediately that (see details):
.
The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm
ratio (see details):
,
as can be verified by direct calculation.

Proof of the quaternion rotation identity


Let $\vec{u}$ be a unit vector (the rotation axis) and let $q = \cos\frac{\theta}{2} + \vec{u}\sin\frac{\theta}{2}$. Our goal is to show that

$\vec{v}' = q\,\vec{v}\,q^{-1}$

yields the vector $\vec{v}$ rotated by an angle $\theta$ around the axis $\vec{u}$. Expanding out (and writing $\vec{v} = \vec{v}_\perp + \vec{v}_\parallel$), we have

$\vec{v}' = \vec{v}_\parallel + \vec{v}_\perp\cos\theta + (\vec{u}\times\vec{v}_\perp)\sin\theta$

where $\vec{v}_\perp$ and $\vec{v}_\parallel$ are the components of $\vec{v}$ perpendicular and parallel to $\vec{u}$ respectively. This is the formula of a rotation by $\theta$ around the $\vec{u}$ axis.


Quaternion rotation operations


A very formal explanation of the properties used in this section is given by Altman.[3]

The hypersphere of rotations


Visualizing the space of rotations
Unit quaternions represent the group of Euclidean rotations in three dimensions in a very straightforward way. The
correspondence between rotations and quaternions can be understood by first visualizing the space of rotations itself.
In order to visualize the space of rotations, it helps to
consider a simpler case. Any rotation in three
dimensions can be described by a rotation by some
angle about some axis; for our purposes, we will use an
axis vector to establish handedness for our angle.
Consider the special case in which the axis of rotation
lies in the xyplane. We can then specify the axis of one
of these rotations by a point on a circle through which
the vector crosses, and we can select the radius of the
circle to denote the angle of rotation. Similarly, a
rotation whose axis of rotation lies in the xyplane can
be described as a point on a sphere of fixed radius in
three dimensions. Beginning at the north pole of a
sphere in three dimensional space, we specify the point
at the north pole to be the identity rotation (a zero angle rotation). Just as in the case of the identity rotation, no axis of rotation is defined, and the angle of rotation (zero) is irrelevant. A rotation having a very small rotation angle can be specified by a slice through the sphere parallel to the xy plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the rotation angles become larger, the slice moves in the negative z direction, and the circles become larger until the equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees. Continuing southward, the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity rotation, which is also specified as the point at the south pole.

[Figure: Two rotations by different angles and different axes in the space of rotations. The length of the vector is related to the magnitude of the rotation.]
Notice that a number of characteristics of such rotations and their representations can be seen by this visualization.
The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and
this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two
antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the
fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about
an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a
particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the
north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360 degrees. (The "longitude" of a point then represents a particular axis of rotation.) Note however that this set of
rotations is not closed under composition. Two successive rotations with axes in the xyplane will not necessarily
give a rotation whose axis lies in the xyplane, and thus cannot be represented as a point on the sphere. This will not
be the case with a general rotation in 3-space, in which rotations do form a closed set under composition.


This visualization can be extended to a general rotation


in 3-dimensional space. The identity rotation is a point,
and a small angle of rotation about some axis can be
represented as a point on a sphere with a small radius.
As the angle of rotation grows, the sphere grows, until
the angle of rotation reaches 180 degrees, at which
point the sphere begins to shrink, becoming a point as
the angle approaches 360 degrees (or zero degrees from
the negative direction). This set of expanding and
contracting spheres represents a hypersphere in four
dimensional space (a 3-sphere). Just as in the simpler
example above, each rotation represented as a point on
the hypersphere is matched by its antipodal point on
that hypersphere. The "latitude" on the hypersphere
will be half of the corresponding angle of rotation, and
the neighborhood of any point will become "flatter"
(i.e. be represented by a 3-D Euclidean space of points)
as the neighborhood shrinks. This behavior is matched by the set of unit quaternions: a general quaternion represents a point in a four dimensional space, but constraining it to have unit magnitude yields a three dimensional space equivalent to the surface of a hypersphere. The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius. The vector part of a unit quaternion represents the radius of the 2-sphere corresponding to the axis of rotation, and its magnitude is the sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion.

[Figure: The sphere of rotations for the rotations that have a "horizontal" axis (in the xy plane).]

Parameterizing the space of rotations


We can parameterize the surface of a sphere with two coordinates, such as latitude and longitude. But latitude and longitude are ill-behaved (degenerate) at the north and south poles, though the poles are not intrinsically different from any other points on the sphere. At the poles (latitudes +90° and −90°), the longitude becomes meaningless.

It can be shown that no two-parameter coordinate system can avoid such degeneracy. We can avoid such problems by embedding the sphere in three-dimensional space and parameterizing it with three Cartesian coordinates (w, x, y), placing the north pole at (w, x, y) = (1, 0, 0), the south pole at (w, x, y) = (−1, 0, 0), and the equator at w = 0, x² + y² = 1. Points on the sphere satisfy the constraint w² + x² + y² = 1, so we still have just two degrees of freedom though there are three coordinates. A point (w, x, y) on the sphere represents a rotation in the ordinary space around the horizontal axis directed by the vector (x, y, 0) by an angle $\alpha = 2\cos^{-1} w = 2\sin^{-1}\sqrt{x^2 + y^2}$.

In the same way the hyperspherical space of 3D rotations can be parameterized by three angles (Euler angles), but any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock. We can avoid this by using four Euclidean coordinates w, x, y, z, with w² + x² + y² + z² = 1. The point (w, x, y, z) represents a rotation around the axis directed by the vector (x, y, z) by an angle $\alpha = 2\cos^{-1} w = 2\sin^{-1}\sqrt{x^2 + y^2 + z^2}$.


Explaining quaternions' properties with rotations


Non-commutativity
The multiplication of quaternions is non-commutative. This fact explains how the p ↦ q p q⁻¹ formula can work at all, having q q⁻¹ = 1 by definition. Since the multiplication of unit quaternions corresponds to the composition of
three dimensional rotations, this property can be made intuitive by showing that three dimensional rotations are not
commutative in general.
Set two books next to each other. Rotate one of them 90 degrees clockwise around the z axis, then flip it 180 degrees around the x axis. Take the other book, flip it 180° around the x axis first, and 90° clockwise around z later. The two
books do not end up parallel. This shows that, in general, the composition of two different rotations around two
distinct spatial axes will not commute.

Orientation
The vector cross product, used to define the axisangle representation, does confer an orientation ("handedness") to
space: in a three-dimensional vector space, the three vectors in the equation a b = c will always form a
right-handed set (or a left-handed set, depending on how the cross product is defined), thus fixing an orientation in
the vector space. Alternatively, the dependence on orientation is expressed in referring to such u that specifies a
rotation as to axial vectors. In quaternionic formalism the choice of an orientation of the space corresponds to order
of multiplication: ij = k but ji = k. If one reverses the orientation, then the formula above becomes p q1 p q,
i.e. a unit q is replaced with the conjugate quaternion the same behaviour as of axial vectors.

Comparison with other representations of rotations


Advantages of quaternions
The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as an orthogonal matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the corresponding
quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are
much harder with matrices or Euler angles.
In video games and other applications, one is often interested in smooth rotations, meaning that the scene should
slowly rotate and not in a single step. This can be accomplished by choosing a curve such as the spherical linear
interpolation in the quaternions, with one endpoint being the identity transformation 1 (or some other initial rotation)
and the other being the intended final rotation. This is more problematic with other representations of rotations.
When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that's slightly off still represents a rotation after being normalised; a matrix that's slightly off may not be orthogonal any more and is harder to convert back to a proper orthogonal matrix.
Quaternions also avoid a phenomenon called gimbal lock which can result when, for example in pitch/yaw/roll
rotational systems, the pitch is rotated 90° up or down, so that yaw and roll then correspond to the same motion, and
a degree of freedom of rotation is lost. In a gimbal-based aerospace inertial navigation system, for instance, this
could have disastrous results if the aircraft is in a steep dive or ascent.


Conversion to and from the matrix representation


From a quaternion to an orthogonal matrix
The orthogonal matrix corresponding to a rotation by the unit quaternion z = a + b i + c j + d k (with |z| = 1) when post-multiplying with a column vector is given by

$\begin{bmatrix} a^2+b^2-c^2-d^2 & 2bc-2ad & 2bd+2ac \\ 2bc+2ad & a^2-b^2+c^2-d^2 & 2cd-2ab \\ 2bd-2ac & 2cd+2ab & a^2-b^2-c^2+d^2 \end{bmatrix}$
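In code form, and assuming the same (a, b, c, d) component order, the conversion might be sketched as:

def quat_to_matrix(a, b, c, d):
    """Rotation matrix for the unit quaternion z = a + b i + c j + d k.
    Assumes |z| = 1; normalise first if that is not guaranteed."""
    return [
        [a*a + b*b - c*c - d*d, 2*b*c - 2*a*d,         2*b*d + 2*a*c],
        [2*b*c + 2*a*d,         a*a - b*b + c*c - d*d, 2*c*d - 2*a*b],
        [2*b*d - 2*a*c,         2*c*d + 2*a*b,         a*a - b*b - c*c + d*d],
    ]

# The identity quaternion gives the identity matrix.
print(quat_to_matrix(1.0, 0.0, 0.0, 0.0))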

From an orthogonal matrix to a quaternion


One must be careful when converting a rotation matrix to a quaternion, as several straightforward methods tend to be
unstable when the trace (sum of the diagonal elements) of the rotation matrix is zero or very small. For a stable
method of converting an orthogonal matrix to a quaternion, see Rotation matrix #Quaternion.
Fitting quaternions
The above section described how to recover a quaternion q from a 3×3 rotation matrix Q. Suppose, however, that we have some matrix Q that is not a pure rotation (due to round-off errors, for example) and we wish to find the quaternion q that most accurately represents Q. In that case we construct a symmetric 4×4 matrix

and find the eigenvector (x, y, z, w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q.

Performance comparisons
This section discusses the performance implications of using quaternions versus other methods (axis/angle or
rotation matrices) to perform rotations in 3D.
Results

Storage requirements

Method           Storage
Rotation matrix  9
Quaternion       4
Angle/axis       3*

* Note: angle/axis can be stored as 3 elements by multiplying the unit rotation axis by half of the rotation angle, forming the logarithm of the quaternion, at the cost of additional calculations.


Performance comparison of rotation chaining operations

Method             # multiplies   # add/subtracts   total operations
Rotation matrices  27             18                45
Quaternions        16             12                28

Performance comparison of vector rotating operations

Method           # multiplies   # add/subtracts   # sin/cos   total operations
Rotation matrix  9              6                 0           15
Quaternions      15             15                0           30
Angle/axis       23             16                2           41

Used methods

There are three basic approaches to rotating a vector v:

1. Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector.
2. A rotation can be represented by a unit-length quaternion q = (w, r) with scalar (real) part w and vector (imaginary) part r. The rotation can be applied to a 3D vector v via the formula $\vec{v}' = \vec{v} + 2\vec{r}\times(\vec{r}\times\vec{v} + w\vec{v})$. This requires only 15 multiplications and 15 additions to evaluate (or 18 multiplications and 12 additions if the factor of 2 is done via multiplication). This yields the same result as the less efficient but more compact formula $\vec{v}' = q\,\vec{v}\,q^{-1}$. A sketch of this method is given below.
3. Use the angle/axis formula to convert an angle/axis to a rotation matrix R, then multiply with the vector. Converting the angle/axis to R using common subexpression elimination costs 14 multiplies, 2 function calls (sin, cos), and 10 add/subtracts; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a total of 23 multiplies, 16 add/subtracts, and 2 function calls (sin, cos).
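A sketch of the second method (the (w, rx, ry, rz) component layout is an assumption of this illustration):

def rotate_fast(v, q):
    """Rotate 3-vector v by unit quaternion q = (w, r) using
    v' = v + 2 r x (r x v + w v), avoiding full quaternion products.
    As written (factor of 2 applied by multiplication) this is 18 muls, 12 adds."""
    w, rx, ry, rz = q
    vx, vy, vz = v
    # t = r x v + w * v
    tx = ry * vz - rz * vy + w * vx
    ty = rz * vx - rx * vz + w * vy
    tz = rx * vy - ry * vx + w * vz
    # v' = v + 2 * (r x t)
    return (vx + 2 * (ry * tz - rz * ty),
            vy + 2 * (rz * tx - rx * tz),
            vz + 2 * (rx * ty - ry * tx))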

Pairs of unit quaternions as rotations in 4D space


A pair of unit quaternions zl and zr can represent any rotation in 4D space. Given a four dimensional vector v, and
pretending that it is a quaternion, we can rotate the vector v like this:

It is straightforward to check that for each matrix M Mᵀ = I, that is, that each matrix (and hence both matrices together) represents a rotation. Note that, since quaternion multiplication is associative, $(z_l v) z_r = z_l (v z_r)$, so the two matrices must commute. Therefore, there are two commuting subgroups of the set of four dimensional rotations. Arbitrary four dimensional rotations have 6 degrees of freedom; each matrix represents 3 of those 6 degrees of freedom.
Since an infinitesimal four-dimensional rotation can be represented by a pair of quaternions (as follows), all
(non-infinitesimal) four-dimensional rotations can also be represented.


References
[1] Amnon Katz (1996) Computational Rigid Vehicle Dynamics, Krieger Publishing Co. ISBN 978-1575240169
[2] J. B. Kuipers (1999) Quaternions and rotation Sequences: a Primer with Applications to Orbits, Aerospace, and Virtual Reality, Princeton
University Press ISBN 978-0-691-10298-6
[3] Simon L. Altman (1986) Rotations, Quaternions, and Double Groups, Dover Publications (see especially Ch. 12).

E. P. Battey-Pratt & T. J. Racey (1980) Geometric Model for Fundamental Particles International Journal of
Theoretical Physics. Vol 19, No. 6

External links and resources


Shoemake, Ken. Quaternions (http://www.cs.caltech.edu/courses/cs171/quatut.pdf)
Simple Quaternion type and operations in over thirty computer languages (http://rosettacode.org/wiki/
Simple_Quaternion_type_and_operations) on Rosetta Code
Hart, Francis, Kauffman. Quaternion demo (http://graphics.stanford.edu/courses/cs348c-95-fall/software/
quatdemo/)
Dam, Koch, Lillholm. Quaternions, Interpolation and Animation (http://www.diku.dk/publikationer/tekniske.
rapporter/1998/98-5.ps.gz)
Vicci, Leandra. Quaternions and Rotations in 3-Space: The Algebra and its Geometric Interpretation (http://
www.cs.unc.edu/techreports/01-014.pdf)
Howell, Thomas and Lafon, Jean-Claude. The Complexity of the Quaternion Product, TR75-245, Cornell
University, 1975 (http://world.std.com/~sweetser/quaternions/ps/cornellcstr75-245.pdf)
Berthold K.P. Horn. Some Notes on Unit Quaternions and Rotation (http://people.csail.mit.edu/bkph/articles/
Quaternions.pdf).


Radiosity
Radiosity is a global illumination algorithm
used in 3D computer graphics rendering.
Radiosity is an application of the finite
element method to solving the rendering
equation for scenes with surfaces that reflect
light diffusely. Unlike rendering methods
that use Monte Carlo algorithms (such as
path tracing), which handle all types of light
paths, typical radiosity methods only
account for paths which leave a light source
and are reflected diffusely some number of
times (possibly zero) before hitting the eye;
such paths are represented by the code
"LD*E". Radiosity is a global illumination
algorithm in the sense that the illumination arriving at the eye comes not just from the light sources, but from all the scene surfaces interacting with each other as well. Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful for all viewpoints.

[Figure: Screenshot of a scene rendered with RRV (a simple implementation of a radiosity renderer based on OpenGL), 79th iteration.]
Radiosity methods were first developed in about 1950 in the engineering field of heat transfer. They were later
refined specifically for application to the problem of rendering computer graphics in 1984 by researchers at Cornell
University.[1]
Notable commercial radiosity engines are Enlighten by Geomerics, used for games including Battlefield 3 and Need
for Speed: The Run, 3D Studio Max, formZ, LightWave 3D and the Electric Image Animation System.

Visual characteristics
The inclusion of radiosity calculations in the rendering process often lends an added element of realism to the finished scene, because of the way it mimics real-world phenomena. Consider a simple room scene.

[Figure: Difference between standard direct illumination without shadow umbra, and radiosity with shadow umbra.]

The image on the left was rendered with a typical direct illumination renderer. There are three types of lighting in this scene which have been specifically chosen and placed by the artist in an attempt to create realistic lighting: spot lighting with shadows (placed outside the window to create the light shining on the floor), ambient lighting (without which any part of the room not lit directly by a light source would be totally dark), and omnidirectional lighting without shadows (to reduce the flatness of the ambient lighting).

The image on the right was rendered using a radiosity algorithm. There is only one source of light: an image of the
sky placed outside the window. The difference is marked. The room glows with light. Soft shadows are visible on
the floor, and subtle lighting effects are noticeable around the room. Furthermore, the red color from the carpet has
bled onto the grey walls, giving them a slightly warm appearance. None of these effects were specifically chosen or
designed by the artist.

Overview of the radiosity algorithm


The surfaces of the scene to be rendered are each divided up into one or more smaller surfaces (patches). A view
factor is computed for each pair of patches. View factors (also known as form factors) are coefficients describing
how well the patches can see each other. Patches that are far away from each other, or oriented at oblique angles
relative to one another, will have smaller view factors. If other patches are in the way, the view factor will be
reduced or zero, depending on whether the occlusion is partial or total.
The view factors are used as coefficients in a linearized form of the rendering equation, which yields a linear system
of equations. Solving this system yields the radiosity, or brightness, of each patch, taking into account diffuse
interreflections and soft shadows.
Progressive radiosity solves the system iteratively in such a way that after each iteration we have intermediate
radiosity values for the patch. These intermediate values correspond to bounce levels. That is, after one iteration, we
know how the scene looks after one light bounce, after two passes, two bounces, and so forth. Progressive radiosity
is useful for getting an interactive preview of the scene. Also, the user can stop the iterations once the image looks
good enough, rather than wait for the computation to numerically converge.
Another common method for solving the radiosity equation is "shooting radiosity," which iteratively solves the radiosity equation by "shooting" light from the patch with the most error at each step. After the first pass, only those patches which are in direct line of sight of a light-emitting patch will be illuminated. After the second pass, more patches will become illuminated as the light begins to bounce around the scene. The scene continues to grow brighter and eventually reaches a steady state.

[Figure: As the algorithm iterates, light can be seen to flow into the scene, as multiple bounces are computed. Individual patches are visible as squares on the walls and floor.]

Mathematical formulation
The basic radiosity method has its basis in the theory of thermal radiation, since radiosity relies on computing the
amount of light energy transferred among surfaces. In order to simplify computations, the method assumes that all
scattering is perfectly diffuse. Surfaces are typically discretized into quadrilateral or triangular elements over which a
piecewise polynomial function is defined.
After this breakdown, the amount of light energy transfer can be computed by using the known reflectivity of the
reflecting patch, combined with the view factor of the two patches. This dimensionless quantity is computed from
the geometric orientation of two patches, and can be thought of as the fraction of the total possible emitting area of
the first patch which is covered by the second patch.
More correctly, radiosity B is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy:

$B(x)\,\mathrm{d}A_x = E(x)\,\mathrm{d}A_x + \rho(x)\,\mathrm{d}A_x \int_S B(x')\,\frac{\cos\theta_x \cos\theta_{x'}}{\pi r^2}\,\mathrm{Vis}(x, x')\,\mathrm{d}A_{x'}$

where:

144

Radiosity

145

B(x) dAx is the total energy leaving a small area dAx around a point x.
E(x) dAx is the emitted energy.
ρ(x) is the reflectivity of the point, giving reflected energy per unit area by multiplying by the incident energy per unit area (the total energy which arrives from other patches).
S denotes that the integration variable x' runs over all the surfaces in the scene.
r is the distance between x and x'.
θx and θx' are the angles between the line joining x and x' and vectors normal to the surface at x and x' respectively.
Vis(x, x') is a visibility function, defined to be 1 if the two points x and x' are visible from each other, and 0 if they are not.
If the surfaces are approximated by a finite number of planar patches, each of which is taken to have a constant radiosity Bi and reflectivity ρi, the above equation gives the discrete radiosity equation,

$B_i = E_i + \rho_i \sum_{j=1}^{n} F_{ij} B_j$

where Fij is the geometrical view factor for the radiation leaving j and hitting patch i.

This equation can then be applied to each patch. The equation is monochromatic, so color radiosity rendering requires calculation for each of the required colors.

[Figure: The geometrical form factor (or "projected solid angle") Fij. Fij can be obtained by projecting the element Aj onto the surface of a unit hemisphere, and then projecting that in turn onto a unit circle around the point of interest in the plane of Ai. The form factor is then equal to the proportion of the unit circle covered by this projection. Form factors obey the reciprocity relation Ai Fij = Aj Fji.]

Solution methods

The equation can formally be solved as a matrix equation, to give the vector solution:

$B = (I - \rho F)^{-1} E$

This gives the full "infinite bounce" solution for B directly. However the number of calculations to compute the matrix solution scales according to n³, where n is the number of patches. This becomes prohibitive for realistically large values of n.
Instead, the equation can more readily be solved iteratively, by repeatedly applying the single-bounce update formula above. Formally, this is a solution of the matrix equation by Jacobi iteration. Because the reflectivities ρi are less than 1, this scheme converges quickly, typically requiring only a handful of iterations to produce a reasonable solution. Other standard iterative methods for matrix equation solutions can also be used, for example the Gauss–Seidel method, where updated values for each patch are used in the calculation as soon as they are computed, rather than all being updated synchronously at the end of each sweep. The solution can also be tweaked to iterate over each of the sending elements in turn in its main outermost loop for each update, rather than each of the receiving patches. This is known as the shooting variant of the algorithm, as opposed to the gathering variant. Using the view factor reciprocity, Ai Fij = Aj Fji, the update equation can also be re-written in terms of the view factor Fji seen by each sending patch Aj:

This is sometimes known as the "power" formulation, since it is now the total transmitted power of each element that
is being updated, rather than its radiosity.
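To make the gathering (Jacobi) update concrete, here is a small illustrative sketch in which the emissions, reflectivities and view-factor matrix are assumed to be given:

def solve_radiosity(E, rho, F, iterations=50):
    """Gathering (Jacobi) radiosity: B_i = E_i + rho_i * sum_j F_ij * B_j.
    E and rho are per-patch lists; F is the n x n view-factor matrix (assumed known)."""
    n = len(E)
    B = list(E)                          # after zero bounces, only emission
    for _ in range(iterations):          # each sweep adds one more light bounce
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two facing patches: patch 0 emits, patch 1 only reflects.
E   = [1.0, 0.0]
rho = [0.5, 0.8]
F   = [[0.0, 0.2],
       [0.2, 0.0]]
print(solve_radiosity(E, rho, F))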
The view factor Fij itself can be calculated in a number of ways. Early methods used a hemicube (an imaginary cube
centered upon the first surface to which the second surface was projected, devised by Cohen and Greenberg in 1985).

Radiosity
The surface of the hemicube was divided into pixel-like squares, for each of which a view factor can be readily
calculated analytically. The full form factor could then be approximated by adding up the contribution from each of
the pixel-like squares. The projection onto the hemicube, which could be adapted from standard methods for
determining the visibility of polygons, also solved the problem of intervening patches partially obscuring those
behind.
However all this was quite computationally expensive, because ideally form factors must be derived for every
possible pair of patches, leading to a quadratic increase in computation as the number of patches increased. This can
be reduced somewhat by using a binary space partitioning tree to reduce the amount of time spent determining which
patches are completely hidden from others in complex scenes; but even so, the time spent to determine the form
factor still typically scales as n log n. New methods include adaptive integration.[2]

Sampling approaches
The form factors Fij themselves are not in fact explicitly needed in either of the update equations; neither to estimate the total intensity Σj Fij Bj gathered from the whole view, nor to estimate how the power Aj Bj being radiated is
distributed. Instead, these updates can be estimated by sampling methods, without ever having to calculate form
factors explicitly. Since the mid 1990s such sampling approaches have been the methods most predominantly used
for practical radiosity calculations.
The gathered intensity can be estimated by generating a set of samples in the unit circle, lifting these onto the
hemisphere, and then seeing what was the radiosity of the element that a ray incoming in that direction would have
originated on. The estimate for the total gathered intensity is then just the average of the radiosities discovered by
each ray. Similarly, in the power formulation, power can be distributed by generating a set of rays from the radiating
element in the same way, and spreading the power to be distributed equally between each element a ray hits.
This is essentially the same distribution that a path-tracing program would sample in tracing back one diffuse
reflection step; or that a bidirectional ray tracing program would sample to achieve one forward diffuse reflection
step when light source mapping forwards. The sampling approach therefore to some extent represents a convergence
between the two techniques, the key difference remaining that the radiosity technique aims to build up a sufficiently
accurate map of the radiance of all the surfaces in the scene, rather than just a representation of the current view.

Reducing computation time


Although in its basic form radiosity is assumed to have a quadratic increase in computation time with added
geometry (surfaces and patches), this need not be the case. The radiosity problem can be rephrased as a problem of
rendering a texture mapped scene. In this case, the computation time increases only linearly with the number of
patches (ignoring complex issues like cache use).
Following the commercial enthusiasm for radiosity-enhanced imagery, but prior to the standardization of rapid
radiosity calculation, many architects and graphic artists used a technique referred to loosely as false radiosity. By
darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via
self-illumination or diffuse mapping, a radiosity-like effect of patch interaction could be created with a standard
scanline renderer (cf. ambient occlusion).
Static, pre-computed radiosity may be displayed in realtime via Lightmaps on current desktop computers with
standard graphics acceleration hardware.


Advantages
One of the advantages of the radiosity algorithm is that it is relatively simple to explain and implement. This makes it a useful algorithm for teaching students about global illumination algorithms. A typical direct illumination renderer already contains nearly all of the algorithms (perspective transformations, texture mapping, hidden surface removal) required to implement radiosity. A strong grasp of mathematics is not required to understand or implement this algorithm.[citation needed]

[Figure: A modern render of the iconic Utah teapot. Radiosity was used for all diffuse illumination in this scene.]

Limitations

Typical radiosity methods only account for light paths of the form
LD*E, i.e., paths which start at a light source and make multiple diffuse bounces before reaching the eye. Although
there are several approaches to integrating other illumination effects such as specular[3] and glossy [4] reflections,
radiosity-based methods are generally not used to solve the complete rendering equation.
Basic radiosity also has trouble resolving sudden changes in visibility (e.g., hard-edged shadows) because coarse,
regular discretization into piecewise constant elements corresponds to a low-pass box filter of the spatial domain.
Discontinuity meshing [5] uses knowledge of visibility events to generate a more intelligent discretization.

Confusion about terminology


Radiosity was perhaps the first rendering algorithm in widespread use which accounted for diffuse indirect lighting.
Earlier rendering algorithms, such as Whitted-style ray tracing were capable of computing effects such as reflections,
refractions, and shadows, but despite being highly global phenomena, these effects were not commonly referred to as
"global illumination." As a consequence, the term "global illumination" became confused with "diffuse
interreflection," and "Radiosity" became confused with "global illumination" in popular parlance. However, the three
are distinct concepts.
The radiosity method in the current computer graphics context derives from (and is fundamentally the same as) the
radiosity method in heat transfer. In this context radiosity is the total radiative flux (both reflected and re-radiated)
leaving a surface, also sometimes known as radiant exitance. Calculation of Radiosity rather than surface
temperatures is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.

References
[1] "Cindy Goral, Kenneth E. Torrance, Donald P. Greenberg and B. Battaile, Modeling the interaction of light between diffuse surfaces (http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S07/lectures/goral.pdf)", Computer Graphics, Vol. 18, No. 3.
[2] G Walton, Calculation of Obstructed View Factors by Adaptive Integration, NIST Report NISTIR-6925 (http://www.bfrl.nist.gov/IAQanalysis/docs/NISTIR-6925.pdf); see also http://view3d.sourceforge.net/
[3] http://portal.acm.org/citation.cfm?id=37438&coll=portal&dl=ACM
[4] http://www.cs.huji.ac.il/labs/cglab/papers/clustering/
[5] http://www.cs.cmu.edu/~ph/discon.ps.gz


Further reading
Radiosity Overview, from HyperGraph of SIGGRAPH (http://www.siggraph.org/education/materials/
HyperGraph/radiosity/overview_1.htm) (provides full matrix radiosity algorithm and progressive radiosity
algorithm)
Radiosity, by Hugo Elias (http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm) (also provides a
general overview of lighting algorithms, along with programming examples)
Radiosity, by Allen Martin (http://web.cs.wpi.edu/~matt/courses/cs563/talks/radiosity.html) (a slightly
more mathematical explanation of radiosity)
ROVER, by Tralvex Yeap (http://www.tralvex.com/pub/rover/abs-mnu.htm) (Radiosity Abstracts &
Bibliography Library)
Radiosity: Basic Implementations (https://www.academia.edu/738011/
The_Radiosity_Algorithm_Basic_Implementations) (Basic radiosity survey)

External links
RADical, by Parag Chaudhuri (http://www.cse.iitd.ernet.in/~parag/projects/CG2/asign2/report/RADical.
shtml) (an implementation of shooting & sorting variant of progressive radiosity algorithm with OpenGL
acceleration, extending from GLUTRAD by Colbeck)
Radiosity Renderer and Visualizer (http://dudka.cz/rrv) (simple implementation of radiosity renderer based on
OpenGL)
Enlighten (http://www.geomerics.com) (Licensed software code that provides realtime radiosity for computer
game applications. Developed by the UK company Geomerics)

Ray casting
Ray casting is the use of ray-surface intersection tests to solve a variety of problems in computer graphics. The term
was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering CSG models.

Usage
Ray casting can refer to:
the general problem of determining the first object intersected by a ray,
a technique for hidden surface removal based on finding the first intersection of a ray cast from the eye through
each pixel of an image,
a non-recursive ray tracing rendering algorithm that only casts primary rays, or
a direct volume rendering method, also called volume ray casting.
Although "ray casting" and "ray tracing" were often used interchangeably in early computer graphics literature, more
recent usage tries to distinguish the two. The distinction is that ray casting is a rendering algorithm that never
recursively traces secondary rays, whereas other ray tracing-based rendering algorithms may.

Concept
Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of
ray tracing. Ray tracing-based rendering algorithms operate in image order to render three dimensional scenes to two
dimensional images. Geometric rays are traced from the eye of the observer to sample the light (radiance) travelling
toward the observer from the ray direction. The speed and simplicity of ray casting comes from computing the color
of the light without recursively tracing additional rays that sample the radiance incident on the point that the ray hit.

This eliminates the possibility of accurately rendering reflections, refractions, or the natural falloff of shadows;
however all of these elements can be faked to a degree, by creative use of texture maps or other methods. The high
speed of calculation made ray casting a handy rendering method in early real-time 3D video games.
In nature, a light source emits a ray of light that travels, eventually, to a surface that interrupts its progress. One can
think of this "ray" as a stream of photons travelling along the same path. At this point, any combination of three
things might happen with this light ray: absorption, reflection, and refraction. The surface may reflect all or part of
the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the
reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the
light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the
color). Between absorption, reflection, and refraction, all of the incoming light must be accounted for, and no more.
A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to
be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive,
and reflective properties are again calculated based on the incoming rays. Some of these rays travel in such a way
that they hit our eye, causing us to see the scene and so contribute to the final rendered image. Attempting to
simulate this real-world process of tracing light rays using a computer can be considered extremely wasteful, as only
a minuscule fraction of the rays in a scene would actually reach the eye.
The first ray casting algorithm used for rendering was presented by Arthur Appel in 1968.[1] The idea behind ray
casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray - think of
an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through
that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the
shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that
surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer
graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability
to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be
intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modelling
techniques and easily rendered.
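The loop described here is simple enough to sketch directly. The following Python fragment is a minimal, illustrative ray caster in the spirit of Appel's method: one ray per pixel, a nearest-hit test against a few spheres, and simple diffuse shading under the assumption (made in the text) that the light always reaches the surface. The scene, camera model and light used here are hypothetical; real renderers organize this quite differently.

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def normalize(v):
        n = math.sqrt(dot(v, v))
        return [x / n for x in v]

    def intersect_sphere(origin, direction, center, radius):
        # Smallest positive distance t at which the ray hits the sphere, or None.
        v = sub(origin, center)
        b = dot(v, direction)                 # direction is assumed to be a unit vector
        disc = b * b - (dot(v, v) - radius * radius)
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t > 1e-6 else None

    def cast_ray(origin, direction, spheres, light_dir):
        # One primary ray, no recursion: find the nearest sphere and shade it with a
        # Lambert term, assuming the light is never blocked.
        nearest = None
        for center, radius, color in spheres:
            t = intersect_sphere(origin, direction, center, radius)
            if t is not None and (nearest is None or t < nearest[0]):
                nearest = (t, center, color)
        if nearest is None:
            return (0.0, 0.0, 0.0)            # background
        t, center, color = nearest
        point = [o + t * d for o, d in zip(origin, direction)]
        normal = normalize(sub(point, center))
        shade = max(0.0, dot(normal, light_dir))
        return tuple(c * shade for c in color)

    # One ray per pixel through a toy image plane at z = 1 (hypothetical scene).
    spheres = [([0.0, 0.0, 3.0], 1.0, (1.0, 0.2, 0.2))]
    light_dir = normalize([1.0, 1.0, -1.0])
    width = height = 32
    image = [[cast_ray([0.0, 0.0, 0.0],
                       normalize([(x + 0.5) / width - 0.5,
                                  0.5 - (y + 0.5) / height,
                                  1.0]),
                       spheres, light_dir)
              for x in range(width)]
             for y in range(height)]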
An early use of Appel's ray casting rendering algorithm was by Mathematical Applications Group, Inc., (MAGI) of
Elmsford, New York.[2]

Ray casting in computer games


Wolfenstein 3-D
The world in Wolfenstein 3-D is built from a square based grid of uniform height walls meeting solid coloured floors
and ceilings. In order to draw the world, a single ray is traced for every column of screen pixels and a vertical slice
of wall texture is selected and scaled according to where in the world the ray hits a wall and how far it travels before
doing so.[3]
The purpose of the grid-based levels is twofold: ray-to-wall collisions can be found more quickly, since the potential
hits become more predictable, and memory overhead is reduced. However, encoding wide-open areas takes extra
space.
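To make the per-column idea concrete, here is a small Python sketch of a grid ray caster in the Wolfenstein style. It uses a simple fixed-step march rather than the exact cell-stepping (DDA) traversal the original engine uses, and the map, field of view and scaling constant are hypothetical.

    import math

    # 1 = wall, 0 = empty: a hypothetical 8 x 8 map in the spirit of Wolfenstein 3-D.
    MAP = [[1, 1, 1, 1, 1, 1, 1, 1],
           [1, 0, 0, 0, 0, 0, 0, 1],
           [1, 0, 0, 0, 1, 0, 0, 1],
           [1, 0, 0, 0, 1, 0, 0, 1],
           [1, 0, 0, 0, 0, 0, 0, 1],
           [1, 0, 1, 1, 0, 0, 0, 1],
           [1, 0, 0, 0, 0, 0, 0, 1],
           [1, 1, 1, 1, 1, 1, 1, 1]]

    def cast_column(px, py, angle, step=0.01):
        # March one ray through the grid and return the distance to the first wall.
        # (A real engine would use an exact cell-stepping DDA instead of small steps.)
        dx, dy = math.cos(angle), math.sin(angle)
        x, y = px, py
        while 0 <= int(x) < 8 and 0 <= int(y) < 8:
            if MAP[int(y)][int(x)] == 1:
                return math.hypot(x - px, y - py)
            x += dx * step
            y += dy * step
        return float("inf")

    screen_w, fov = 320, math.pi / 3
    player_x, player_y, player_angle = 3.5, 3.5, 0.0
    for col in range(screen_w):
        ray_angle = player_angle - fov / 2 + fov * col / screen_w
        dist = cast_column(player_x, player_y, ray_angle)
        dist *= math.cos(ray_angle - player_angle)      # remove fish-eye distortion
        wall_height = int(200 / max(dist, 1e-6))        # scale the wall slice by distance
        # ...select a vertical slice of wall texture and draw it wall_height pixels tall.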


Comanche series
The so-called "Voxel Space" engine developed by NovaLogic for the Comanche games traces a ray through each
column of screen pixels and tests each ray against points in a heightmap. Then it transforms each element of the
heightmap into a column of pixels, determines which are visible (that is, have not been occluded by pixels that have
been drawn in front), and draws them with the corresponding color from the texture map.[4]
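A minimal sketch of this front-to-back height-field rendering is given below in Python, assuming a hypothetical procedural terrain; real engines sample height and colour from precomputed maps and tune the projection constants carefully.

    import math

    # Hypothetical terrain: height and colour sampled from simple analytic functions.
    def terrain_height(x, y):
        return 20.0 * (math.sin(x * 0.1) + math.cos(y * 0.1))

    def terrain_color(x, y):
        return (int(x) % 256, int(y) % 256, 128)

    def render_column(cam_x, cam_y, cam_z, angle, screen_h, horizon=100, scale=120):
        # March one ray over the height field, front to back.  Each terrain sample is
        # projected to a screen row; samples hidden behind nearer terrain are skipped.
        dx, dy = math.cos(angle), math.sin(angle)
        column = [(135, 206, 235)] * screen_h           # start with sky colour
        highest = screen_h                              # smallest screen row filled so far
        for dist in range(1, 300):
            x, y = cam_x + dx * dist, cam_y + dy * dist
            row = int((cam_z - terrain_height(x, y)) / dist * scale + horizon)
            row = max(row, 0)
            if row < highest:                           # visible: not occluded
                for r in range(row, highest):
                    column[r] = terrain_color(x, y)
                highest = row
        return column

    screen_w, screen_h, fov = 320, 200, math.pi / 3
    frame = [render_column(0.0, 0.0, 50.0, -fov / 2 + fov * c / screen_w, screen_h)
             for c in range(screen_w)]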

Computational geometry setting


In computational geometry, the ray casting problem is also known as the ray shooting problem and may be stated
as the following query problem. Given a set of objects in d-dimensional space, preprocess them into a data structure
so that for each query ray the first object hit by the ray can be found quickly. The problem has been investigated for
various settings: space dimension, types of objects, restrictions on query rays, etc.[5] One technique is to use a sparse
voxel octree.

References
[1] "Ray-tracing and other Rendering Approaches" (http:/ / nccastaff. bournemouth. ac. uk/ jmacey/ CGF/ slides/ RayTracing4up. pdf) (PDF),
lecture notes, MSc Computer Animation and Visual Effects, Jon Macey, University of Bournemouth
[2] Goldstein, R. A., and R. Nagel. 3-D visual simulation. Simulation 16(1), pp. 2531, 1971.
[3] Wolfenstein-style ray casting tutorial (http:/ / www. permadi. com/ tutorial/ raycast/ ) by F. Permadi
[4] Andre LaMothe. Black Art of 3D Game Programming. 1995, ISBN 1-57169-004-2, pp. 14, 398, 935-936, 941-943.
[5] "Ray shooting, depth orders and hidden surface removal", by Mark de Berg, Springer-Verlag, 1993, ISBN 3-540-57020-9, 201 pp.

External links
Raycasting planes in WebGL with source code (http://adrianboeing.blogspot.com/2011/01/
raycasting-two-planes-in-webgl.html)
Raycasting (http://leftech.com/raycaster.htm)
Interactive raycaster for the Commodore 64 in 254 bytes (with source code) (http://pouet.net/prod.
php?which=61298)

Ray tracing
In computer graphics, ray tracing is a technique for
generating an image by tracing the path of light through
pixels in an image plane and simulating the effects of
its encounters with virtual objects. The technique is
capable of producing a very high degree of visual
realism, usually higher than that of typical scanline
rendering methods, but at a greater computational cost.
This makes ray tracing best suited for applications
where the image can be rendered slowly ahead of time,
such as in still images and film and television visual
effects, and more poorly suited for real-time
applications like video games where speed is critical.
Ray tracing is capable of simulating a wide variety of
optical effects, such as reflection and refraction,
scattering, and dispersion phenomena (such as
chromatic aberration).

This recursive ray tracing of a sphere demonstrates the effects of shallow depth of field, area light sources and diffuse interreflection.

Algorithm overview
Optical ray tracing describes a method for
producing visual images constructed in 3D
computer graphics environments, with more
photorealism than either ray casting or
scanline rendering techniques. It works by
tracing a path from an imaginary eye
through each pixel in a virtual screen, and
calculating the color of the object visible
through it.
Scenes in ray tracing are described
mathematically by a programmer or by a
visual artist (typically using intermediary
tools). Scenes may also incorporate data
from images and models captured by means
such as digital photography.

The ray tracing algorithm builds an image by extending rays into a scene

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest
object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the
material properties of the object, and combine this information to calculate the final color of the pixel. Certain
illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual
light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of
light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could

Ray tracing
potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in raytracing is to presuppose that a given ray intersects the view frame. After either a
maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and
the pixel's value is updated.

Detailed description of ray tracing computer algorithm and its genesis


What happens in nature
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One
can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a
straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray:
absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of
intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more
directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself
in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less
commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength
colour in a random direction, though this is rare enough that it can be discounted from most rendering applications.
Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no
more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add
up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive,
refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays
travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.

Ray casting algorithm


The first ray tracing algorithm used for rendering was presented by Arthur Appel[1] in 1968. This algorithm has since
been termed "ray casting". The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the
closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being
a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the
lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that
if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface
is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered
over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and
spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects
can be created by using solid modeling techniques and easily rendered.


Recursive ray tracing algorithm


The next important research breakthrough
came from Turner Whitted in 1979.[2]
Previous algorithms traced rays from the eye
into the scene until they hit an object, but
determined the ray color without recursively
tracing more rays. Whitted continued the
process. When a ray hits a surface, it can
generate up to three new types of rays:
reflection, refraction, and shadow. A
reflection ray is traced in the mirror-reflection direction. The closest
object it intersects is what will be seen in the
reflection. Refraction rays traveling through
transparent material work similarly, with the
addition that a refractive ray could be
entering or exiting a material. A shadow ray
is traced toward each light. If any opaque
object is found between the surface and the
light, the surface is in shadow and the light
does not illuminate it. This recursive ray
tracing added more realism to ray traced
images.

Ray tracing can create realistic images.
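The recursion Whitted described can be sketched compactly. The following Python fragment is an illustrative sketch, not Whitted's original code: it spawns a shadow ray toward a single point light and a recursive reflection ray; refraction rays are omitted for brevity, and the scene description is hypothetical.

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(v, k): return [x * k for x in v]
    def normalize(v):
        n = math.sqrt(dot(v, v))
        return [x / n for x in v]

    def hit_sphere(orig, dirn, center, radius):
        # Nearest positive hit distance along the (unit) ray direction, or None.
        v = sub(orig, center)
        b = dot(v, dirn)
        disc = b * b - (dot(v, v) - radius * radius)
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t > 1e-4 else None

    def nearest_hit(orig, dirn, spheres):
        best = None
        for s in spheres:
            t = hit_sphere(orig, dirn, s["center"], s["radius"])
            if t is not None and (best is None or t < best[0]):
                best = (t, s)
        return best

    def trace(orig, dirn, spheres, light_pos, depth=3):
        # Whitted-style shading: a shadow ray toward the light plus a recursive
        # reflection ray.  Refraction rays are omitted here for brevity.
        hit = nearest_hit(orig, dirn, spheres)
        if hit is None:
            return (0.0, 0.0, 0.0)
        t, s = hit
        p = add(orig, scale(dirn, t))
        n = normalize(sub(p, s["center"]))
        to_light = sub(light_pos, p)
        light_dist = math.sqrt(dot(to_light, to_light))
        to_light = scale(to_light, 1.0 / light_dist)
        # Shadow ray: any opaque object between p and the light leaves p unlit.
        blocker = nearest_hit(p, to_light, spheres)
        lit = blocker is None or blocker[0] > light_dist
        diffuse = max(0.0, dot(n, to_light)) if lit else 0.0
        color = [c * diffuse for c in s["color"]]
        if depth > 0 and s["reflectivity"] > 0:
            r = normalize(sub(dirn, scale(n, 2 * dot(n, dirn))))  # mirror direction
            bounce = trace(p, r, spheres, light_pos, depth - 1)
            color = [c + s["reflectivity"] * rc for c, rc in zip(color, bounce)]
        return tuple(color)

    scene = [{"center": [0.0, -1.0, 4.0], "radius": 1.0,
              "color": [0.9, 0.1, 0.1], "reflectivity": 0.3},
             {"center": [1.5, 0.5, 5.0], "radius": 1.0,
              "color": [0.1, 0.1, 0.9], "reflectivity": 0.5}]
    pixel = trace([0.0, 0.0, 0.0], normalize([0.0, -0.2, 1.0]), scene, [5.0, 5.0, 0.0])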

Advantages over other rendering methods
In addition to the high degree of realism, ray tracing can simulate the effects of a camera due to depth of field and aperture shape (in this case a hexagon).
Ray tracing's popularity stems from its basis in a realistic simulation of lighting over other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to parallelization.[3]

Disadvantages
A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.
Although it does handle interreflection and
optical effects such as refraction accurately,
traditional ray tracing is also not necessarily
photorealistic. True photorealism occurs
when the rendering equation is closely
approximated or fully implemented.
Implementing the rendering equation gives
true photorealism, as the equation describes
every physical effect of light flow.
However, this is usually infeasible given the
computing resources required.
The realism of all rendering methods can be
evaluated as an approximation to the
equation. Ray tracing, if it is limited to
Whitted's algorithm, is not necessarily the
most realistic. Methods that trace rays, but
include additional techniques (photon
mapping, path tracing), give far more
accurate simulation of real-world lighting.
It is also possible to approximate the
equation using ray casting in a different way
than what is traditionally considered to be
"ray tracing". For performance, rays can be
clustered according to their direction, with
rasterization hardware and depth peeling
used to efficiently sum the rays.[4]

The number of reflections a ray can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to reflect up to 16 times. Multiple reflections of reflections can thus be seen. Created with Cobalt.

The number of refractions a ray can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to refract and reflect up to 9 times. Fresnel reflections were used. Also note the caustics. Created with Vray.

Reversed direction of traversal of scene by the rays


The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray
tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology.
Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards
ray tracing to mean shooting rays from the lights and gathering the results. Therefore it is clearer to distinguish
eye-based versus light-based ray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can
benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide
reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto
reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based
and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and
lights, and the paths subsequently joined by a connecting ray after some length.


Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic
photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of
3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the
scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface
points.[5][6] The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant
reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider
a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have
direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a
very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random
search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space
of rays.[7]
To the right is an image showing a simple example of a path of rays
recursively generated from the camera (or eye) to the light source using
the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into
the scene, where it hits a diffuse surface. From that surface the
algorithm recursively generates a reflection ray, which is traced
through the scene, where it hits another diffuse surface. Finally,
another reflection ray is generated and traced through the scene, where
it hits the light source and is absorbed. The color of the pixel now
depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source.
For example if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of
the pixel is blue.

Example
As a demonstration of the principles involved in raytracing, let us consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

$\lVert \mathbf{x} - \mathbf{c} \rVert^{2} = r^{2}.$

Any point on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

$\mathbf{x} = \mathbf{s} + t\,\mathbf{d},$

where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a light source) and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

$\lVert \mathbf{s} + t\,\mathbf{d} - \mathbf{c} \rVert^{2} = r^{2}.$

Let $\mathbf{v} = \mathbf{s} - \mathbf{c}$ for simplicity; then

$\lVert \mathbf{v} \rVert^{2} + t^{2} \lVert \mathbf{d} \rVert^{2} + 2 t\,\mathbf{v} \cdot \mathbf{d} = r^{2}.$

Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:

$t^{2} + 2 t\,\mathbf{v} \cdot \mathbf{d} + \lVert \mathbf{v} \rVert^{2} - r^{2} = 0.$

This quadratic equation has solutions

$t = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^{2} - (\lVert \mathbf{v} \rVert^{2} - r^{2})}.$

The two values of $t$ found by solving this equation are the two such that $\mathbf{s} + t\,\mathbf{d}$ are the points where the ray intersects the sphere.

Any value of $t$ which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from $\mathbf{s}$ with opposite direction).

If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least a positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object on our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.

The normal to the sphere is simply

$\mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\lVert \mathbf{y} - \mathbf{c} \rVert},$

where $\mathbf{y} = \mathbf{s} + t\,\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

$\mathbf{r} = \mathbf{d} - 2 (\mathbf{n} \cdot \mathbf{d})\,\mathbf{n}.$

Thus the reflected ray has equation

$\mathbf{x} = \mathbf{y} + u\,\mathbf{r}.$

Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.

This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of raytracing, but this demonstrates an example of the algorithms used.
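The algebra above translates almost line for line into code. The following Python sketch uses the same symbols (s, d, c, r, with v = s - c) purely for illustration; it is not part of the original example.

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def ray_sphere(s, d, c, r):
        # Solve t^2 + 2 t (v . d) + (|v|^2 - r^2) = 0 with v = s - c and |d| = 1.
        # Returns the two intersection distances, or None if the discriminant is negative.
        v = [si - ci for si, ci in zip(s, c)]
        b = dot(v, d)
        disc = b * b - (dot(v, v) - r * r)
        if disc < 0:
            return None
        root = math.sqrt(disc)
        return (-b - root, -b + root)     # only positive values correspond to real hits

    def reflect(d, n):
        # Mirror-reflect direction d about the unit normal n:  r = d - 2 (n . d) n.
        k = 2 * dot(n, d)
        return [di - k * ni for di, ni in zip(d, n)]

    # A unit ray along +z from the origin, aimed at a sphere centred at (0, 0, 5), r = 1:
    print(ray_sphere([0, 0, 0], [0, 0, 1], [0, 0, 5], 1.0))   # -> (4.0, 6.0)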

Adaptive depth control


Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. A certain maximum depth must always be set, or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 * 0.5 = 0.25, the third: 0.25 * 0.5 = 0.125, the fourth: 0.125 * 0.5 = 0.0625, the fifth: 0.0625 * 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D^2, which would also decrease the intensity contribution.
For a transmitted ray we could do something similar, but in that case the distance travelled through the object would cause an even faster intensity decrease. As an example of this, Hall & Greenberg[citation needed] found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
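The running product in this example is easy to reproduce. The tiny Python loop below simply replays the arithmetic with Kr = 0.5 (from the example above) and an assumed cut-off threshold of 0.01, stopping once the maximum possible contribution of a further bounce drops below the threshold.

    # Running product of reflection coefficients for Kr = 0.5; 0.01 is an assumed cut-off.
    kr, weight, depth, cutoff = 0.5, 1.0, 0, 0.01
    contributions = []
    while True:
        weight *= kr                  # maximum possible contribution of the next bounce
        depth += 1
        if weight < cutoff:           # adaptive cut-off: stop spawning reflection rays
            break
        contributions.append((depth, weight))
    print(contributions)
    # [(1, 0.5), (2, 0.25), (3, 0.125), (4, 0.0625), (5, 0.03125), (6, 0.015625)]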


Bounding volumes
We enclose groups of objects in sets of hierarchical bounding volumes and first test for intersection with the
bounding volume, and then only if there is an intersection, against the objects enclosed by the volume.
Bounding volumes should be easy to test for intersection, for example a sphere or box (slab). The best bounding
volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and
thin then a sphere will enclose mainly empty space and a box is much better. Boxes are also easier for hierarchical
bounding volumes.
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational
time from a linear dependence on the number of objects to something between linear and a logarithmic dependence.
This is because, for a perfect case, each intersection test would divide the possibilities by two, and we would have a
binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
Subtrees should contain objects that are near each other and the further down the tree the closer should be the
objects.
The volume of each node should be minimal.
The sum of the volumes of all bounding volumes should be minimal.
Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove
more potential objects than one farther down the tree.
The time spent constructing the hierarchy should be much less than the time saved by using it.
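A sketch of the hierarchical test described above, in Python: each node carries a bounding sphere, a subtree is descended only if its bounding sphere is hit, and leaves delegate to the enclosed object's own intersection routine. The node layout and the per-object intersect method are illustrative assumptions, not a standard API.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def hits_bounding_sphere(orig, dirn, center, radius):
        # Cheap rejection test: does the (unit-direction) ray pass within 'radius' of 'center'?
        v = [c - o for c, o in zip(center, orig)]
        t = max(0.0, dot(v, dirn))                      # closest approach along the ray
        closest = [o + t * d for o, d in zip(orig, dirn)]
        dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        return dist2 <= radius * radius

    def intersect_bvh(node, orig, dirn):
        # Descend the hierarchy only where the bounding volume is hit; returns the
        # nearest (distance, object) pair found among the leaves, or None.
        if not hits_bounding_sphere(orig, dirn, node["center"], node["radius"]):
            return None                                 # prune the entire subtree
        if "object" in node:                            # leaf: test the enclosed object
            t = node["object"].intersect(orig, dirn)    # assumed per-object ray test
            return (t, node["object"]) if t is not None else None
        best = None
        for child in node["children"]:
            hit = intersect_bvh(child, orig, dirn)
            if hit is not None and (best is None or hit[0] < best[0]):
                best = hit
        return best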

In real time
The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer graphics
conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system.
Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel
network distributed ray-tracing system that achieved several frames per second in rendering performance.[8] This
performance was attained by means of the highly optimized yet platform independent LIBRT ray-tracing engine in
BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a
commodity network. BRL-CAD's ray-tracer, including REMRT/RT tools, continue to be available and developed
today as Open source software.
Since then, there have been considerable efforts and research towards implementing ray tracing in real time speeds
for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics
applications such as demoscene productions, computer and video games, and image rendering. Some real-time
software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.
The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in
order to offer an alternative to the current rasterisation based approach for interactive 3D graphics. Ray tracing
hardware, such as the experimental Ray Processing Unit developed at the Saarland University, has been designed to
accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of
Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be
rendered via ray tracing without intensive resource usage.
On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray
Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14-29 frames per
second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93GHz.
At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API
exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive
intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport,

and many other rendering algorithms that cannot be implemented with tail recursion. Nvidia has shipped over
350,000,000 OptiX capable GPUs as of April 2013. OptiX-based renderers are used in Adobe AfterEffects,
Bunkspeed Shot, Autodesk Maya, 3ds max, and many other renderers.
Imagination Technologies offers a free API called OpenRL which accelerates tail recursive ray tracing-based
rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to
provide what 3D World calls "real-time raytracing to the everyday artist".[9]

References
[1] Appel A. (1968) Some techniques for shading machine renderings of solids (http://graphics.stanford.edu/courses/Appel.pdf). AFIPS Conference Proc. 32, pp. 37–45
[2] Whitted T. (1979) An improved illumination model for shaded display (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534). Proceedings of the 6th annual conference on Computer graphics and interactive techniques
[3] A. Chalmers, T. Davis, and E. Reinhard. Practical parallel rendering, ISBN 1-56881-179-9. AK Peters, Ltd., 2002.
[4] GPU Gems 2, Chapter 38. High-Quality Global Illumination Rendering Using Rasterization, Addison-Wesley (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html)
[5] Global Illumination using Photon Maps (http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf)
[6] Photon Mapping - Zack Waters (http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html)
[7] http://graphics.stanford.edu/papers/metro/metro.pdf
[8] See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp. 86–98.
[9] 3D World, April 2013

External links
What is ray tracing ? (http://www.codermind.com/articles/Raytracer-in-C++
-Introduction-What-is-ray-tracing.html)
Ray Tracing and Gaming - Quake 4: Ray Traced Project (http://www.pcper.com/reviews/Graphics-Cards/
Ray-Tracing-and-Gaming-Quake-4-Ray-Traced-Project)
Ray tracing and Gaming - One Year Later (http://www.pcper.com/reviews/Processors/
Ray-Tracing-and-Gaming-One-Year-Later)
Interactive Ray Tracing: The replacement of rasterization? (http://www.few.vu.nl/~kielmann/theses/
avdploeg.pdf)
A series of tutorials on implementing a raytracer using C++ (http://devmaster.net/posts/2836/
raytracing-theory-implementation-part-1-introduction)
Tutorial on implementing a raytracer in PHP (http://quaxio.com/raytracer/)
The Compleat Angler (1978) (http://www.youtube.com/watch?v=WV4qXzM641o)
Writing a Simple Ray Tracer (scratchapixel) (http://scratchapixel.com/lessons/3d-basic-lessons/
lesson-1-writing-a-simple-raytracer/)

Reflection
Reflection in computer graphics is used to emulate reflective objects
like mirrors and shiny surfaces.
Reflection is accomplished in a ray trace renderer by following a ray
from the eye to the mirror and then calculating where it bounces from,
and continuing the process until no surface is found, or a non-reflective
surface is found. Reflection on a shiny surface like wood or tile can
add to the photorealistic effects of a 3D rendering.
Polished - A Polished Reflection is an undisturbed reflection, like a mirror or chrome.
Blurry - A Blurry Reflection means that tiny random bumps on the surface of the material cause the reflection to be blurry.
Metallic - A reflection is Metallic if the highlights and reflections retain the color of the reflective object.
Glossy - This term can be misused. Sometimes it is a setting which is the opposite of Blurry. (When "Glossiness" has a low value, the reflection is blurry.) However, some people use the term "Glossy Reflection" as a synonym for "Blurred Reflection." Glossy used in this context means that the reflection is actually blurred.
Ray traced model demonstrating specular reflection.

Examples
Polished or Mirror reflection
Mirrors are usually almost 100% reflective.

Mirror on wall rendered with 100% reflection.


Metallic Reflection
Normal (nonmetallic) objects reflect light and colors in the original color of the object being reflected. Metallic objects reflect lights and colors altered by the color of the metallic object itself.
The large sphere on the left is blue with its reflection marked as metallic. The large sphere on the right is the same color but does not have the metallic property selected.

Blurry Reflection
Many materials are imperfect reflectors, where the reflections are blurred to various degrees due to surface roughness that scatters the rays of the reflections.
The large sphere on the left has sharpness set to 100%. The sphere on the right has sharpness set to 50% which creates a blurry reflection.


Glossy Reflection
Fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.
The sphere on the left has normal, metallic reflection. The sphere on the right has the same parameters, except that the reflection is marked as "glossy".

Reflection mapping
In computer graphics, environment mapping, or reflection mapping,
is an efficient image-based lighting technique for approximating the
appearance of a reflective surface by means of a precomputed texture
image. The texture is used to store the image of the distant
environment surrounding the rendered object.
An example of reflection mapping.
Several ways of storing the surrounding environment are employed. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball. It has been almost entirely surpassed by cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures or unfolded into six square regions of a single texture.
Other projections that have some superior mathematical or computational properties include the paraboloid mapping,
the pyramid mapping, the octahedron mapping, and the HEALPix mapping.
The reflection mapping approach is more efficient than the classical ray tracing approach of computing the exact
reflection by tracing a ray and following its optical path. The reflection color used in the shading computation at a
pixel is determined by calculating the reflection vector at the point on the object and mapping it to the texel in the
environment map. This technique often produces results that are superficially similar to those generated by
raytracing, but is less computationally expensive, since the radiance value of the reflection comes from calculating
the angles of incidence and reflection followed by a texture lookup, rather than from tracing a ray against the
scene geometry and computing the radiance of the ray, simplifying the GPU workload.
However in most circumstances a mapped reflection is only an approximation of the real reflection. Environment
mapping relies on two assumptions that are seldom satisfied:

1) All radiance incident upon the object being shaded comes from an infinite distance. When this is not the case, the reflection of nearby geometry appears in the wrong place on the reflected object. When it is the case, no parallax is seen in the reflection.
2) The object being shaded is convex, such that it contains no self-interreflections. When this is not the case, the object does not appear in the reflection; only the environment does.
Reflection mapping is also a traditional image-based lighting technique for creating reflections of real-world
backgrounds on synthetic objects.
Environment mapping is generally the fastest method of rendering a reflective surface. To further increase the speed
of rendering, the renderer may calculate the position of the reflected ray at each vertex. Then, the position is
interpolated across polygons to which the vertex is attached. This eliminates the need for recalculating every pixel's
reflection direction.
If normal mapping is used, each polygon has many face normals (the direction a given point on a polygon is facing),
which can be used in tandem with an environment map to produce a more realistic reflection. In this case, the angle
of reflection at a given point on a polygon will take the normal map into consideration. This technique is used to
make an otherwise flat surface appear textured, for example corrugated metal, or brushed aluminium.

Types of reflection mapping


Sphere mapping
Sphere mapping represents the sphere of incident illumination as though it were seen in the reflection of a reflective
sphere through an orthographic camera. The texture image can be created by approximating this ideal setup, or using
a fisheye lens or via prerendering a scene with a spherical mapping.
The spherical mapping suffers from limitations that detract from the realism of resulting renderings. Because
spherical maps are stored as azimuthal projections of the environments they represent, an abrupt point of singularity
(a black hole effect) is visible in the reflection on the object where texel colors at or near the edge of the map are
distorted due to inadequate resolution to represent the points accurately. The spherical mapping also wastes pixels
that are in the square but not in the sphere.
The artifacts of the spherical mapping are so severe that it is effective only for viewpoints near that of the virtual
orthographic camera.


Cube mapping
Cube mapping and other polyhedron mappings address the severe
distortion of sphere maps. If cube maps are made and filtered correctly,
they have no visible seams, and can be used independent of the
viewpoint of the often-virtual camera acquiring the map. Cube and
other polyhedron maps have since superseded sphere maps in most
computer graphics applications, with the exception of acquiring
image-based lighting.
Generally, cube mapping uses the same skybox that is used in outdoor
renderings. Cube mapped reflection is done by determining the vector
that the object is being viewed at. This camera ray is reflected about
the surface normal of where the camera vector intersects the object.
This results in the reflected ray which is then passed to the cube map to
get the texel which provides the radiance value used in the lighting
calculation. This creates the effect that the object is reflective.
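The reflect-then-look-up step can be sketched as follows in Python. The face and (u, v) orientation follow the common "largest axis" layout used by OpenGL-style cube maps, but conventions differ between APIs, so treat the orientation details as illustrative.

    def reflect(view, normal):
        # Reflect the (unit) view vector about the (unit) surface normal.
        d = sum(v * n for v, n in zip(view, normal))
        return [v - 2 * d * n for v, n in zip(view, normal)]

    def cube_map_lookup(r):
        # Map a reflected direction to a cube-map face and (u, v) in [0, 1] x [0, 1],
        # selecting the face by the largest-magnitude axis of the direction.
        x, y, z = r
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            face, u, v, m = ("+x", -z, -y, ax) if x > 0 else ("-x", z, -y, ax)
        elif ay >= az:
            face, u, v, m = ("+y", x, z, ay) if y > 0 else ("-y", x, -z, ay)
        else:
            face, u, v, m = ("+z", x, -y, az) if z > 0 else ("-z", -x, -y, az)
        return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

    # A view ray hitting a surface whose normal points along +y:
    r = reflect([0.0, -1.0, 1.0], [0.0, 1.0, 0.0])      # -> [0.0, 1.0, 1.0]
    print(cube_map_lookup(r))                           # -> ('+y', 0.5, 1.0)

The returned texel coordinate is then used to fetch the radiance value that stands in for the true reflection.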

A diagram depicting an apparent reflection being


provided by cube mapped reflection. The map is
actually projected onto the surface from the point
of view of the observer. Highlights which in
raytracing would be provided by tracing the ray
and determining the angle made with the normal,
can be 'fudged', if they are manually painted into
the texture field (or if they already appear there
depending on how the texture map was obtained),
from where they will be projected onto the
mapped object along with the rest of the texture
detail.

HEALPix mapping
HEALPix environment mapping is similar to the other polyhedron
mappings, but can be hierarchical, thus providing a unified framework
for generating polyhedra that better approximate the sphere. This
allows lower distortion at the cost of increased computation.[1]

History
Precursor work in texture mapping had been established by Edwin Catmull, with refinements for curved surfaces by James Blinn, in 1974.[2] Blinn went on to further refine his work, developing environment mapping by 1976.[3]
Gene Miller experimented with spherical environment mapping in
1982 at MAGI Synthavision.

Example of a three-dimensional model using cube mapped reflection.

Wolfgang Heidrich introduced Paraboloid Mapping in 1998.[4]


Emil Praun introduced Octahedron Mapping in 2003.[5]
Mauro Steigleder introduced Pyramid Mapping in 2005.[6]
Tien-Tsin Wong, et al. introduced the existing HEALPix mapping for rendering in 2006.


References
[1] Tien-Tsin Wong, Liang Wan, Chi-Sing Leung, and Ping-Man Lam. Real-time Environment Mapping with Equal Solid-Angle Spherical Quad-Map (http://appsrv.cse.cuhk.edu.hk/~lwan/paper/sphquadmap/sphquadmap.htm), Shader X4: Lighting & Rendering, Charles River Media, 2006
[2] http://www.comphist.org/computing_history/new_page_6.htm
[3] http://www.debevec.org/ReflectionMapping/
[4] Heidrich, W., and H.-P. Seidel. "View-Independent Environment Maps." Eurographics Workshop on Graphics Hardware 1998, pp. 39–45.
[5] Emil Praun and Hugues Hoppe. "Spherical parametrization and remeshing." ACM Transactions on Graphics, 22(3):340–349, 2003.
[6] Mauro Steigleder. "Pencil Light Transport." A thesis presented to the University of Waterloo, 2005.

External links
The Story of Reflection mapping (http://www.pauldebevec.com/ReflectionMapping/) by Paul Debevec
NVIDIA's paper (http://developer.nvidia.com/attach/6595) about sphere & cube env. mapping

Relief mapping
In computer graphics, relief mapping is a texture mapping technique used to render the surface details of three
dimensional objects accurately and efficiently. It can produce accurate depictions of self-occlusion, self-shadowing,
and parallax. It is a form of short-distance ray trace done in a pixel shader.[citation needed] Relief mapping is closely
comparable in both function and approach to another displacement texture mapping technique, parallax occlusion
mapping, since both rely on ray traces; the two should not be confused with each other, however, as
parallax occlusion mapping uses reverse heightmap tracing.
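In practice the per-pixel ray trace amounts to a short march through a depth map in tangent space. The Python sketch below shows a plain linear search; a real relief-mapping shader adds a binary-search refinement and runs on the GPU, and the depth map, step count and sign conventions here are illustrative assumptions.

    def relief_march(depth_at, start_uv, view_ts, depth_scale=0.1, steps=32):
        # Linear search along the view ray in tangent space for the first sample at
        # which the ray has sunk below the stored depth.
        du = -view_ts[0] / view_ts[2] * depth_scale     # uv offset per unit of depth
        dv = -view_ts[1] / view_ts[2] * depth_scale
        u, v = start_uv
        for i in range(1, steps + 1):
            t = i / steps                               # normalized ray depth in [0, 1]
            su, sv = u + du * t, v + dv * t
            if depth_at(su, sv) <= t:                   # ray is now below the surface
                return (su, sv)
        return (u + du, v + dv)

    # Toy depth map: a shallow plateau in the middle of the texture (0 = top surface).
    bump = lambda s, t: 0.5 if 0.4 < s < 0.6 and 0.4 < t < 0.6 else 1.0
    print(relief_march(bump, (0.45, 0.45), (0.3, 0.3, -1.0)))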

References
External links
Manuel's Relief texture mapping (http://www.inf.ufrgs.br/~oliveira/RTM.html)

Render Output unit


The render output unit, often abbreviated as "ROP", and sometimes called (perhaps more properly) the raster
operations pipeline, is one of the final steps in the rendering process of modern 3D accelerator boards. The pixel
pipelines take pixel and texel information and process it, via specific matrix and vector operations, into a final pixel
or depth value. The ROPs perform the transactions between the relevant buffers in the local memory; this includes
writing or reading values, as well as blending them together.
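One of the operations a ROP applies during that read-modify-write step is conventional alpha blending. The following is a minimal Python sketch of the common "source over" blend mode, purely for illustration; fixed-function blending hardware supports many other blend equations as well.

    def blend_source_over(src_rgba, dst_rgb):
        # 'Source over' blending: the incoming fragment is weighted by its alpha and
        # combined with the colour already stored in the framebuffer.
        sr, sg, sb, sa = src_rgba
        dr, dg, db = dst_rgb
        return (sr * sa + dr * (1.0 - sa),
                sg * sa + dg * (1.0 - sa),
                sb * sa + db * (1.0 - sa))

    # Writing a half-transparent red fragment over a grey background pixel:
    print(blend_source_over((1.0, 0.0, 0.0, 0.5), (0.5, 0.5, 0.5)))   # (0.75, 0.25, 0.25)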
Historically the numbers of ROPs, TMUs, and pixel shaders have been equal. However, as of 2004, several GPUs
have decoupled these areas to allow optimum transistor allocation for application workload and available memory
performance. As the trend continues, it is expected that graphics processors will continue to decouple the various
parts of their architectures to enhance their adaptability to future graphics applications. This design also allows chip
makers to build a modular line-up, where the top-end GPUs essentially use the same logic as the low-end
products.

Rendering
Rendering is the process of generating an image from a model (or models in what
collectively could be called a scene file), by means of computer programs. Also, the
results of such a model can be called a rendering. A scene file contains objects in a
strictly defined language or data structure; it would contain geometry, viewpoint,
texture, lighting, and shading information as a description of the virtual scene. The
data contained in the scene file is then passed to a rendering program to be
processed and output to a digital image or raster graphics image file. The term
"rendering" may be by analogy with an "artist's rendering" of a scene. Though the
technical details of rendering methods vary, the general challenges to overcome in
producing a 2D image from a 3D representation stored in a scene file are outlined as
the graphics pipeline along a rendering device, such as a GPU. A GPU is a
purpose-built device able to assist a CPU in performing complex rendering
calculations. If a scene is to look relatively realistic and predictable under virtual
lighting, the rendering software should solve the rendering equation. The rendering
equation doesn't account for all lighting phenomena, but is a general lighting model
for computer-generated imagery. 'Rendering' is also used to describe the process of
calculating effects in a video editing program to produce final video output.
Rendering is one of the major sub-topics of 3D computer graphics, and in practice is
always connected to the others. In the graphics pipeline, it is the last major step,
giving the final appearance to the models and animation. With the increasing
sophistication of computer graphics since the 1970s, it has become a more distinct
subject.
A variety of rendering techniques applied to a single 3D scene

Rendering

166

Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, some are free open-source projects. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to: light physics, visual perception, mathematics and software development.
An image created by using POV-Ray 3.6.
In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.

Usage
When the pre-image (a wireframe sketch usually) is complete, rendering is used, which adds in bitmap textures or
procedural textures, lights, bump mapping and relative position to other objects. The result is a completed image the
consumer or intended viewer sees.
For movie animations, several images (frames) must be rendered, and stitched together in a program capable of
making an animation of this sort. Most 3D image editing programs can do this.

Features
A rendered image can be understood in terms of a number of visible
features. Rendering research and development has been largely
motivated by finding ways to simulate these efficiently. Some relate
directly to particular algorithms and techniques, while others are
produced together.
shading - how the color and brightness of a surface varies with lighting
texture-mapping - a method of applying detail to surfaces
bump-mapping - a method of simulating small-scale bumpiness on surfaces
fogging/participating medium - how light dims when passing through non-clear atmosphere or air
shadows - the effect of obstructing light
soft shadows - varying darkness caused by partially obscured light sources
reflection - mirror-like or highly glossy reflection
transparency (optics), transparency (graphic) or opacity - sharp transmission of light through solid objects
translucency - highly scattered transmission of light through solid objects
refraction - bending of light associated with transparency
diffraction - bending, spreading and interference of light passing by an object or aperture that disrupts the ray
indirect illumination - surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
caustics (a form of indirect illumination) - reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
depth of field - objects appear blurry or out of focus when too far in front of or behind the object in focus
motion blur - objects appear blurry due to high-speed motion, or the motion of the camera
non-photorealistic rendering - rendering of scenes in an artistic style, intended to look like a painting or drawing
Image rendered with computer aided design.

Techniques
Many rendering algorithms have been researched, and software used for rendering may employ a number of different
techniques to obtain a final image.
Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous
amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the
sampling is not intelligently restricted.
Therefore, a few loose families of more-efficient light transport modelling techniques have emerged:
rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without
advanced optical effects;
ray casting considers the scene as observed from a specific point-of-view, calculating the observed image based
only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to
reduce artifacts;
ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo
techniques to obtain more realistic results at a speed that is often orders of magnitude slower.
The fourth type of light transport technique, radiosity, is not usually implemented as a rendering technique, but
instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are
usually rendered to the display using one of the other three techniques.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order
algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually
fewer objects in a scene than pixels.

Scanline rendering and rasterisation


A high-level representation of an image necessarily contains elements
in a different domain from pixels. These elements are referred to as
primitives. In a schematic drawing, for instance, line segments and
curves might be primitives. In a graphical user interface, windows and
buttons might be the primitives. In rendering of 3D models, triangles
and polygons in space might be primitives.
Rendering of the European Extremely Large Telescope.
If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.
Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of
primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second,
rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels

occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the
approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce
higher-quality images and is more versatile because it does not depend on as many assumptions about the image as
rasterization.
The older form of rasterization is characterized by rendering an entire face (primitive) as a single color.
Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and
then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken
the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by
face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there
is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics
card's more taxing shading functions and still achieves better performance because the simpler textures stored in
memory use less space. Sometimes designers will use one rasterization method on some faces and the other method
on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the
overall effect.
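The "blending of the vertex colors" mentioned above is usually done with barycentric weights. The following Python sketch fills a 2D triangle and interpolates the three vertex colours per pixel, in contrast with flat shading, which would assign every covered pixel the same colour; the triangle and resolution are arbitrary, and a real rasterizer works on the GPU with many further optimizations.

    def edge(a, b, p):
        # Twice the signed area of triangle (a, b, p): the building block of barycentric weights.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def rasterize(v0, v1, v2, c0, c1, c2, width, height):
        # Fill a 2D triangle, blending the three vertex colours per pixel; flat shading
        # would instead write a single colour for every covered pixel.
        area = edge(v0, v1, v2)
        pixels = {}
        for y in range(height):
            for x in range(width):
                p = (x + 0.5, y + 0.5)
                w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
                inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)
                if inside:
                    w0, w1, w2 = w0 / area, w1 / area, w2 / area
                    pixels[(x, y)] = tuple(w0 * a + w1 * b + w2 * c
                                           for a, b, c in zip(c0, c1, c2))
        return pixels

    # A triangle with pure red, green and blue vertices on a 32 x 32 canvas:
    img = rasterize((2, 2), (28, 4), (15, 28), (1, 0, 0), (0, 1, 0), (0, 0, 1), 32, 32)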

Ray casting
In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view
outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may
be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes
the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify
the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To
reduce artifacts, a number of rays in slightly different directions may be averaged.
Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the
object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light
source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated.
Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.
Raycasting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon
animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain
better performance in the computational stage. This is usually the case when a large number of frames need to be
animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if
objects in the scene were all painted with matte finish.

168

Rendering

169

Ray tracing
Ray tracing aims to simulate the natural flow of light,
interpreted as particles. Often, ray tracing methods are
utilized to approximate the solution to the rendering
equation by applying Monte Carlo methods to it. Some
of the most used methods are path tracing, bidirectional
path tracing, or Metropolis light transport, but also semi
realistic methods are in use, like Whitted Style Ray
Tracing, or hybrids. While most implementations let
light propagate on straight lines, applications exist to
simulate relativistic spacetime effects.
In a final, production quality rendering of a ray traced
work, multiple rays are generally shot for each pixel,
and traced not just to the first object of intersection, but
rather, through a number of sequential 'bounces', using
the known laws of optics such as "angle of incidence
equals angle of reflection" and more advanced laws that
deal with refraction and surface roughness.

Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language.
Once the ray either encounters a light source, or more probably once a set limiting number of bounces has
been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and
the changes along the way through the various bounces evaluated to estimate a value observed at the point of view.
This is all repeated for each sample, for each pixel.
In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however,
only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.
As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to
consider for short films of any degree of quality, although it has been used for special effects sequences, and in
advertising, where a short portion of high quality (perhaps even photorealistic) footage is required.
However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not
high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is
now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which
show use of real-time software or hardware ray tracing.

Radiosity
Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light
sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the
'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.
The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a
large spectrum of directions and illuminates the area around it.
The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply
illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity
estimation is coupled with a high quality ray tracing algorithm, images may exhibit convincing realism, particularly
for indoor scenes.

In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces
in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring
of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes
including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or
ray-tracing model.
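As an illustration of this finite-element iteration (a sketch only, under the usual assumption that the patch-to-patch
form factors F_ij have already been computed; all names are hypothetical), one gathering step of the classic system
B_i = E_i + rho_i * sum_j F_ij B_j can be written as:

#include <vector>

// One gathering iteration of the radiosity system B_i = E_i + rho_i * sum_j F_ij * B_j.
// 'emission' and 'reflectance' hold E_i and rho_i per patch; 'formFactor[i][j]' is F_ij.
std::vector<double> radiosity_iteration(const std::vector<double>& emission,
                                        const std::vector<double>& reflectance,
                                        const std::vector<std::vector<double>>& formFactor,
                                        const std::vector<double>& current) {
    std::vector<double> next(current.size());
    for (size_t i = 0; i < current.size(); ++i) {
        double gathered = 0.0;
        for (size_t j = 0; j < current.size(); ++j)
            gathered += formFactor[i][j] * current[j];     // light arriving from patch j
        next[i] = emission[i] + reflectance[i] * gathered; // bounce it back into the scene
    }
    return next;  // repeat until the solution stops changing, i.e. the recursion limit is reached
}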
Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the
standardization of rapid radiosity calculation, some graphic artists used a technique referred to loosely as false
radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via
self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be
reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without
examining the contribution that complex objects make to the radiosity, or complex objects may be replaced in the
radiosity calculation with simpler objects of similar size and texture.
Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful
for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be
reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without
seriously impacting the overall rendering time-per-frame.
Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from
beginning-to-end to create a large number of well-known recent feature-length animated 3D-cartoon films.

Sampling and filtering


One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem.
Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite
number of pixels. As a consequence of the Nyquist-Shannon sampling theorem (or Kotelnikov theorem), any spatial
waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In
simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that
are smaller than one pixel.
If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly
aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects
where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce
good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a
process called antialiasing.
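A common way to apply such a low-pass filter is supersampling: the image function is evaluated at several positions
inside each pixel and the results are averaged (a box filter). A minimal sketch, assuming a caller-supplied continuous
image function:

#include <functional>

// Box-filtered supersampling: average n*n evaluations of a continuous image
// function inside the pixel (x, y). 'image' maps continuous coordinates to intensity.
double antialiased_pixel(const std::function<double(double, double)>& image,
                         int x, int y, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double u = x + (i + 0.5) / n;   // sample positions spread across the pixel
            double v = y + (j + 0.5) / n;
            sum += image(u, v);
        }
    return sum / (n * n);                   // low-pass filtered (averaged) value
}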

Optimization
Optimizations used by an artist when a scene is being developed
Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the
portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting
may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the
scene at high detail, and to remove objects that are not important to what is currently being developed.

Common optimizations for real time rendering


For real-time rendering, it is appropriate to simplify one or more common approximations and tune them to the exact
parameters of the scenery in question, which are in turn tuned to the agreed parameters, to get the most 'bang for the buck'.


Academic core
The implementation of a realistic renderer always has some basic element of physical simulation or emulation:
some computation which resembles or abstracts a real physical process.
The term "physically based" indicates the use of physical models and approximations that are more general and
widely accepted outside rendering. A particular set of related techniques have gradually become established in the
rendering community.
The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or
approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and
practicality, an implementation will be a complex combination of different techniques.
Rendering research is concerned with both the adaptation of scientific models and their efficient application.

The rendering equation


This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the
non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations
of this equation.
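In its most commonly quoted form it reads:

Lo(x, ω) = Le(x, ω) + ∫Ω fr(x, ω′, ω) Li(x, ω′) (ω′ · n) dω′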

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the
reflected light. The reflected light being the sum of the incoming light (Li) from all directions, multiplied by the
surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this
equation stands for the whole 'light transport', that is, all the movement of light in a scene.

The bidirectional reflectance distribution function


The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a
surface as follows:
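In its standard definition the BRDF is the ratio of the reflected radiance leaving the surface to the irradiance arriving
from a given incoming direction:

fr(x, ωi, ωo) = dLo(x, ωo) / ( Li(x, ωi) (ωi · n) dωi )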

Light interaction is often approximated by the even simpler models of diffuse reflection and specular reflection,
although both can also be expressed as BRDFs.

Geometric optics
Rendering is practically exclusively concerned with the particle aspect of light physics known as geometric
optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave
aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect
phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both
types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.

Visual perception
Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is
mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost
infinite range of light brightness and color, but current displays (movie screen, computer monitor, etc.) cannot
handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not
need to be given large-range images to create realism. This can help solve the problem of fitting images into
displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties
won't be noticeable. This related subject is tone mapping.


Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and
Monte Carlo methods.
Rendering for movies often takes place on a network of tightly connected computers known as a render farm.
The current state of the art in 3-D image description for movie creation is the mental ray scene description language
designed at mental images and the RenderMan shading language designed at Pixar[1] (compare with simpler 3D
file formats such as VRML, or APIs such as OpenGL and DirectX, which are tailored for 3D hardware accelerators).
Other renderers (including proprietary ones) can be, and sometimes are, used, but most other renderers tend to miss
one or more of the often-needed features like good texture filtering, texture caching, programmable shaders, high-end
geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing
with geometry caching, high-quality shadow mapping, and speed or patent-free implementations. Other highly sought
features these days may include IPR and hardware rendering/shading.

Chronology of important published ideas


1968: Ray casting
1970: Scanline rendering
1971: Gouraud shading
1974: Texture mapping
1974: Z-buffering
1975: Phong shading
1976: Environment mapping
1977: Shadow volumes
1978: Shadow buffer
1978: Bump mapping
1980: BSP trees
1980: Ray tracing
1981: Cook shader
1983: MIP maps
1984: Octree ray tracing
1984: Alpha compositing
1984: Distributed ray tracing
1984: Radiosity
1985: Hemicube radiosity
1986: Light source tracing
1986: Rendering equation
1987: Reyes rendering
1991: Hierarchical radiosity
1993: Tone mapping
1993: Subsurface scattering
1995: Photon mapping
1997: Metropolis light transport
1997: Instant Radiosity
2002: Precomputed Radiance Transfer

Rendering of an ESTCube-1 satellite.


References
[1] A brief introduction to RenderMan (http:/ / portal. acm. org/ citation. cfm?id=1185817& jmp=abstract& coll=GUIDE& dl=GUIDE)

Further reading
Pharr, Matt; Humphreys, Greg (2004). Physically based rendering from theory to implementation. Amsterdam:
Elsevier/Morgan Kaufmann. ISBN0-12-553180-X.
Shirley, Peter; Morley, R. Keith (2003). Realistic ray tracing (2 ed.). Natick, Mass.: AK Peters.
ISBN1-56881-198-5.
Dutré, Philip; Bekaert, Philippe; Bala, Kavita (2003). Advanced global illumination ([Online-Ausg.] ed.). Natick,
Mass.: A K Peters. ISBN1-56881-177-2.
Akenine-Möller, Tomas; Haines, Eric (2004). Real-time rendering (2 ed.). Natick, Mass.: AK Peters.
ISBN1-56881-182-9.
Strothotte, Thomas; Schlechtweg, Stefan (2002). Non-photorealistic computer graphics modeling, rendering, and
animation (2 ed.). San Francisco, CA: Morgan Kaufmann. ISBN1-55860-787-0.
Gooch, Bruce; Gooch, Amy (2001). Non-photorealistic rendering. Natick, Mass.: A K Peters.
ISBN1-56881-133-0.
Jensen, Henrik Wann (2001). Realistic image synthesis using photon mapping ([Nachdr.] ed.). Natick, Mass.: AK
Peters. ISBN1-56881-147-0.
Blinn, Jim (1996). Jim Blinn's corner : a trip down the graphics pipeline. San Francisco, Calif.: Morgan
Kaufmann Publishers. ISBN1-55860-387-5.
Glassner, Andrew S. (2004). Principles of digital image synthesis (2 ed.). San Francisco, Calif.: Kaufmann.
ISBN1-55860-276-3.
Cohen, Michael F.; Wallace, John R. (1998). Radiosity and realistic image synthesis (3 ed.). Boston, Mass. [u.a.]:
Academic Press Professional. ISBN0-12-178270-0.
Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer graphics : principles and practice (2 ed.). Reading,
Mass.: Addison-Wesley. ISBN0-201-12110-7.
Andrew S. Glassner, ed. (1989). An introduction to ray tracing (3 ed.). London [u.a.]: Acad. Press.
ISBN0-12-286160-4.
Description of the 'Radiance' system (http://radsite.lbl.gov/radiance/papers/sg94.1/)

External links
SIGGRAPH (http://www.siggraph.org/), the ACM's special interest group in graphics, the largest academic
and professional association and conference.
http://www.cs.brown.edu/~tor/ A list of links to (recent) SIGGRAPH papers (and some others) on the web.


Retained mode
In computing, retained mode rendering is a style for application programming interfaces of graphics libraries, in
which the libraries retain a complete model of the objects to be rendered.[1]

Overview
By using a "retained mode" approach, client calls do not directly cause actual rendering, but instead update an
internal model (typically a list of objects) which is maintained within the library's data space. This allows the library
to optimize when actual rendering takes place along with the processing of related objects.
Some techniques to optimize rendering include:[citation needed]
managing double buffering
performing occlusion culling
only transferring data that has changed from one frame to the next from the application to the library
Immediate mode is an alternative approach; the two styles can coexist in the same library and are not necessarily
exclusionary in practice. For example, OpenGL has immediate mode functions that can use previously defined server
side objects (textures, vertex and index buffers, shaders, etc.) without resending unchanged data.[citation needed]
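As an illustration of the style, a toy retained-mode interface (purely hypothetical, not modelled on any particular
library) might be used as follows: the client only edits a model held by the library, and drawing happens when the
library chooses.

#include <memory>
#include <string>
#include <vector>

// A toy retained-mode interface: the library keeps the scene objects and decides
// when and how to draw them; the client only edits the retained model.
struct SceneObject {
    std::string mesh;
    float x = 0, y = 0, z = 0;
};

class RetainedScene {
public:
    std::shared_ptr<SceneObject> add(const std::string& mesh) {
        auto obj = std::make_shared<SceneObject>();
        obj->mesh = mesh;
        objects_.push_back(obj);
        return obj;                      // the client keeps a handle and mutates it later
    }
    void renderFrame() {
        // The library walks its own object list here; it can reorder, cull,
        // double-buffer, or skip unchanged data without the client's involvement.
        for (const auto& obj : objects_) { (void)obj; /* draw obj */ }
    }
private:
    std::vector<std::shared_ptr<SceneObject>> objects_;
};

int main() {
    RetainedScene scene;
    auto teapot = scene.add("teapot");   // updates the retained model, draws nothing yet
    teapot->x = 2.0f;                    // later edits only touch the model
    scene.renderFrame();                 // actual rendering happens when the library decides
}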

References
[1] Retained Mode Versus Immediate Mode (http:/ / msdn. microsoft. com/ en-us/ library/ windows/ desktop/ ff684178(v=vs. 85). aspx)

Scanline rendering
Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a
row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are
first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed
using the intersection of a scan line with the polygons on the front of the sorted list, while the sorted list is updated to
discard no-longer-visible polygons as the active scan line is advanced down the picture.
The main advantage of this method is that sorting vertices along the normal of the scanning plane reduces the
number of comparisons between edges. Another advantage is that it is not necessary to translate the coordinates of
all vertices from the main memory into the working memory; only vertices defining edges that intersect the current
scan line need to be in active memory, and each vertex is read in only once. The main memory is often very slow
compared to the link between the central processing unit and cache memory, and thus avoiding re-accessing vertices
in main memory can provide a substantial speedup.
This kind of algorithm can be easily integrated with many other graphics techniques, such as the Phong reflection
model or the Z-buffer algorithm.

Algorithm
The usual method starts with edges of projected polygons inserted into buckets, one per scanline; the rasterizer
maintains an active edge table (AET). Entries maintain sort links, X coordinates, gradients, and references to the
polygons they bound. To rasterize the next scanline, the edges no longer relevant are removed; new edges from the
current scanline's Y-bucket are added, inserted sorted by X coordinate. The active edge table entries have X and
other parameter information incremented. Active edge table entries are maintained in an X-sorted list by bubble sort,
effecting a change when two edges cross. After updating edges, the active edge table is traversed in X order to emit
only the visible spans, maintaining a Z-sorted active Span table, inserting and deleting the surfaces when edges are
crossed.
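A much simplified sketch of this bucket and active-edge-table bookkeeping is shown below (illustrative data
structures only; depth sorting and the actual span output are omitted):

#include <algorithm>
#include <vector>

// Simplified active edge table: edges are bucketed by the scanline where they start,
// then stepped down the image, keeping only the edges that cross the current line.
struct Edge {
    int yStart, yEnd;      // first and last scanline the edge touches
    float x, dxPerScan;    // current X intersection and its per-scanline increment
};

void scanline_pass(std::vector<std::vector<Edge>>& buckets, int height) {
    std::vector<Edge> active;
    for (int y = 0; y < height; ++y) {
        // Drop edges that no longer reach this scanline.
        active.erase(std::remove_if(active.begin(), active.end(),
                     [y](const Edge& e) { return e.yEnd <= y; }), active.end());
        // Add edges starting on this scanline from its Y-bucket.
        for (const Edge& e : buckets[y]) active.push_back(e);
        // Keep the table sorted by X (it stays nearly sorted from line to line).
        std::sort(active.begin(), active.end(),
                  [](const Edge& a, const Edge& b) { return a.x < b.x; });
        // ... emit spans between pairs of intersections here ...
        for (Edge& e : active) e.x += e.dxPerScan;   // step X to the next scanline
    }
}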

Variants
A hybrid between this and Z-buffering does away with the active edge table sorting, and instead rasterizes one
scanline at a time into a Z-buffer, maintaining active polygon spans from one scanline to the next.
In another variant, an ID buffer is rasterized in an intermediate step, allowing deferred shading of the resulting
visible pixels.

History
The first publication of the scanline rendering technique was probably by Wylie, Romney, Evans, and Erdahl in
1967.[1]
Other early developments of the scanline rendering method were by Bouknight in 1969,[2] and Newell, Newell, and
Sancha in 1972.[3] Much of the early work on these methods was done in Ivan Sutherland's graphics group at the
University of Utah, and at the Evans & Sutherland company in Salt Lake City.

Use in realtime rendering


The early Evans & Sutherland ESIG line of image-generators (IGs) employed the technique in hardware 'on the fly',
to generate images one raster-line at a time without a framebuffer, saving the need for then costly memory. Later
variants used a hybrid approach.
The Nintendo DS is the latest hardware to render 3D scenes in this manner, with the option of caching the rasterized
images into VRAM.
The sprite hardware prevalent in 1980s games machines can be considered a simple 2D form of scanline rendering.
The technique was used in the first Quake engine for software rendering of environments (but moving objects were
Z-buffered over the top). Static scenery used BSP-derived sorting for priority. It proved better than Z-buffer/painter's
type algorithms at handling scenes of high depth complexity with costly pixel operations (i.e. perspective-correct
texture mapping without hardware assist). This use preceded the widespread adoption of Z-buffer-based GPUs now
common in PCs.
Sony experimented with software scanline renderers on a second Cell processor during the development of the
PlayStation 3, before settling on a conventional CPU/GPU arrangement.

Similar techniques
A similar principle is employed in tiled rendering (most famously the PowerVR 3D chip); that is, primitives are
sorted into screen space, then rendered in fast on-chip memory, one tile at a time. The Dreamcast provided a mode
for rasterizing one row of tiles at a time for direct raster scanout, saving the need for a complete framebuffer,
somewhat in the spirit of hardware scanline rendering.
Some software rasterizers use 'span buffering' (or 'coverage buffering'), in which a list of sorted, clipped spans are
stored in scanline buckets. Primitives would be successively added to this datastructure, before rasterizing only the
visible pixels in a final stage.


Comparison with Z-buffer algorithm


The main advantage of scanline rendering over Z-buffering is that the number of times each visible pixel is processed
is kept to the absolute minimum, which is always one time if no transparency effects are used, a benefit for the
case of high resolution or expensive shading computations.
In modern Z-buffer systems, similar benefits can be gained through rough front-to-back sorting (approaching the
'reverse painters algorithm'), early Z-reject (in conjunction with hierarchical Z), and less common deferred rendering
techniques possible on programmable GPUs.
Scanline techniques working on the raster have the drawback that overload is not handled gracefully.
The technique is not considered to scale well as the number of primitives increases. This is because of the size of the
intermediate data structures required during rendering, which can exceed the size of a Z-buffer for a complex scene.
Consequently, in contemporary interactive graphics applications, the Z-buffer has become ubiquitous. The Z-buffer
allows larger volumes of primitives to be traversed linearly, in parallel, in a manner friendly to modern hardware.
Transformed coordinates, attribute gradients, etc., need never leave the graphics chip; only the visible pixels and
depth values are stored.

References
[1] Wylie, C, Romney, G W, Evans, D C, and Erdahl, A, "Halftone Perspective Drawings by Computer," Proc. AFIPS FJCC 1967, Vol. 31, 49
[2] Bouknight W.J, "An Improved Procedure for Generation of Half-tone Computer Graphics Representation," UI, Coordinated Science
Laboratory, Sept 1969
[3] Newell, M E, Newell R. G, and Sancha, T.L, "A New Approach to the Shaded Picture Problem," Proc ACM National Conf. 1972

External links
University of Utah Graphics Group History (http://www.cs.utah.edu/about/history/)


Schlick's approximation
In 3D computer graphics, Schlick's approximation is a formula for approximating the contribution of the Fresnel
term in the specular reflection of light from a non-conducting interface (surface) between two media.
According to Schlick's model, the specular reflection coefficient R can be approximated by:

R(θ) = R0 + (1 − R0)(1 − cos θ)^5,   where   R0 = ((n1 − n2) / (n1 + n2))^2

Here θ is the angle between the viewing direction and the half-angle direction, which is halfway between the
incident light direction and the viewing direction, hence cos θ = (H · V). n1 and n2 are the indices of
refraction of the two media at the interface, and R0 is the reflection coefficient for light incoming parallel to the
normal (i.e., the value of the Fresnel term when θ = 0, or minimal reflection). In computer graphics, one of the
interfaces is usually air, meaning that n1 can very well be approximated as 1.
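A minimal implementation of the approximation might look like the following sketch (the function and parameter
names are illustrative; in the common case of an air interface, n1 is simply passed as 1):

#include <cmath>

// Reflectance at normal incidence for refractive indices n1 and n2.
double schlick_r0(double n1, double n2) {
    double r = (n1 - n2) / (n1 + n2);
    return r * r;
}

// Approximate Fresnel reflectance for a given cos(theta), where theta is the
// angle between the viewing direction and the half-angle vector.
double schlick(double cos_theta, double n1, double n2) {
    double r0 = schlick_r0(n1, n2);
    double m = 1.0 - cos_theta;
    return r0 + (1.0 - r0) * m * m * m * m * m;   // (1 - cos theta)^5
}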

References
Schlick, C. (1994). "An Inexpensive BRDF Model for Physically-based Rendering". Computer Graphics Forum
13 (3): 233. doi:10.1111/1467-8659.1330233 [1].

References
[1] http:/ / dx. doi. org/ 10. 1111%2F1467-8659. 1330233

Screen Space Ambient Occlusion


Screen space ambient occlusion (SSAO) is a rendering technique for efficiently approximating the computer
graphics ambient occlusion effect in real time. It was developed by Vladimir Kajalin while working at Crytek and
was used for the first time in a video game in the 2007 Windows game Crysis, developed by Crytek and published
by Electronic Arts.

Implementation
The algorithm is implemented as a pixel shader,
analyzing the scene depth buffer which is stored in a
texture. For every pixel on the screen, the pixel shader
samples the depth values around the current pixel and
tries to compute the amount of occlusion from each of
the sampled points. In its simplest implementation, the
occlusion factor depends only on the depth difference
between sampled point and current point.
Without additional smart solutions, such a brute force
method would require about 200 texture reads per pixel
for good visual quality. This is not acceptable for
real-time rendering on current graphics hardware.

SSAO component of a typical game scene

In order to get high quality results with far fewer reads, sampling is performed using a randomly rotated kernel. The
kernel orientation is repeated every N screen pixels in order to have only high-frequency noise in the final picture. In
the end this high-frequency noise is greatly removed by an NxN post-process blurring step taking into account depth
discontinuities (using methods such as comparing adjacent normals and depths). Such a solution allows a reduction
in the number of depth samples per pixel to about 16 or fewer while maintaining a high quality result, and allows the
use of SSAO in soft real-time applications like computer games.
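As a rough, CPU-side illustration of the simplest depth-difference variant described above (this is not the Crytek
implementation, which runs as a pixel shader; the sample kernel, the falloff constant and all names here are invented
for the sketch):

#include <algorithm>
#include <cmath>
#include <vector>

// Minimal, simplified SSAO sketch: for each pixel, compare the depth of a few
// neighbouring samples against the centre depth and accumulate an occlusion factor.
// 'depth' is a linear depth buffer of size width*height.
std::vector<float> ssao(const std::vector<float>& depth, int width, int height) {
    const int kernel[8][2] = {{-2,0},{2,0},{0,-2},{0,2},{-1,-1},{1,1},{-1,1},{1,-1}};
    std::vector<float> ao(depth.size(), 0.0f);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float centre = depth[y * width + x];
            float occlusion = 0.0f;
            for (const auto& k : kernel) {
                int sx = std::clamp(x + k[0], 0, width - 1);
                int sy = std::clamp(y + k[1], 0, height - 1);
                float diff = centre - depth[sy * width + sx];    // >0: the sample is closer
                if (diff > 0.0f)
                    occlusion += std::min(diff * 50.0f, 1.0f);   // arbitrary falloff constant
            }
            ao[y * width + x] = 1.0f - occlusion / 8.0f;         // 1 = unoccluded
        }
    }
    return ao;   // a real renderer would then blur this and apply it to the ambient term
}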
Compared to other ambient occlusion solutions, SSAO has the following advantages:

Independent from scene complexity.


No data pre-processing needed, no loading time and no memory allocations in system memory.
Works with dynamic scenes.
Works in the same consistent way for every pixel on the screen.
No CPU usage: it can be executed completely on the GPU.
May be easily integrated into any modern graphics pipeline.

Of course it has its disadvantages as well:


Rather local and in many cases view-dependent, as it is dependent on adjacent texel depths which may be
generated by any geometry whatsoever.
Hard to correctly smooth/blur out the noise without interfering with depth discontinuities, such as object edges
(the occlusion should not "bleed" onto objects).

In video games
Title

Year

Platform(s)

7554

2011 Microsoft Windows

Alan Wake

2010 Microsoft Windows, Xbox 360

Amnesia: The Dark


[1]
Descent

2010 Microsoft Windows, OS X, Linux

Notes

Amnesia: A Machine for 2013 Microsoft Windows, OS X, Linux


Pigs
ArmA 2
ARMA 2: Operation
Arrowhead
Arcania: Gothic 4

[2]

2009 Microsoft Windows


2010

2010 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only.

Assassin's Creed:
Brotherhood

2010 Microsoft Windows, PlayStation 3, Xbox 360

Batman: Arkham
Asylum

2009 Microsoft Windows, PlayStation 3, Xbox 360, OS Windows and Xbox 360 versions only.
X

Batman: Arkham City

2011 Microsoft Windows, PlayStation 3, Xbox 360,


Wii U, OS X

Windows version only. Uses horizon-based ambient occlusion


(HBAO), an improved form of SSAO.

Batman: Arkham
Origins

2013 Microsoft Windows, PlayStation 3, Xbox 360,


Wii U

Windows version only. Uses horizon-based ambient occlusion


(HBAO+), an improved form of SSAO.

Battlefield 3

[3]

2011 Microsoft Windows, PlayStation 3, Xbox 360

Battlefield: Bad
[4]
Company 2

2010 Microsoft Windows, PlayStation 3, Xbox 360

BattleForge

2009 Microsoft Windows

Binary Domain

2012 Microsoft Windows, PlayStation 3, Xbox 360

Bionic Commando

2009 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only. Uses horizon-based ambient occlusion


(HBAO), an improved form of SSAO.

Windows and Xbox 360 versions only.


Borderlands

2009 Microsoft Windows, PlayStation 3, Xbox 360

Windows and Xbox 360 versions only.

Borderlands 2

2012 Microsoft Windows, PlayStation 3, Xbox 360

Windows and Xbox 360 versions only.

Burnout Paradise: The


Ultimate Box

2009 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only.

Call of Duty: Modern


[5]
Warfare 3

2011 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only.

Chivalry: Medieval
Warfare

2012 Microsoft Windows

City of Heroes

2010 Microsoft Windows

Costume Quest

2010 Microsoft Windows, PlayStation 3, Xbox 360

Crysis

2007 Microsoft Windows, PlayStation 3, Xbox 360


[6]

Crysis 2

2011 Microsoft Windows, PlayStation 3, Xbox 360

Crysis 3

2013 Microsoft Windows, PlayStation 3, Xbox 360

Crysis Warhead

2008 Microsoft Windows

Darksiders II

2012 Microsoft Windows, PlayStation 3, Xbox 360

[7]

Dead Island

2011 Microsoft Windows, PlayStation 3, Xbox 360

Dead Space 3

2013 Microsoft Windows, PlayStation 3, Xbox 360

Dead to Rights:
Retribution

2010 PlayStation 3, Xbox 360

Deus Ex: Human


[8]
Revolution

2011 Microsoft Windows, PlayStation 3, Xbox 360

[9]

2011 Microsoft Windows, PlayStation 3, Xbox 360

Dragon Age II

Windows version only.

Windows version only.

Windows version only.

Empire: Total War

2009 Microsoft Windows

Eve Online

2003 Microsoft Windows, OS X, Linux

Nvidia GPUs only. SSAO support added in 2011 update.

F.E.A.R. 2: Project
[10]
Origin

2009 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only.

F.E.A.R. 3

2011 Microsoft Windows, PlayStation 3, Xbox 360

Windows and Xbox 360 versions only.

Far Cry 3

2012 Microsoft Windows, PlayStation 3, Xbox 360

Windows version has additional support for horizon-based


ambient occlusion (HBAO) and high-definition ambient
occlusion (HDAO).

Fight Night
[11]
Champion

2011 PlayStation 3, Xbox 360

Gears of War 2

2008 Xbox 360

Halo: Reach

2010 Xbox 360

Hitman: Absolution

2012 Microsoft Windows, PlayStation 3, Xbox 360

IL-2 Sturmovik: Cliffs of 2011 Microsoft Windows


[12]
Dover
[13]

Infamous 2

2011 PlayStation 3

Infestation: Survivor
Stories

2012 Microsoft Windows


James Bond 007: Blood 2010 Microsoft Windows, PlayStation 3, Xbox 360
[14]
Stone
[15]

Just Cause 2

2010 Microsoft Windows, PlayStation 3, Xbox 360

L.A. Noire

2011 Microsoft Windows, PlayStation 3, Xbox 360

[16][17]

Mafia II

2010 Microsoft Windows, PlayStation 3, Xbox 360

Max Payne 3

2012 Microsoft Windows, PlayStation 3, Xbox 360

[18]

Metro 2033

2010 Microsoft Windows, Xbox 360

Metro: Last Light

2013 Microsoft Windows, PlayStation 3, Xbox 360

Napoleon: Total
[19]
War

2010 Microsoft Windows

NecroVision

2009 Microsoft Windows

Overgrowth

TBA Microsoft Windows, OS X, Linux

Quake Live

2009 Microsoft Windows, OS X, Linux

Red Faction:
[20]
Guerrilla

2009 Microsoft Windows, PlayStation 3, Xbox 360

Risen

2009 Microsoft Windows, Xbox 360

S.T.A.L.K.E.R.: Call of
[21]
Pripyat

2009 Microsoft Windows

S.T.A.L.K.E.R.: Clear
Sky

2008 Microsoft Windows

[22]

Windows version only.

Windows and Xbox 360 versions only.

Windows version only.

Windows version only.

Shattered Horizon

2009 Microsoft Windows

Saints Row: The


[23]
Third

2011 Microsoft Windows, PlayStation 3, Xbox 360

Slender: The Arrival

2013 Microsoft Windows

Sleeping Dogs

2012 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only.

StarCraft II: Wings of


Liberty

2010 Microsoft Windows, OS X

After Patch 1.2.0 released 1/12/2011.

Star Trek Online

2010 Microsoft Windows

The Elder Scrolls V:


Skyrim

2011 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only. Through modification, or enabling AO


through Nvidia driver control panel.

[24]
The Cave

2013 Microsoft Windows, OS X, Linux, PlayStation 3,


Xbox 360, Wii U, Ouya

Windows, OS X and Linux versions only.

The Secret World

2012 Microsoft Windows

The Settlers 7: Paths to


a Kingdom

2010 Microsoft Windows

The Witcher 2:
[25]
Assassins of Kings

2011 Microsoft Windows, Xbox 360

Windows version only.

Tom Clancy's Splinter


Cell: Blacklist

2013 Microsoft Windows, PlayStation 3, Xbox 360,


Wii U

Windows version only. Uses horizon-based ambient occlusion


(HBAO+), an improved form of SSAO.

Tomb Raider

2013 Microsoft Windows, PlayStation 3, Xbox 360

Windows version only.


Toy Story 3: The Video


Game

2010 Microsoft Windows, OS X, PlayStation 3,


PlayStation 2, Wii, Xbox 360, PlayStation
Portable, Nintendo DS

PlayStation 3 and Xbox 360 versions only.

Transformers: War for


[26]
Cybertron

2010 PlayStation 3, Xbox 360

Uncharted 2: Among
Thieves

2009 PlayStation 3

Vox

2012 Microsoft Windows

War Thunder

2012 Microsoft Windows

Since patch 1.31.

World of Tanks

2010 Microsoft Windows

Since patch 0.8.0.

World of Warcraft

2004 Microsoft Windows, OS X

Since Mists of Pandaria expansion prepatch 5.0.4.

References
[1] http:/ / geekmontage. com/ texts/ game-fixes-amnesia-the-dark-descent-crashing-lag-black-screen-freezing-sound-fixes/
[2] http:/ / www. bit-tech. net/ gaming/ pc/ 2010/ 10/ 25/ arcania-gothic-4-review/ 1
[3] http:/ / publications. dice. se/ attachments/ BF3_NFS_WhiteBarreBrisebois_Siggraph2011. pdf
[4] http:/ / www. guru3d. com/ news/ battlefield-bad-company-2-directx-11-details-/
[5] http:/ / community. callofduty. com/ thread/ 4682
[6] http:/ / crytek. com/ sites/ default/ files/ Crysis%202%20Key%20Rendering%20Features. pdf
[7] http:/ / www. eurogamer. net/ articles/ digitalfoundry-dead-island-face-off
[8] http:/ / www. eurogamer. net/ articles/ deus-ex-human-revolution-face-off
[9] http:/ / www. techspot. com/ review/ 374-dragon-age-2-performance-test/
[10] http:/ / www. pcgameshardware. com/ aid,675766/ Fear-2-Project-Origin-GPU-and-CPU-benchmarks-plus-graphics-settings-compared/
Reviews/
[11] http:/ / imagequalitymatters. blogspot. com/ 2011/ 03/ tech-analysis-fight-night-champion-360_12. html
[12] http:/ / store. steampowered. com/ news/ 5321/ ?l=russian
[13] http:/ / imagequalitymatters. blogspot. com/ 2010/ 07/ tech-analsis-infamous-2-early-screens. html
[14] http:/ / www. lensoftruth. com/ head2head-blood-stone-007-hd-screenshot-comparison/
[15] http:/ / ve3d. ign. com/ articles/ features/ 53469/ Just-Cause-2-PC-Interview
[16] http:/ / imagequalitymatters. blogspot. com/ 2010/ 08/ tech-analysis-mafia-ii-demo-ps3-vs-360. html
[17] http:/ / www. eurogamer. net/ articles/ digitalfoundry-mafia-ii-demo-showdown
[18] http://www.eurogamer.net/articles/metro-2033-4a-engine-impresses-blog-entry
[19] http://www.pcgameshardware.com/aid,705532/Napoleon-Total-War-CPU-benchmarks-and-tuning-tips/Practice/
[20] http://www.eurogamer.net/articles/digitalfoundry-red-faction-guerilla-pc-tech-comparison?page=2
[21] http://www.pcgameshardware.com/aid,699424/Stalker-Call-of-Pripyat-DirectX-11-vs-DirectX-10/Practice/
[22] http://mgnews.ru/read-news/otvety-glavnogo-dizajnera-shattered-horizon-na-vashi-voprosy
[23] http://www.eurogamer.net/articles/digitalfoundry-face-off-saints-row-the-third
[24] http://www.doublefine.com/forums/viewthread/8547/#261785
[25] http://www.pcgamer.com/2011/05/25/the-witcher-2-tweaks-guide/
[26] http://www.eurogamer.net/articles/digitalfoundry-xbox360-vs-ps3-round-27-face-off?page=2

External links
Finding Next Gen CryEngine 2 (http://delivery.acm.org/10.1145/1290000/1281671/p97-mittring.
pdf?key1=1281671&key2=9942678811&coll=ACM&dl=ACM&CFID=15151515&CFTOKEN=6184618)
Video showing SSAO in action (http://www.youtube.com/watch?v=ifdAILHTcZk)
Image Enhancement by Unsharp Masking the Depth Buffer (http://graphics.uni-konstanz.de/publikationen/
2006/unsharp_masking/Luft et al.-- Image Enhancement by Unsharp Masking the Depth Buffer.pdf)
Hardware Accelerated Ambient Occlusion Techniques on GPUs (http://perumaal.googlepages.com/)
Overview on Screen Space Ambient Occlusion Techniques (http://meshula.net/wordpress/?p=145) (as of
March 1, 2012)


Real-Time Depth Buffer Based Ambient Occlusion (http://developer.download.nvidia.com/presentations/


2008/GDC/GDC08_Ambient_Occlusion.pdf)
Source code of SSAO shader used in Crysis (http://www.pastebin.ca/953523)
Approximating Dynamic Global Illumination in Image Space (http://www.mpi-inf.mpg.de/~ritschel/Papers/
SSDO.pdf)
Accumulative Screen Space Ambient Occlusion (http://www.gamedev.net/community/forums/topic.
asp?topic_id=527170)
NVIDIA has integrated SSAO into drivers (http://www.nzone.com/object/nzone_ambientocclusion_home.
html)
Several methods of SSAO are described in ShaderX7 book (http://www.shaderx7.com/TOC.html)
SSAO Shader ( Russian ) (http://lwengine.net.ru/article/DirectX_10/ssao_directx10)
SSAO Tutorial, extension of the technique used in Crysis (http://www.john-chapman.net/content.php?id=8)

Self-shadowing
Self-Shadowing is a computer graphics lighting effect, used in 3D rendering applications such as computer
animation and video games. Self-shadowing allows non-static objects in the environment, such as game characters
and interactive objects (buckets, chairs, etc.), to cast shadows on themselves and each other. For example, without
self-shadowing, if a character puts his or her right arm over the left, the right arm will not cast a shadow over the left
arm. If that same character places a hand over a ball, that hand will cast a shadow over the ball.

Shadow mapping
Shadow mapping or projective shadowing is a process by which
shadows are added to 3D computer graphics. This concept was
introduced by Lance Williams in 1978, in a paper entitled "Casting
curved shadows on curved surfaces". Since then, it has been used both
in pre-rendered scenes and realtime scenes in many console and PC
games.
Shadows are created by testing whether a pixel is visible from the light
source, by comparing it to a z-buffer or depth image of the light
source's view, stored in the form of a texture.
Scene with shadow mapping

Principle of a shadow and a shadow map


If you looked out from a source of light, all of the objects you can see would appear in light. Anything behind those
objects, however, would be in shadow. This is the basic principle used to create a shadow map. The light's view is
rendered, storing the depth of every surface it sees (the shadow map). Next, the regular scene is rendered comparing
the depth of every point drawn (as if it were being seen by the light, rather than the eye) to this depth map.


This technique is less accurate than shadow volumes, but the shadow
map can be a faster alternative depending on how much fill time is
required for either technique in a particular application and therefore
may be more suitable to real time applications. In addition, shadow
maps do not require the use of an additional stencil buffer, and can be
modified to produce shadows with a soft edge. Unlike shadow
volumes, however, the accuracy of a shadow map is limited by its
resolution.
Scene with no shadows

Algorithm overview
Rendering a shadowed scene involves two major drawing steps. The first produces the shadow map itself, and the
second applies it to the scene. Depending on the implementation (and number of lights), this may require two or
more drawing passes.

Creating the shadow map


The first step renders the scene from the light's point of view. For a point light
source, the view should be a perspective projection as wide as its desired
angle of effect (it will be a sort of square spotlight). For directional light (e.g.,
that from the Sun), an orthographic projection should be used.
From this rendering, the depth buffer is extracted and saved. Because only the
depth information is relevant, it is common to avoid updating the color
buffers and disable all lighting and texture calculations for this rendering, in
order to save drawing time. This depth map is often stored as a texture in
graphics memory.

Scene rendered from the light view.

This depth map must be updated any time there are changes to either the light
or the objects in the scene, but can be reused in other situations, such as those
where only the viewing camera moves. (If there are multiple lights, a separate
depth map must be used for each light.)
In many implementations it is practical to render only a subset of the objects
in the scene to the shadow map in order to save some of the time it takes to
redraw the map. Also, a depth offset which shifts the objects away from the
light may be applied to the shadow map rendering in an attempt to resolve
stitching problems where the depth map value is close to the depth of a
surface being drawn (i.e., the shadow casting surface) in the next step.
Alternatively, culling front faces and only rendering the back of objects to the shadow map is sometimes used for a
similar result.

Scene from the light view, depth map.


Shading the scene


The second step is to draw the scene from the usual camera viewpoint, applying the shadow map. This process has
three major components, the first is to find the coordinates of the object as seen from the light, the second is the test
which compares that coordinate against the depth map, and finally, once accomplished, the object must be drawn
either in shadow or in light.
Light space coordinates
In order to test a point against the depth map, its position in the scene
coordinates must be transformed into the equivalent position as seen by the
light. This is accomplished by a matrix multiplication. The location of the
object on the screen is determined by the usual coordinate transformation, but
a second set of coordinates must be generated to locate the object in light
space.
Visualization of the depth map projected onto the scene

The matrix used to transform the world coordinates into the light's viewing
coordinates is the same as the one used to render the shadow map in the first
step (under OpenGL this is the product of the modelview and projection
matrices). This will produce a set of homogeneous coordinates that need a
perspective division (see 3D projection) to become normalized device coordinates, in which each component (x, y,
or z) falls between −1 and 1 (if it is visible from the light view). Many implementations (such as OpenGL and
Direct3D) require an additional scale and bias matrix multiplication to map those −1 to 1 values to 0 to 1, which are
more usual coordinates for depth map (texture map) lookup. This scaling can be done before the perspective
division, and is easily folded into the previous transformation calculation by multiplying that matrix with the
following:
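The scale-and-bias matrix commonly used for this (mapping each of x, y and z from the −1 to 1 range into the
0 to 1 range) is:

0.5   0     0     0.5
0     0.5   0     0.5
0     0     0.5   0.5
0     0     0     1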

If done with a shader, or other graphics hardware extension, this transformation is usually applied at the vertex level,
and the generated value is interpolated between other vertices, and passed to the fragment level.
Depth map test
Once the light-space coordinates are found, the x and y values usually
correspond to a location in the depth map texture, and the z value corresponds
to its associated depth, which can now be tested against the depth map.
If the z value is greater than the value stored in the depth map at the
appropriate (x,y) location, the object is considered to be behind an occluding
object, and should be marked as a failure, to be drawn in shadow by the
drawing process. Otherwise it should be drawn lit.
If the (x,y) location falls outside the depth map, the programmer must either
decide that the surface should be lit or shadowed by default (usually lit).

Depth map test failures.

In a shader implementation, this test would be done at the fragment level. Also, care needs to be taken when
selecting the type of texture map storage to be used by the hardware: if interpolation cannot be done, the shadow will
appear to have a sharp jagged edge (an effect that can be reduced with greater shadow map resolution).


It is possible to modify the depth map test to produce shadows with a soft edge by using a range of values (based on
the proximity to the edge of the shadow) rather than simply pass or fail.
The shadow mapping technique can also be modified to draw a texture onto the lit regions, simulating the effect of a
projector. The picture above, captioned "visualization of the depth map projected onto the scene" is an example of
such a process.
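A sketch of the per-point test, assuming the light-space coordinates have already been mapped into the 0 to 1 range
and the shadow map is stored as a plain depth array (all names, and the bias value, are illustrative):

#include <vector>

// Minimal depth-map test for a single point already transformed into light space.
// 'shadowMap' holds the depths rendered from the light; coordinates are in [0,1].
bool in_shadow(const std::vector<float>& shadowMap, int mapSize,
               float lightX, float lightY, float lightDepth, float bias = 0.002f) {
    if (lightX < 0.0f || lightX > 1.0f || lightY < 0.0f || lightY > 1.0f)
        return false;                                   // outside the map: treat as lit
    int tx = static_cast<int>(lightX * (mapSize - 1));  // nearest-texel lookup
    int ty = static_cast<int>(lightY * (mapSize - 1));
    float stored = shadowMap[ty * mapSize + tx];
    return lightDepth - bias > stored;                  // behind an occluder => shadowed
}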
Drawing the scene
Drawing the scene with shadows can be done in several different ways. If
programmable shaders are available, the depth map test may be performed by
a fragment shader which simply draws the object in shadow or lighted
depending on the result, drawing the scene in a single pass (after an initial
earlier pass to generate the shadow map).
If shaders are not available, performing the depth map test must usually be
implemented by some hardware extension (such as GL_ARB_shadow [1]),
which usually does not allow a choice between two lighting models (lit and
shadowed), and necessitates more rendering passes:

Final scene, rendered with ambient shadows.

1. Render the entire scene in shadow. For the most common lighting models (see Phong reflection model) this
should technically be done using only the ambient component of the light, but this is usually adjusted to also
include a dim diffuse light to prevent curved surfaces from appearing flat in shadow.
2. Enable the depth map test, and render the scene lit. Areas where the depth map test fails will not be overwritten,
and remain shadowed.
3. An additional pass may be used for each additional light, using additive blending to combine their effect with the
lights already drawn. (Each of these passes requires an additional previous pass to generate the associated shadow
map.)
The example pictures in this article used the OpenGL extension GL_ARB_shadow_ambient [2] to accomplish the
shadow map process in two passes.

Shadow map real-time implementations


One of the key disadvantages of real time shadow mapping is that the size and depth of the shadow map determines
the quality of the final shadows. This is usually visible as aliasing or shadow continuity glitches. A simple way to
overcome this limitation is to increase the shadow map size, but due to memory, computational or hardware
constraints, it is not always possible. Commonly used techniques for real-time shadow mapping have been developed
to circumvent this limitation. These include Cascaded Shadow Maps, Trapezoidal Shadow Maps, Light Space
Perspective Shadow maps, or Parallel-Split Shadow maps.
Also notable is that generated shadows, even if aliasing free, have hard edges, which is not always desirable. In order
to emulate real world soft shadows, several solutions have been developed, either by doing several lookups on the
shadow map, generating geometry meant to emulate the soft edge or creating non standard depth shadow maps.
Notable examples of these are Percentage Closer Filtering, Smoothies, and Variance Shadow maps.


Shadow mapping techniques


Simple
SSM "Simple"

Splitting
PSSM "Parallel Split" http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html [3]
CSM "Cascaded" http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/
doc/cascaded_shadow_maps.pdf [4]

Warping
LiSPSM "Light Space Perspective" http://www.cg.tuwien.ac.at/~scherzer/files/papers/LispSM_survey.pdf
[5]

TSM "Trapezoid" http://www.comp.nus.edu.sg/~tants/tsm.html [6]


PSM "Perspective" http://www-sop.inria.fr/reves/Marc.Stamminger/psm/ [7]
CSSM "Camera Space" http://bib.irb.hr/datoteka/570987.12_CSSM.pdf [8]

Smoothing
PCF "Percentage Closer Filtering" http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html [9]

Filtering
ESM "Exponential" http://www.thomasannen.com/pub/gi2008esm.pdf [10]
CSM "Convolution" http://research.edm.uhasselt.be/~tmertens/slides/csm.ppt [11]
VSM "Variance" http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.2569&rep=rep1&
type=pdf [12]
SAVSM "Summed Area Variance" http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html [13]
SMSR "Shadow Map Silhouette Revectorization" http://bondarev.nl/?p=326 [14]

Soft Shadows
PCSS "Percentage Closer" http://developer.download.nvidia.com/shaderlibrary/docs/shadow_PCSS.pdf [15]
SSSS "Screen space soft shadows" http://www.crcnetbase.com/doi/abs/10.1201/b10648-36 [16]
FIV "Fullsphere Irradiance Vector" http://getlab.org/publications/FIV/ [17]

Assorted

ASM "Adaptive" http://www.cs.cornell.edu/~kb/publications/ASM.pdf [18]


AVSM "Adaptive Volumetric" http://visual-computing.intel-research.net/art/publications/avsm/ [19]
CSSM "Camera Space" http://free-zg.t-com.hr/cssm/ [20]
DASM "Deep Adaptive"
DPSM "Dual Paraboloid" http://sites.google.com/site/osmanbrian2/dpsm.pdf [21]
DSM "Deep" http://graphics.pixar.com/library/DeepShadows/paper.pdf [22]
FSM "Forward" http://www.cs.unc.edu/~zhangh/technotes/shadow/shadow.ps [23]
LPSM "Logarithmic" http://gamma.cs.unc.edu/LOGSM/ [24]

MDSM "Multiple Depth" http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.3376&rep=rep1&


type=pdf [25]
RTW "Rectilinear" http://www.cspaul.com/wiki/doku.php?id=publications:rosen.2012.i3d [26]


RMSM "Resolution Matched" http://www.idav.ucdavis.edu/func/return_pdf?pub_id=919 [27]


SDSM "Sample Distribution" http://visual-computing.intel-research.net/art/publications/sdsm/ [28]
SPPSM "Separating Plane Perspective" http://jgt.akpeters.com/papers/Mikkelsen07/sep_math.pdf [29]
SSSM "Shadow Silhouette" http://graphics.stanford.edu/papers/silmap/silmap.pdf [30]

Further reading
Smooth Penumbra Transitions with Shadow Maps [31] Willem H. de Boer
Forward shadow mapping [32] does the shadow test in eye-space rather than light-space to keep texture access
more sequential.
Shadow mapping techniques [33] An overview of different shadow mapping techniques

References
[1] http://www.opengl.org/registry/specs/ARB/shadow.txt
[2] http://www.opengl.org/registry/specs/ARB/shadow_ambient.txt
[3] http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html
[4] http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf
[5] http://www.cg.tuwien.ac.at/~scherzer/files/papers/LispSM_survey.pdf

[6] http:/ / www. comp. nus. edu. sg/ ~tants/ tsm. html
[7] http:/ / www-sop. inria. fr/ reves/ Marc. Stamminger/ psm/
[8] http:/ / bib. irb. hr/ datoteka/ 570987. 12_CSSM. pdf
[9] http:/ / http. developer. nvidia. com/ GPUGems/ gpugems_ch11. html
[10] http:/ / www. thomasannen. com/ pub/ gi2008esm. pdf
[11] http:/ / research. edm. uhasselt. be/ ~tmertens/ slides/ csm. ppt
[12] http:/ / citeseerx. ist. psu. edu/ viewdoc/ download?doi=10. 1. 1. 104. 2569& rep=rep1& type=pdf
[13] http:/ / http. developer. nvidia. com/ GPUGems3/ gpugems3_ch08. html
[14] http:/ / bondarev. nl/ ?p=326
[15] http:/ / developer. download. nvidia. com/ shaderlibrary/ docs/ shadow_PCSS. pdf
[16] http:/ / www. crcnetbase. com/ doi/ abs/ 10. 1201/ b10648-36
[17] http:/ / getlab. org/ publications/ FIV/
[18] http:/ / www. cs. cornell. edu/ ~kb/ publications/ ASM. pdf
[19] http:/ / visual-computing. intel-research. net/ art/ publications/ avsm/
[20] http:/ / free-zg. t-com. hr/ cssm/
[21] http:/ / sites. google. com/ site/ osmanbrian2/ dpsm. pdf
[22] http:/ / graphics. pixar. com/ library/ DeepShadows/ paper. pdf
[23] http:/ / www. cs. unc. edu/ ~zhangh/ technotes/ shadow/ shadow. ps
[24] http:/ / gamma. cs. unc. edu/ LOGSM/
[25] http:/ / citeseerx. ist. psu. edu/ viewdoc/ download?doi=10. 1. 1. 59. 3376& rep=rep1& type=pdf
[26] http:/ / www. cspaul. com/ wiki/ doku. php?id=publications:rosen. 2012. i3d
[27] http:/ / www. idav. ucdavis. edu/ func/ return_pdf?pub_id=919
[28] http:/ / visual-computing. intel-research. net/ art/ publications/ sdsm/
[29] http:/ / jgt. akpeters. com/ papers/ Mikkelsen07/ sep_math. pdf
[30] http:/ / graphics. stanford. edu/ papers/ silmap/ silmap. pdf
[31] http:/ / www. whdeboer. com/ papers/ smooth_penumbra_trans. pdf
[32] http:/ / www. cs. unc. edu/ ~zhangh/ shadow. html
[33] http:/ / www. gamerendering. com/ category/ shadows/ shadow-mapping/


External links
Hardware Shadow Mapping (http://developer.nvidia.com/attach/8456), nVidia
Shadow Mapping with Today's OpenGL Hardware (http://developer.nvidia.com/attach/6769), nVidia
Riemer's step-by-step tutorial implementing Shadow Mapping with HLSL and DirectX (http://www.riemers.
net/Tutorials/DirectX/Csharp3/index.php)
NVIDIA Real-time Shadow Algorithms and Techniques (http://developer.nvidia.com/object/doc_shadows.
html)
Shadow Mapping implementation using Java and OpenGL (http://www.embege.com/shadowmapping)

Shadow volume
Shadow volume is a technique used in 3D computer graphics to add shadows to a rendered scene. They were first
proposed by Frank Crow in 1977[1] as the geometry describing the 3D shape of the region occluded from a light
source. A shadow volume divides the virtual world in two: areas that are in shadow and areas that are not.
The stencil buffer implementation of shadow volumes is generally considered among the most practical general
purpose real-time shadowing techniques for use on modern 3D graphics hardware. It has been popularised by the
video game Doom 3, and a particular variation of the technique used in this game has become known as Carmack's
Reverse (see depth fail below).
Shadow volumes have become a popular tool for real-time shadowing, alongside the more venerable shadow
mapping. The main advantage of shadow volumes is that they are accurate to the pixel (though many
implementations have a minor self-shadowing problem along the silhouette edge, see construction below), whereas
the accuracy of a shadow map depends on the texture memory allotted to it as well as the angle at which the shadows
are cast (at some angles, the accuracy of a shadow map unavoidably suffers). However, the shadow volume
technique requires the creation of shadow geometry, which can be CPU intensive (depending on the
implementation). The advantage of shadow mapping is that it is often faster, because shadow volume polygons are
often very large in terms of screen space and require a lot of fill time (especially for convex objects), whereas
shadow maps do not have this limitation.

Construction
In order to construct a shadow volume, project a ray from the light source through each vertex in the shadow casting
object to some point (generally at infinity). These projections will together form a volume; any point inside that
volume is in shadow, everything outside is lit by the light.
For a polygonal model, the volume is usually formed by classifying each face in the model as either facing toward
the light source or facing away from the light source. The set of all edges that connect a toward-face to an away-face
form the silhouette with respect to the light source. The edges forming the silhouette are extruded away from the
light to construct the faces of the shadow volume. This volume must extend over the range of the entire visible
scene; often the dimensions of the shadow volume are extended to infinity to accomplish this (see optimization
below.) To form a closed volume, the front and back end of this extrusion must be covered. These coverings are
called "caps". Depending on the method used for the shadow volume, the front end may be covered by the object
itself, and the rear end may sometimes be omitted (see depth pass below).
There is also a problem with the shadow where the faces along the silhouette edge are relatively shallow. In this
case, the shadow an object casts on itself will be sharp, revealing its polygonal facets, whereas the usual lighting
model will have a gradual change in the lighting along the facet. This leaves a rough shadow artifact near the
silhouette edge which is difficult to correct. Increasing the polygonal density will minimize the problem, but not
eliminate it. If the front of the shadow volume is capped, the entire shadow volume may be offset slightly away from
the light to remove any shadow self-intersections within the offset distance of the silhouette edge (this solution is
more commonly used in shadow mapping).
The basic steps for forming a shadow volume are:
1. Find all silhouette edges (edges which separate front-facing faces from back-facing faces)
2. Extend all silhouette edges in the direction away from the light-source
3. Add a front-cap and/or back-cap to each surface to form a closed volume (may not be necessary, depending on
the implementation used)
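A sketch of the silhouette-finding step for a triangle mesh follows (illustrative structures only; the adjacency list and
the subsequent extrusion and capping are assumed to be handled elsewhere):

#include <cmath>
#include <utility>
#include <vector>

// Classify each triangle as facing toward or away from a point light, then mark
// the edges shared by one toward-facing and one away-facing triangle: the silhouette.
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

static Vec3 sub(Vec3 p, Vec3 q) { return {p.x - q.x, p.y - q.y, p.z - q.z}; }
static Vec3 cross(Vec3 p, Vec3 q) {
    return {p.y * q.z - p.z * q.y, p.z * q.x - p.x * q.z, p.x * q.y - p.y * q.x};
}
static float dot(Vec3 p, Vec3 q) { return p.x * q.x + p.y * q.y + p.z * q.z; }

static bool faces_light(const Triangle& t, Vec3 light) {
    Vec3 n = cross(sub(t.b, t.a), sub(t.c, t.a));   // face normal
    return dot(n, sub(light, t.a)) > 0.0f;          // is the light on the front side?
}

// For each pair of adjacent triangles (i, j), report whether their shared edge is a
// silhouette edge with respect to the light. 'adjacency' lists such triangle pairs.
std::vector<bool> silhouette_edges(const std::vector<Triangle>& tris,
                                   const std::vector<std::pair<int, int>>& adjacency,
                                   Vec3 light) {
    std::vector<bool> onSilhouette;
    for (const auto& pair : adjacency)
        onSilhouette.push_back(faces_light(tris[pair.first], light) !=
                               faces_light(tris[pair.second], light));
    return onSilhouette;
}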

Illustration of shadow volumes. The image above at left shows a scene shadowed using shadow volumes. At right, the shadow volumes are
shown in wireframe. Note how the shadows form a large conical area pointing away from the light source (the bright white point).

Stencil buffer implementations


After Crow, Tim Heidmann showed in 1991 how to use the stencil buffer to render shadows with shadow volumes
quickly enough for use in real time applications. There are three common variations to this technique, depth pass,
depth fail, and exclusive-or, but all of them use the same process:
1. Render the scene as if it were completely in shadow.
2. For each light source:
1. Using the depth information from that scene, construct a mask in the stencil buffer that has holes only where
the visible surface is not in shadow.
2. Render the scene again as if it were completely lit, using the stencil buffer to mask the shadowed areas. Use
additive blending to add this render to the scene.
The difference between these three methods occurs in the generation of the mask in the second step. Some involve
two passes, and some only one; some require less precision in the stencil buffer.
Shadow volumes tend to cover large portions of the visible scene, and as a result consume valuable rasterization time
(fill time) on 3D graphics hardware. This problem is compounded by the complexity of the shadow casting objects,
as each object can cast its own shadow volume of any potential size onscreen. See optimization below for a
discussion of techniques used to combat the fill time problem.


Depth pass
Heidmann proposed that if the front surfaces and back surfaces of the shadows were rendered in separate passes, the
number of front faces and back faces in front of an object can be counted using the stencil buffer. If an object's
surface is in shadow, there will be more front facing shadow surfaces between it and the eye than back facing
shadow surfaces. If their numbers are equal, however, the surface of the object is not in shadow. The generation of
the stencil mask works as follows:
1. Disable writes to the depth and color buffers.
2. Use back-face culling.
3. Set the stencil operation to increment on depth pass (only count shadows in front of the object).
4. Render the shadow volumes (because of culling, only their front faces are rendered).
5. Use front-face culling.
6. Set the stencil operation to decrement on depth pass.
7. Render the shadow volumes (only their back faces are rendered).

After this is accomplished, all lit surfaces will correspond to a 0 in the stencil buffer, where the numbers of front and
back surfaces of all shadow volumes between the eye and that surface are equal.
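In OpenGL terms, the stencil-mask passes above might be configured roughly as in the following sketch (the calls
shown are standard OpenGL state settings, but the surrounding buffer setup and the shadow-volume drawing routine
are placeholders):

#include <GL/gl.h>

// Placeholder: issue the shadow-volume geometry (not shown here).
static void drawShadowVolumes() {}

// Stencil mask generation for the depth-pass method, following the steps above.
static void buildShadowStencilDepthPass() {
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // 1. no color writes
    glDepthMask(GL_FALSE);                                //    no depth writes
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    glCullFace(GL_BACK);                                  // 2. back-face culling
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);               // 3. increment on depth pass
    drawShadowVolumes();                                  // 4. front faces of the volumes

    glCullFace(GL_FRONT);                                 // 5. front-face culling
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);               // 6. decrement on depth pass
    drawShadowVolumes();                                  // 7. back faces of the volumes

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      // restore write masks
    glDepthMask(GL_TRUE);
}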
This approach has problems when the eye itself is inside a shadow volume (for example, when the light source
moves behind an object). From this point of view, the eye sees the back face of this shadow volume before anything
else, and this adds a 1 bias to the entire stencil buffer, effectively inverting the shadows. This can be remedied by
adding a "cap" surface to the front of the shadow volume facing the eye, such as at the front clipping plane. There is
another situation where the eye may be in the shadow of a volume cast by an object behind the camera, which also
has to be capped somehow to prevent a similar problem. In most common implementations, because properly
capping for depth-pass can be difficult to accomplish, the depth-fail method (see below) may be licensed for these
special situations. Alternatively one can give the stencil buffer a +1 bias for every shadow volume the camera is
inside, though doing the detection can be slow.
There is another potential problem if the stencil buffer does not have enough bits to accommodate the number of
shadows visible between the eye and the object surface, because it uses saturation arithmetic. (If they used arithmetic
overflow instead, the problem would be insignificant.)
Depth pass testing is also known as z-pass testing, as the depth buffer is often referred to as the z-buffer.

Depth fail
Around the year 2000, several people discovered that Heidmann's method can be made to work for all camera
positions by reversing the depth. Instead of counting the shadow surfaces in front of the object's surface, the surfaces
behind it can be counted just as easily, with the same end result. This solves the problem of the eye being in shadow,
since shadow volumes between the eye and the object are not counted, but introduces the condition that the rear end
of the shadow volume must be capped, or shadows will end up missing where the volume points backward to
infinity.
1. Disable writes to the depth and color buffers.
2. Use front-face culling.
3. Set the stencil operation to increment on depth fail (only count shadows behind the object).
4. Render the shadow volumes.
5. Use back-face culling.
6. Set the stencil operation to decrement on depth fail.
7. Render the shadow volumes.

The depth fail method has the same considerations regarding the stencil buffer's precision as the depth pass method.
Also, similar to depth pass, it is sometimes referred to as the z-fail method.
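
For comparison, a depth-fail setup differs only in which faces are culled and in using the depth-fail slot of the stencil operation (again a sketch, reusing the hypothetical draw_shadow_volumes() helper from above):

// Depth-fail (z-fail) stencil counting -- sketch only
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, ~0u);

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                     // count back faces of the (capped) volumes...
glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);   // ...incrementing when the depth test fails
draw_shadow_volumes();

glCullFace(GL_BACK);
glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);   // decrement on depth fail for front faces
draw_shadow_volumes();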

William Bilodeau and Michael Songy discovered this technique in October 1998, and presented the technique at
Creativity, a Creative Labs developer's conference, in 1999. Sim Dietrich presented this technique at both GDC in
March 1999, and at Creativity in late 1999. A few months later, William Bilodeau and Michael Songy filed a US
patent application for the technique the same year, US 6384822 [2], entitled "Method for rendering shadows using a
shadow volume and a stencil buffer" issued in 2002. John Carmack of id Software independently discovered the
algorithm in 2000 during the development of Doom 3. Since he advertised the technique to the larger public, it is
often known as Carmack's Reverse.

Exclusive-or
Either of the above types may be approximated with an exclusive-or variation, which does not deal properly with
intersecting shadow volumes, but saves one rendering pass (if not fill time), and only requires a 1-bit stencil buffer.
The following steps are for the depth pass version:
1. Disable writes to the depth and color buffers.
2. Set the stencil operation to XOR on depth pass (flip on any shadow surface).
3. Render the shadow volumes.
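
In OpenGL terms the depth-pass XOR variation can be sketched with a single pass over the volume geometry, since GL_INVERT flips the stencil bits, which for a 1-bit buffer behaves as an exclusive-or (draw_shadow_volumes() is again a placeholder):

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, ~0u);
glDisable(GL_CULL_FACE);                   // front and back faces in one pass
glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);  // flip on any shadow surface that passes the depth test
draw_shadow_volumes();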

Optimization
One method of speeding up the shadow volume geometry calculations is to utilize existing parts of the rendering
pipeline to do some of the calculation. For instance, by using homogeneous coordinates, the w-coordinate may be
set to zero to extend a point to infinity. This should be accompanied by a viewing frustum that has a far clipping
plane that extends to infinity in order to accommodate those points, accomplished by using a specialized
projection matrix. This technique reduces the accuracy of the depth buffer slightly, but the difference is usually
negligible. See the 2002 paper Practical and Robust Stenciled Shadow Volumes for Hardware-Accelerated
Rendering [3], C. Everitt and M. Kilgard, for a detailed implementation; a sketch of such a projection matrix is given at the end of this section.
Rasterization time of the shadow volumes can be reduced by using an in-hardware scissor test to limit the
shadows to a specific onscreen rectangle.
NVIDIA has implemented a hardware capability called the depth bounds test [4] that is designed to remove parts
of shadow volumes that do not affect the visible scene. (This has been available since the GeForce FX 5900
model.) A discussion of this capability and its use with shadow volumes was presented at the Game Developers
Conference in 2005.
Since the depth-fail method only offers an advantage over depth-pass in the special case where the eye is within a
shadow volume, it is preferable to check for this case, and use depth-pass wherever possible. This avoids both the
unnecessary back-capping (and the associated rasterization) for cases where depth-fail is unnecessary, as well as
the problem of appropriately front-capping for special cases of depth-pass.
On more recent GPU pipelines, geometry shaders can be used to generate the shadow volumes.[5]
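
As a sketch of the projection matrix mentioned in the first point above: taking the limit of the standard OpenGL perspective matrix as the far plane goes to infinity leaves only the near distance in the third row. The function and parameter names below are illustrative, and column-major storage is assumed.

#include <math.h>

// Perspective projection with the far plane at infinity
// (limit of the usual frustum matrix as zFar -> infinity), column-major.
void infinite_perspective(float m[16], float fovy_radians, float aspect, float zNear)
{
    const float f = 1.0f / tanf(fovy_radians * 0.5f);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = -1.0f;          // lim (zFar + zNear) / (zNear - zFar) = -1
    m[11] = -1.0f;
    m[14] = -2.0f * zNear;  // lim 2 * zFar * zNear / (zNear - zFar) = -2 * zNear
}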


References
[1] Crow, Franklin C: "Shadow Algorithms for Computer Graphics", Computer Graphics (SIGGRAPH '77 Proceedings), vol. 11, no. 2, 242–248.
[2] http://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US6384822
[3] http://arxiv.org/abs/cs/0301002
[4] http://www.opengl.org/registry/specs/EXT/depth_bounds_test.txt
[5] http://web.archive.org/web/20110516024500/http://developer.nvidia.com/node/168

External links
The Theory of Stencil Shadow Volumes (http://www.gamedev.net/page/resources/_/technical/
graphics-programming-and-theory/the-theory-of-stencil-shadow-volumes-r1873)
The Mechanics of Robust Stencil Shadows (http://www.gamasutra.com/view/feature/2942/
the_mechanics_of_robust_stencil_.php)
An Introduction to Stencil Shadow Volumes (http://www.devmaster.net/articles/shadow_volumes)
Shadow Mapping and Shadow Volumes (http://www.devmaster.net/articles/shadow_techniques)
Stenciled Shadow Volumes in OpenGL (http://joshbeam.com/articles/stenciled_shadow_volumes_in_opengl/)
Volume shadow tutorial (http://web.archive.org/web/20110514001245/http://www.gamedev.net/reference/
articles/article2036.asp)
Fast shadow volumes (http://web.archive.org/web/20110515182521/http://developer.nvidia.com/object/
fast_shadow_volumes.html) at NVIDIA
Robust shadow volumes (http://developer.nvidia.com/object/robust_shadow_volumes.html) at NVIDIA
Advanced Stencil Shadow and Penumbral Wedge Rendering (http://www.terathon.com/gdc_lengyel.ppt)

Regarding depth-fail patents


"Creative Pressures id Software With Patents" (http://games.slashdot.org/story/04/07/28/1529222/
creative-pressures-id-software-with-patents). Slashdot. July 28, 2004. Retrieved 2006-05-16.
"Creative patents Carmack's reverse" (http://techreport.com/discussions.x/7113). The Tech Report. July 29,
2004. Retrieved 2006-05-16.
"Creative gives background to Doom III shadow story" (http://www.theinquirer.net/inquirer/news/1019517/
creative-background-doom-iii-shadow-story). The Inquirer. July 29, 2004. Retrieved 2006-05-16.


Silhouette edge
In computer graphics, a silhouette edge on a 3D body projected onto a 2D plane (display plane) is the collection of
points whose outward surface normal is perpendicular to the view vector. Due to discontinuities in the surface
normal, a silhouette edge is also an edge which separates a front facing face from a back facing face. Without loss of
generality, this edge is usually chosen to be the closest one on a face, so that in parallel view this edge corresponds to
the same one in a perspective view. Hence, if there is an edge between a front facing face and a side facing face, and
another edge between a side facing face and back facing face, the closer one is chosen. The easy example is looking
at a cube in the direction where the face normal is collinear with the view vector.
The first type of silhouette edge is sometimes troublesome to handle because it does not necessarily correspond to a
physical edge in the CAD model. The reason that this can be an issue is that a programmer might corrupt the original
model by introducing the new silhouette edge into the problem. Also, given that the edge strongly depends upon the
orientation of the model and view vector, this can introduce numerical instabilities into the algorithm (such as when
a trick like dilution of precision is considered).

Computation
To determine the silhouette edge of an object, we first have to know the plane equation of all faces. Then, by examining the sign of the point-plane distance from the light source to each face,

dist = a·x_L + b·y_L + c·z_L + d

(where a·x + b·y + c·z + d = 0 is the face's plane equation and (x_L, y_L, z_L) is the light position), we can determine if the face is front- or back-facing.
The silhouette edge(s) consist of all edges separating a front-facing face from a back-facing face.

Similar Technique
A convenient and practical implementation of front/back facing detection is to use the unit normal of the plane (which is commonly precomputed for lighting effects anyway), then simply applying the dot product of the light position to the plane's unit normal and adding the D component of the plane equation (a scalar value):

indicator = N · L + plane_D

Where plane_D follows from a point P on the plane dotted with the unit normal of the plane (so that N · X + plane_D = 0 for all points X on the plane):

plane_D = −(N · P)

Note: the homogeneous coordinates, L_w and d, are not always needed for this computation (in homogeneous form the indicator is N · L_xyz + plane_D · L_w).
After doing this calculation, you may notice that the indicator is actually the signed distance from the plane to the light position: it is negative if the light is behind the face, and positive if the light is in front of the face.

This is also the technique used in the 2002 SIGGRAPH paper, "Practical and Robust Stenciled Shadow Volumes for
Hardware-Accelerated Rendering"


External links
http://wheger.tripod.com/vhl/vhl.htm

Spectral rendering
In computer graphics, spectral rendering is a technique in which a scene's light transport is modeled over the whole span of wavelengths instead of R,G,B values (still relying on geometric optics, which ignores wave phase). The motivation is that the real colors of the physical world are spectra; trichromatic colors are only inherent to the human visual system. Many phenomena are poorly represented through trichromacy:
A "green" light can have a spectrum with a peak in green, or peaks at yellow and blue. Indeed, plenty of different spectra are perceived as equivalent.
Similarly for a transparent sheet that filters white light into "green".
Similarly for a surface coating that reflects white light as "green". Still, the reflection and filtering of the light happens per wavelength.
Similarly for a series of filters, for multiple scattering in a transparent volume, or for multiple reflections between coated surfaces. Thick layers of colored volumes, as well as light accumulated by repeated reflection at the corner of two bright coated walls, tend to "purify" the material spectrum towards its peaks, since the result is a power of the base spectrum with the number of bounces. Thus, how "green" fluids or walls should look in these conditions is undetermined without a spectrum.
The refractive index is wavelength-dependent, which means that white rays are decomposed into rays going in different directions depending on the wavelength. Producing a rainbow caustic or arc requires managing wavelengths.
As an example, certain properties of tomatoes make them appear differently under sunlight than under fluorescent light. Using the blackbody radiation equations to simulate sunlight, or the emission spectrum of a fluorescent bulb, in combination with the tomato's spectral reflectance curve, more accurate images of each scenario can be produced.
Chlorophyll, paint pigments and some bricks are often strongly wavelength-dependent. Besides the "color temperature" of incandescent light, LED, fluorescent and compact fluorescent lights have "pathological" spectra made of isolated peaks, which thus interact with material colors very differently than sunlight does. Hence the importance of accurate simulation for architectural, museographic or even night-driving simulation.
Some specific aspects can be dealt with using local evaluations (local light–material interaction), approximations or interpolations (e.g., refraction). Still, the base method consists of replacing the R,G,B triple with a larger sampling of wavelengths, or with randomly chosen photon wavelengths. This process is thus a lot slower than classical trichromatic rendering. Spectral rendering is often used in ray tracing or photon mapping to more accurately simulate scenes with demanding coating materials and lighting characteristics, often for comparison with an actual photograph to test the rendering algorithm (as in a Cornell box) or to simulate different portions of the electromagnetic spectrum for the purpose of scientific work.
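
As a sketch of the basic idea, the following C routine integrates a sampled illuminant spectrum, multiplied wavelength-wise by a sampled surface reflectance, against the CIE 1931 color matching functions to obtain an XYZ tristimulus value. All names are illustrative, and the matching-function samples are assumed to come from published CIE tables.

// illuminant[i], reflectance[i], xbar[i], ybar[i], zbar[i] are samples taken at
// the same n wavelengths, spaced delta_lambda nanometres apart.
void spectrum_to_xyz(const float *illuminant, const float *reflectance,
                     const float *xbar, const float *ybar, const float *zbar,
                     int n, float delta_lambda, float xyz[3])
{
    xyz[0] = xyz[1] = xyz[2] = 0.0f;
    for (int i = 0; i < n; ++i) {
        float radiance = illuminant[i] * reflectance[i];  // wavelength-wise interaction
        xyz[0] += radiance * xbar[i] * delta_lambda;
        xyz[1] += radiance * ybar[i] * delta_lambda;
        xyz[2] += radiance * zbar[i] * delta_lambda;
    }
    // The XYZ value would then be converted to R,G,B for display.
}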


Implementations
For example, Arion,[1] FluidRay[2] fryrender,[3] Indigo Renderer,[4] LuxRender,[5] mental ray,[6] Mitsuba,[7] Octane
Render,[8] Spectral Studio,[9] Thea Render[10] and Ocean[11] describe themselves as spectral renderers.

References
[1] http://www.randomcontrol.com/arion-tech-specs
[2] http://www.fluidray.com/features
[3] http://www.randomcontrol.com/fryrender-tech-specs
[4] http://www.indigorenderer.com/features/technical
[5] http://www.luxrender.net/wiki/Features#Physically_based.2C_spectral_rendering
[6] http://www.mentalimages.com/products/mental-ray/about-mental-ray/features.html
[7] http://www.mitsuba-renderer.org/index.html
[8] http://render.otoy.com/features.php
[9] http://www.spectralpixel.com/index.php/features
[10] http://www.thearender.com/cms/index.php/features/tech-tour/37.html
[11] http://www.eclat-digital.com/spectral-rendering/

External links
Cornell Box photo comparison (http://www.graphics.cornell.edu/online/box/compare.html)

Specular highlight
A specular highlight is the bright spot of light that appears on shiny
objects when illuminated (for example, see image at right). Specular
highlights are important in 3D computer graphics, as they provide a
strong visual cue for the shape of an object and its location with respect
to light sources in the scene.

Microfacets
Specular highlights on a pair of spheres.

The term specular means that light is perfectly reflected in a mirror-like way from the light source to the viewer. Specular reflection is visible only where the surface normal is oriented precisely halfway between the direction of incoming light and the direction of the viewer; this is called the half-angle direction because it bisects (divides into halves) the angle between the incoming light and the viewer. Thus, a specularly reflecting surface would show a specular highlight as the perfectly sharp reflected image of a light source.
However, many shiny objects show blurred specular highlights.
This can be explained by the existence of microfacets. We assume that surfaces that are not perfectly smooth are
composed of many very tiny facets, each of which is a perfect specular reflector. These microfacets have normals
that are distributed about the normal of the approximating smooth surface. The degree to which microfacet normals
differ from the smooth surface normal is determined by the roughness of the surface. At points on the object where
the smooth normal is close to the half-angle direction, many of the microfacets point in the half-angle direction and
so the specular highlight is bright. As one moves away from the center of the highlight, the smooth normal and the
half-angle direction get farther apart; the number of microfacets oriented in the half-angle direction falls, and so the
intensity of the highlight falls off to zero.


The specular highlight often reflects the color of the light source, not the color of the reflecting object. This is
because many materials have a thin layer of clear material above the surface of the pigmented material. For example,
plastic is made up of tiny beads of color suspended in a clear polymer, and human skin often has a thin layer of oil or
sweat above the pigmented cells. Such materials will show specular highlights in which all parts of the color
spectrum are reflected equally. On metallic materials such as gold the color of the specular highlight will reflect the
color of the material.

Models of microfacets
A number of different models exist to predict the distribution of microfacets. Most assume that the microfacet
normals are distributed evenly around the normal; these models are called isotropic. If microfacets are distributed
with a preference for a certain direction along the surface, the distribution is anisotropic.
NOTE: In most equations, when it says (A · B) it means max(0, A · B).

Phong distribution
In the Phong reflection model, the intensity of the specular highlight is calculated as:

Where R is the mirror reflection of the light vector off the surface, and V is the viewpoint vector.
In the BlinnPhong shading model, the intensity of a specular highlight is calculated as:

Where N is the smooth surface normal and H is the half-angle direction (the direction vector midway between L, the
vector to the light, and V, the viewpoint vector).
The number n is called the Phong exponent, and is a user-chosen value that controls the apparent smoothness of the
surface. These equations imply that the distribution of microfacet normals is an approximately Gaussian distribution
(for large ), or approximately Pearson type II distribution, of the corresponding angle.[1] While this is a useful
heuristic and produces believable results, it is not a physically based model.
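
As a small illustration, a sketch in C of the Blinn–Phong term above; the input vectors are assumed to be normalized, and the clamping mirrors the note at the start of this section.

#include <math.h>

static float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Blinn-Phong specular intensity: (N . H)^n with H = normalize(L + V).
float blinn_phong_specular(const float N[3], const float L[3], const float V[3], float n)
{
    float H[3] = { L[0] + V[0], L[1] + V[1], L[2] + V[2] };
    float len = sqrtf(dot3(H, H));
    if (len == 0.0f) return 0.0f;
    H[0] /= len; H[1] /= len; H[2] /= len;

    float ndoth = dot3(N, H);
    if (ndoth < 0.0f) ndoth = 0.0f;   // clamp negative values, as noted above
    return powf(ndoth, n);
}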
An equivalent formulation computes the highlight from the reflected eye vector instead of the reflected light vector: with E the normalized eye (view) vector, N the unit surface normal and L the normalized light vector, the eye reflection vector is R = 2 (N · E) N − E, and the intensity is taken as (L · R)^n. Reflecting E and comparing it with L gives the same result as reflecting L and comparing it with V.

Gaussian distribution
A slightly better model of microfacet distribution can be created using a Gaussian distribution.[citation needed] The usual function calculates specular highlight intensity as:

k_spec = e^(−(β / m)²)

where β is the angle between N and H, and m is a constant between 0 and 1 that controls the apparent smoothness of the surface.[2]

Beckmann distribution
A physically based model of microfacet distribution is the Beckmann distribution:[3]

k_spec = exp(−tan²(α) / m²) / (π m² cos⁴(α)),   where α = arccos(N · H)

and m is the rms slope of the surface microfacets (the roughness of the material).[4] Compared to the empirical models above, this function "gives the absolute magnitude of the reflectance without introducing arbitrary constants; the disadvantage is that it requires more computation". However, this model can be simplified, since for small α the factor exp(−tan²(α)/m²) is close to the Gaussian exp(−(α/m)²). Also note that the product of (N · H) and a surface distribution function is normalized over the half-sphere, which is obeyed by this function.

Heidrich–Seidel anisotropic distribution

The Heidrich–Seidel distribution is a simple anisotropic distribution, based on the Phong model. It can be used to model surfaces that have small parallel grooves or fibers, such as brushed metal, satin, and hair. The specular highlight intensity for this distribution is:

k_spec = [ sqrt(1 − (L · T)²) sqrt(1 − (V · T)²) − (L · T)(V · T) ]^n

where n is the anisotropic exponent, V is the viewing direction, L is the direction of incoming light, and T is the direction parallel to the grooves or fibers at this point on the surface. If you have a unit vector D which specifies the global direction of the anisotropic distribution, you can compute the vector T at a given point by the following:

T = D − (D · N) N   (then normalized)

where N is the unit normal vector at that point on the surface. You can also easily compute the cosine of the angle between the vectors by using a property of the dot product (cos = L · T) and the sine of the angle by using the trigonometric identities (sin = sqrt(1 − (L · T)²)).

The anisotropic k_spec should be used in conjunction with a non-anisotropic distribution like a Phong distribution to produce the correct specular highlight.

Ward anisotropic distribution

The Ward anisotropic distribution[5] uses two user-controllable parameters α_x and α_y to control the anisotropy. If the two parameters are equal, then an isotropic highlight results. The specular term in the distribution is:

k_spec = (N · L) / sqrt((N · L)(N · R)) · 1 / (4π α_x α_y) · exp(−2 [((H · X) / α_x)² + ((H · Y) / α_y)²] / (1 + H · N))

The specular term is zero if N · L < 0 or N · R < 0. All vectors are unit vectors. The vector R is the mirror reflection of
the light vector off the surface, L is the direction from the surface point to the light, H is the half-angle direction, N is


the surface normal, and X and Y are two orthogonal vectors in the normal plane which specify the anisotropic
directions.

Cook–Torrance model

The Cook–Torrance model uses a specular term of the form

k_spec = (D F G) / (4 (E · N)(N · L))

Here D is the Beckmann distribution factor as above and F is the Fresnel term. For performance reasons, in real-time 3D graphics Schlick's approximation is often used to approximate the Fresnel term. G is the geometric attenuation term, describing self-shadowing due to the microfacets, and is of the form

G = min( 1, 2 (H · N)(E · N) / (E · H), 2 (H · N)(L · N) / (E · H) )

In these formulas E is the vector to the camera or eye, H is the half-angle vector, L is the vector to the light source, N is the normal vector, and α is the angle between H and N.

Using multiple distributions


If desired, different distributions (usually, using the same distribution function with different values of m or n) can be
combined using a weighted average. This is useful for modelling, for example, surfaces that have small smooth and
rough patches rather than uniform roughness.

References
[1] Richard Lyon, "Phong Shading Reformulation for Hardware Renderer Simplification", Apple Technical Report #43, Apple Computer, Inc. 1993 PDF (http://dicklyon.com/tech/Graphics/Phong_TR-Lyon.pdf)
[2] Glassner, Andrew S. (ed). An Introduction to Ray Tracing. San Diego: Academic Press Ltd, 1989. p. 148.
[3] Petr Beckmann, André Spizzichino, The scattering of electromagnetic waves from rough surfaces, Pergamon Press, 1963, 503 pp (Republished by Artech House, 1987, ISBN 978-0-89006-238-8).
[4] Foley et al. Computer Graphics: Principles and Practice. Menlo Park: Addison-Wesley, 1997. p. 764.
[5] http://radsite.lbl.gov/radiance/papers/


Specularity
Specularity is the visual appearance of specular reflections. In computer graphics, it refers to the quantity used in three-dimensional (3D) rendering that represents the amount of specular reflectivity a surface has. It is a key component in determining the brightness of specular highlights, along with shininess, which determines the size of the highlights.
It is frequently used in real-time computer graphics where the
mirror-like specular reflection of light from other surfaces is often
ignored (due to the more intensive computations required to calculate
it), and the specular reflection of light directly from point light sources
is modelled as specular highlights.
Specular highlights on a pair of spheres


Sphere mapping
In computer graphics, sphere mapping (or spherical environment mapping) is a type of reflection mapping that
approximates reflective surfaces by considering the environment to be an infinitely far-away spherical wall. This
environment is stored as a texture depicting what a mirrored sphere would look like if it were placed into the
environment, using an orthographic projection (as opposed to one with perspective). This texture contains reflective
data for the entire environment, except for the spot directly behind the sphere. (For one example of such an object,
see Escher's drawing Hand with Reflecting Sphere.)
To use this data, the surface normal of the object, view direction from the object to the camera, and/or reflected
direction from the object to the environment is used to calculate a texture coordinate to look up in the
aforementioned texture map. The result appears like the environment is reflected in the surface of the object that is
being rendered.

Usage example
In the simplest case for generating texture coordinates, suppose:
The map has been created as above, looking at the sphere along the z-axis.
The texture coordinate of the center of the map is (0,0), and the sphere's image has radius 1.
We are rendering an image in the same exact situation as the sphere, but the sphere has been replaced with a
reflective object.
The image being created is orthographic, or the viewer is infinitely far away, so that the view direction does not
change as one moves across the image.
At texture coordinate (x, y), note that the depicted location on the sphere is (x, y, z) (where z is sqrt(1 − x² − y²)), and the normal at that location is also (x, y, z). However, we are given the reverse task (a normal for which we need to produce a texture map coordinate). So the texture coordinate corresponding to normal (n_x, n_y, n_z) is simply (n_x, n_y).
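
Beyond this simplified setup, a common way to compute the lookup coordinates from a per-pixel reflection vector r is the formula used by OpenGL's GL_SPHERE_MAP texture-coordinate generation; a sketch in C:

#include <math.h>

// Sphere-map texture coordinates (s, t) from a reflection vector r,
// assuming the map was captured looking down the negative z-axis.
void sphere_map_coords(const float r[3], float *s, float *t)
{
    float m = 2.0f * sqrtf(r[0]*r[0] + r[1]*r[1] + (r[2] + 1.0f)*(r[2] + 1.0f));
    *s = r[0] / m + 0.5f;
    *t = r[1] / m + 0.5f;
}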


Stencil buffer
A stencil buffer is an extra buffer, in addition to the
color buffer (pixel buffer) and depth buffer
(z-buffering) found on modern graphics hardware. The
buffer is per pixel, and works on integer values, usually
with a depth of one byte per pixel. The depth buffer and
stencil buffer often share the same area in the RAM of
the graphics hardware.
In the simplest case, the stencil buffer is used to limit
the area of rendering (stenciling). More advanced usage
of the stencil buffer makes use of the strong connection
between the depth buffer and the stencil buffer in the
rendering pipeline. For example, stencil values can be
automatically increased/decreased for every pixel that
fails or passes the depth test.

In this program the stencil buffer is filled with 1s wherever a white stripe is drawn and 0s elsewhere. Two versions of each oval, square, or triangle are then drawn: a black shape where the stencil buffer is 1, and a white shape where the buffer is 0.

The simple combination of depth test and stencil modifiers makes a vast number of effects possible (such as shadows, outline drawing or highlighting of intersections between complex primitives), though they often require several rendering passes and, therefore, can put a heavy load on the graphics hardware.
The most typical application is still to add shadows to 3D applications. It is also used for planar reflections.
Other rendering techniques, such as portal rendering, use the stencil buffer in other ways; for example, it can be used
to find the area of the screen obscured by a portal and re-render those pixels correctly.
The stencil buffer and its modifiers can be accessed in computer graphics APIs like OpenGL and Direct3D.

OpenGL
glEnable(GL_STENCIL_TEST);         // not enabled by default
glStencilMask(stencilMask);        // allow writing to the stencil buffer; by default (0xFF) no mask
glClearStencil(clearStencilValue); // clear value for the stencil buffer, 0 by default
glStencilFunc(func, ref, mask);    // by default GL_ALWAYS, 0, 0xFF: always pass the stencil test
glStencilOp(fail, zfail, zpass);   // by default GL_KEEP, GL_KEEP, GL_KEEP: don't change the stencil buffer
glClear(GL_STENCIL_BUFFER_BIT);    // clear the stencil buffer, filling it with (clearStencilValue & stencilMask)

Test: (ref & mask) func (stencilValue & mask)
Depending on the three possible outcomes of the stencil function / depth function:

1. Stencil test fails:
If, say, func is GL_NEVER, the stencil test will always fail. Neither the color nor the depth buffer is modified. The stencil buffer is modified as per the glStencilOp fail argument. If, say, glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP), then GL_REPLACE takes place and
stencilValue = (ref & stencilMask) // will become ref

2. Stencil test passes, depth test fails:
If, say, func is GL_ALWAYS, the stencil test will always pass, but the depth test may fail. Neither the color nor the depth buffer is modified. The stencil buffer is modified as per the glStencilOp zfail argument. If, say, glStencilOp(GL_KEEP, GL_INCR, GL_KEEP), then GL_INCR takes place and
stencilValue = (stencilValue + 1) // will become 1

3. Stencil test passes, depth test passes:
If, say, func is GL_ALWAYS, the stencil test will always pass; the depth test may also pass. Both the color and the depth buffer are modified. The stencil buffer is modified as per the glStencilOp zpass argument. If, say, glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP), then the stencil values are not changed; only the color and depth buffers are modified.

Typically the stencil buffer is initialized by setting the depth buffer and color buffer masks to false, and then writing the appropriate ref value to the stencil buffer by failing the stencil test every time.
// disable color and depth buffers
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_NEVER, 1, 0xFF);          // never pass the stencil test
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP); // replace stencil buffer values with ref = 1
glStencilMask(0xFF);                       // stencil buffer free to write
glClear(GL_STENCIL_BUFFER_BIT);            // first clear the stencil buffer by writing the default stencil value (0) everywhere
draw_stencil_shape();                      // at the stencil shape's pixel locations, stencil values are replaced with ref = 1

Now use the initialized stencil buffer and the stencil test to write only in the locations where the stencil value is 1:

// enable color and depth buffers
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilMask(0x00);                       // no more modifying of the stencil buffer on stencil and depth pass
                                           // (can also be achieved with glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP))
glStencilFunc(GL_EQUAL, 1, 0xFF);          // only pass the stencil test where stencilValue == 1
draw_actual_content();                     // assuming the depth test passes, actual content is written to the depth
                                           // and color buffers only at the stencil shape locations


Stencil codes
Stencil codes are a class of iterative kernels[1] which update array
elements according to some fixed pattern, called the stencil.[2] They are most commonly found in the codes of computer simulations, e.g. for computational fluid dynamics in the context of scientific and engineering applications. Other notable examples include solving partial differential equations, the Jacobi kernel, the Gauss–Seidel method, image processing and cellular automata.[3] The regular structure of the arrays sets stencil codes apart from other modeling methods such as the finite element method. Most finite difference
codes which operate on regular grids can be formulated as stencil
codes.

Definition

The shape of a 6-point 3D von Neumann style stencil.

Stencil codes perform a sequence of sweeps (called timesteps) through a given array. Generally this is a 2- or 3-dimensional regular grid. The
elements of the arrays are often referred to as cells. In each timestep, the stencil code updates all array elements.
Using neighboring array elements in a fixed pattern (called the stencil), each cell's new value is computed. In most
cases boundary values are left unchanged, but in some cases (e.g. LBM codes) those need to be adjusted during the
course of the computation as well. Since the stencil is the same for each element, the pattern of data accesses is
repeated.[4]

More formally, we may define stencil codes as a 5-tuple (I, S, S_0, s, T) with the following meaning:

I = [0, ..., n_1] × ... × [0, ..., n_k] is the index set. It defines the topology of the array.
S is the (not necessarily finite) set of states, one of which each cell may take on on any given timestep.
S_0: I → S defines the initial state of the system at time 0.
s ∈ (Z^k)^l is the stencil itself and describes the actual shape of the neighborhood. (There are l elements in the stencil.)
T: S^l → S is the transition function which is used to determine a cell's new state, depending on its neighbors.

Since I is a k-dimensional integer interval, the array will always have the topology of a finite regular grid. The array is also called the simulation space and individual cells are identified by their index c ∈ I. The stencil is an ordered set of l relative coordinates. We can now obtain for each cell c the tuple of its neighbors' indices

I_c = { c + s_j | s_j ∈ s }

Their states are given by mapping the tuple I_c to the corresponding tuple of states N_t(c), where N_t is defined as follows:

N_t(c) = ( S_t(j) | j ∈ I_c )

This is all we need to define the system's state for the following time steps S_{t+1}: I → S with

S_{t+1}(c) = T(N_t(c)) for all c ∈ I

Note that S_{t+1} is defined on I and not just on the interior of I, since the boundary conditions need to be set, too. Sometimes the elements of I_c may be defined by a vector addition modulo the simulation space's dimension to realize toroidal topologies:

I_c = { ((c + s_j) mod (n_1, ..., n_k)) | s_j ∈ s }

This may be useful for implementing periodic boundary conditions, which simplifies certain physical models.

Example: 2D Jacobi iteration


To illustrate the formal definition, we'll have a look at how a two-dimensional Jacobi iteration can be defined. The update function computes the arithmetic mean of a cell's four neighbors. In this case we set off with an initial solution of 0. The left and right boundaries are fixed at 1, while the upper and lower boundaries are set to 0. After a sufficient number of iterations, the system converges towards a saddle shape.

Data dependencies of a selected cell in the 2D array.

2D Jacobi iteration on a 2D array.
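
A minimal sketch in C of one timestep of this 2D Jacobi iteration; the array layout and function name are illustrative, not taken from any particular library.

// One Jacobi sweep over an nx-by-ny grid stored row-major in src, writing the
// result to dst. Boundary cells are copied unchanged (fixed boundary values).
void jacobi_step(const double *src, double *dst, int nx, int ny)
{
    for (int y = 0; y < ny; ++y) {
        for (int x = 0; x < nx; ++x) {
            int i = y * nx + x;
            if (x == 0 || y == 0 || x == nx - 1 || y == ny - 1) {
                dst[i] = src[i];                              // boundary condition
            } else {
                dst[i] = 0.25 * (src[i - 1] + src[i + 1] +    // left, right neighbors
                                 src[i - nx] + src[i + nx]);  // upper, lower neighbors
            }
        }
    }
}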

Stencils
The shape of the neighborhood used during the updates depends on the application itself. The most common stencils
are the 2D or 3D versions of the von Neumann neighborhood and Moore neighborhood. The example above uses a
2D von Neumann stencil while LBM codes generally use its 3D variant. Conway's Game of Life uses the 2D Moore
neighborhood. That said, other stencils such as a 25-point stencil for seismic wave propagation[5] can be found, too.

9-point 2D stencil

5-point 2D stencil

6-point 3D stencil


25-point 3D stencil
A selection of stencils used in various scientific applications.

Implementation issues
Many simulation codes may be formulated naturally as stencil codes. Since computing time and memory
consumption grow linearly with the number of array elements, parallel implementations of stencil codes are of
paramount importance to research.[6] This is challenging since the computations are tightly coupled (because of the
cell updates depending on neighboring cells) and most stencil codes are memory bound (i.e. the ratio of memory
accesses to calculations is high).[7] Virtually all current parallel architectures have been explored for executing
stencil codes efficiently;[8] at the moment GPGPUs have proven to be most efficient.[9]

Libraries
Due to both the importance of stencil codes to computer simulations and their high computational requirements,
there are a number of efforts which aim at creating reusable libraries to support scientists in implementing new
stencil codes. The libraries are mostly concerned with the parallelization, but may also tackle other challenges, such
as IO, steering and checkpointing. They may be classified by their API.

Patch-based libraries
This is a traditional design. The library manages a set of n-dimensional scalar arrays, which the user code may access
to perform updates. The library handles the synchronization of the boundaries (dubbed ghost zone or halo). The
advantage of this interface is that the user code may loop over the arrays, which makes it easy to integrate legacy
codes.[10] The disadvantage is that the library cannot handle cache blocking (as this has to be done within the
loops[11]) or wrapping of the code for accelerators (e.g. via CUDA or OpenCL). Notable implementations include
Cactus [12], a physics problem solving environment, and waLBerla [13].

Cell-based libraries
These libraries move the interface to updating single simulation cells: only the current cell and its neighbors are
exposed to the user code, e.g. via getter/setter methods. The advantage of this approach is that the library can control
tightly which cells are updated in which order, which is useful not only to implement cache blocking, but also to run
the same code on multi-cores and GPUs.[14] This approach requires the user to recompile his source code together
with the library. Otherwise a function call for every cell update would be required, which would seriously impair
performance. This is only feasible with techniques such as class templates or metaprogramming, which is also the
reason why this design is only found in newer libraries. Examples are Physis [15] and LibGeoDecomp [16].


References
[1] Roth, Gerald et al. (1997) Proceedings of SC'97: High Performance Networking and Computing. Compiling Stencils in High Performance Fortran. (http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.1505)
[2] Sloot, Peter M.A. et al. (May 28, 2002) Computational Science – ICCS 2002: International Conference, Amsterdam, The Netherlands, April 21–24, 2002. Proceedings, Part I. (http://books.google.com/books?id=qVcLw1UAFUsC&pg=PA843&dq=stencil+array&sig=g3gYXncOThX56TUBfHE7hnlSxJg#PPA843,M1) Page 843. Publisher: Springer. ISBN 3-540-43591-3.
[3] Fey, Dietmar et al. (2010) Grid-Computing: Eine Basistechnologie für Computational Science (http://books.google.com/books?id=RJRZJHVyQ4EC&pg=PA51&dq=fey+grid&hl=de&ei=uGk8TtDAAo_zsgbEoZGpBQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCoQ6AEwAA#v=onepage&q&f=true). Page 439. Publisher: Springer. ISBN 3-540-79746-7
[4] Yang, Laurence T.; Guo, Minyi. (August 12, 2005) High-Performance Computing: Paradigm and Infrastructure. (http://books.google.com/books?id=qA4DbnFB2XcC&pg=PA221&dq=Stencil+codes&as_brr=3&sig=H8wdKyABXT5P7kUh4lQGZ9C5zDk) Page 221. Publisher: Wiley-Interscience. ISBN 0-471-65471-X
[5] Micikevicius, Paulius et al. (2009) 3D finite difference computation on GPUs using CUDA (http://portal.acm.org/citation.cfm?id=1513905), Proceedings of the 2nd Workshop on General Purpose Processing on Graphics Processing Units. ISBN 978-1-60558-517-8
[6] Datta, Kaushik (2009) Auto-tuning Stencil Codes for Cache-Based Multicore Platforms (http://www.cs.berkeley.edu/~kdatta/pubs/EECS-2009-177.pdf), Ph.D. Thesis
[7] Wellein, G et al. (2009) Efficient temporal blocking for stencil computations by multicore-aware wavefront parallelization (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5254211), 33rd Annual IEEE International Computer Software and Applications Conference, COMPSAC 2009
[8] Datta, Kaushik et al. (2008) Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures (http://portal.acm.org/citation.cfm?id=1413375), SC '08 Proceedings of the 2008 ACM/IEEE conference on Supercomputing
[9] Schäfer, Andreas and Fey, Dietmar (2011) High Performance Stencil Code Algorithms for GPGPUs (http://www.sciencedirect.com/science/article/pii/S1877050911002791), Proceedings of the International Conference on Computational Science, ICCS 2011
[10] S. Donath, J. Götz, C. Feichtinger, K. Iglberger and U. Rüde (2010) waLBerla: Optimization for Itanium-based Systems with Thousands of Processors (http://www.springerlink.com/content/p2583237l2187374/), High Performance Computing in Science and Engineering, Garching/Munich 2009
[11] Nguyen, Anthony et al. (2010) 3.5-D Blocking Optimization for Stencil Computations on Modern CPUs and GPUs (http://dl.acm.org/citation.cfm?id=1884658), SC '10 Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis
[12] http://cactuscode.org/
[13] http://www10.informatik.uni-erlangen.de/Research/Projects/walberla/description.shtml
[14] Naoya Maruyama, Tatsuo Nomura, Kento Sato, and Satoshi Matsuoka (2011) Physis: An Implicitly Parallel Programming Model for Stencil Computations on Large-Scale GPU-Accelerated Supercomputers, SC '11 Proceedings of the 2011 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis
[15] https://github.com/naoyam/physis
[16] http://www.libgeodecomp.org

External links
Physis (https://github.com/naoyam/physis)
LibGeoDecomp (http://www.libgeodecomp.org)


Subdivision surface
A subdivision surface, in the field of 3D computer graphics, is a method of representing a smooth surface via the
specification of a coarser piecewise linear polygon mesh. The smooth surface can be calculated from the coarse mesh
as the limit of a recursive process of subdividing each polygonal face into smaller faces that better approximate the
smooth surface.

Overview
Subdivision surfaces are defined recursively. The process starts with a
given polygonal mesh. A refinement scheme is then applied to this
mesh. This process takes that mesh and subdivides it, creating new
vertices and new faces. The positions of the new vertices in the mesh
are computed based on the positions of nearby old vertices. In some
refinement schemes, the positions of old vertices might also be altered
(possibly based on the positions of new vertices).
This process produces a denser mesh than the original one, containing
more polygonal faces. This resulting mesh can be passed through the
same refinement scheme again and so on.
The limit subdivision surface is the surface produced from this process
being iteratively applied infinitely many times. In practical use
however, this algorithm is only applied a limited number of times. The
limit surface can also be calculated directly for most subdivision
surfaces using the technique of Jos Stam, which eliminates the need for
recursive refinement. Subdivision surfaces and T-Splines are
competing technologies. Mathematically, subdivision surfaces are
spline surfaces with singularities.

First three steps of Catmull–Clark subdivision of a cube, with the subdivision surface below.

Refinement schemes
Subdivision surface refinement schemes can be broadly classified into two categories: interpolating and
approximating. Interpolating schemes are required to match the original position of vertices in the original mesh.
Approximating schemes are not; they can and will adjust these positions as needed. In general, approximating
schemes have greater smoothness, but editing applications that allow users to set exact surface constraints require an
optimization step. This is analogous to spline surfaces and curves, where Bzier splines are required to interpolate
certain control points (namely the two end-points), while B-splines are not.
There is another division in subdivision surface schemes as well, the type of polygon that they operate on. Some
function for quadrilaterals (quads), while others operate on triangles.

Subdivision surface

Approximating schemes
Approximating means that the limit surfaces approximate the initial meshes and that after subdivision, the newly
generated control points are not in the limit surfaces. Examples of approximating subdivision schemes are:
Catmull–Clark (1978) generalized bi-cubic uniform B-spline to produce their subdivision scheme. For arbitrary initial meshes, this scheme generates limit surfaces that are C2 continuous everywhere except at extraordinary vertices where they are C1 continuous (Peters and Reif 1998).
Doo–Sabin - The second subdivision scheme was developed by Doo and Sabin (1978) who successfully extended Chaikin's corner-cutting method for curves to surfaces. They used the analytical expression of bi-quadratic uniform B-spline surface to generate their subdivision procedure to produce C1 limit surfaces with arbitrary topology for arbitrary initial meshes.
Loop, Triangles - Loop (1987) proposed his subdivision scheme based on a quartic box-spline of six direction vectors to provide a rule to generate C2 continuous limit surfaces everywhere except at extraordinary vertices where they are C1 continuous.
Mid-Edge subdivision scheme - The mid-edge subdivision scheme was proposed independently by Peters–Reif (1997) and Habib–Warren (1999). The former used the midpoint of each edge to build the new mesh. The latter used a four-directional box spline to build the scheme. This scheme generates C1 continuous limit surfaces on initial meshes with arbitrary topology.
√3 subdivision scheme - This scheme has been developed by Kobbelt (2000): it handles arbitrary triangular meshes, it is C2 continuous everywhere except at extraordinary vertices where it is C1 continuous and it offers a natural adaptive refinement when required. It exhibits at least two specificities: it is a dual scheme for triangle meshes and it has a slower refinement rate than primal ones.

Interpolating schemes
After subdivision, the control points of the original mesh and the new generated control points are interpolated on the
limit surface. The earliest work was the butterfly scheme by Dyn, Levin and Gregory (1990), who extended the
four-point interpolatory subdivision scheme for curves to a subdivision scheme for surfaces. Zorin, Schröder and
Sweldens (1996) noticed that the butterfly scheme cannot generate smooth surfaces for irregular triangle meshes and
thus modified this scheme. Kobbelt (1996) further generalized the four-point interpolatory subdivision scheme for
curves to the tensor product subdivision scheme for surfaces.
Butterfly, Triangles - named after the scheme's shape
Mid-edge, Quads
Kobbelt, Quads - a variational subdivision method that tries to overcome uniform subdivision drawbacks

Editing a subdivision surface


Subdivision surfaces can be naturally edited at different levels of subdivision. Starting with basic shapes you can use
binary operators to create the correct topology. Then edit the coarse mesh to create the basic shape, then edit the
offsets for the next subdivision step, then repeat this at finer and finer levels. You can always see how your edits
affect the limit surface via GPU evaluation of the surface.
A surface designer may also start with a scanned in object or one created from a NURBS surface. The same basic
optimization algorithms are used to create a coarse base mesh with the correct topology and then add details at each
level so that the object may be edited at different levels. These types of surfaces may be difficult to work with
because the base mesh does not have control points in the locations that a human designer would place them. With a
scanned object this surface is easier to work with than a raw triangle mesh, but a NURBS object probably had well
laid out control points which behave less intuitively after the conversion than before.

208

Subdivision surface

Key developments
1978: Subdivision surfaces were discovered simultaneously by Edwin Catmull and Jim Clark (see Catmull–Clark subdivision surface). In the same year, Daniel Doo and Malcolm Sabin published a paper building on this work (see Doo–Sabin subdivision surface).
1995: Ulrich Reif characterized subdivision surfaces near extraordinary vertices by treating them as splines with singularities.
1998: Jos Stam contributed a method for exact evaluation for Catmull–Clark and Loop subdivision surfaces under
arbitrary parameter values.[3]

References
Peters, J.; Reif, U. (October 1997). "The simplest subdivision scheme for smoothing polyhedra". ACM Transactions on Graphics 16 (4): 420–431. doi:10.1145/263834.263851 (http://dx.doi.org/10.1145/263834.263851).
Habib, A.; Warren, J. (May 1999). "Edge and vertex insertion for a class of C1 subdivision surfaces". Computer Aided Geometric Design 16 (4): 223–247. doi:10.1016/S0167-8396(98)00045-4 (http://dx.doi.org/10.1016/S0167-8396(98)00045-4).
Kobbelt, L. (2000). "√3-subdivision". Proceedings of the 27th annual conference on Computer graphics and interactive techniques – SIGGRAPH '00. pp. 103–112. doi:10.1145/344779.344835 (http://dx.doi.org/10.1145/344779.344835). ISBN 1-58113-208-5.

External links
Resources about Subdivisions (http://www.subdivision.org)
Geri's Game (http://www.pixar.com/shorts/gg/theater/index.html) : Oscar winning animation by Pixar
completed in 1997 that introduced subdivision surfaces (along with cloth simulation)
Subdivision for Modeling and Animation (http://www.multires.caltech.edu/pubs/sig99notes.pdf) tutorial,
SIGGRAPH 1999 course notes
Subdivision for Modeling and Animation (http://www.mrl.nyu.edu/dzorin/sig00course/) tutorial,
SIGGRAPH 2000 course notes
Subdivision of Surface and Volumetric Meshes (http://www.hakenberg.de/subdivision/ultimate_consumer.
htm), software to perform subdivision using the most popular schemes
Surface Subdivision Methods in CGAL, the Computational Geometry Algorithms Library (http://www.cgal.
org/Pkg/SurfaceSubdivisionMethods3)
Surface and Volumetric Subdivision Meshes, hierarchical/multiresolution data structures in CGoGN (http://
cgogn.unistra.fr)
Modified Butterfly method implementation in C++ (https://bitbucket.org/rukletsov/b)


Subsurface scattering
Subsurface scattering (or SSS), also
known as subsurface light transport
(SSLT),[1] is a mechanism of light
transport in which light penetrates the
surface of a translucent object, is
scattered by interacting with the
material, and exits the surface at a
different point. The light will generally
penetrate the surface and be reflected a
number of times at irregular angles
inside the material, before passing
back out of the material at an angle
other than the angle it would have if it
had been reflected directly off the
surface. Subsurface scattering is
important in 3D computer graphics,
being necessary for the realistic
rendering of materials such as marble,
skin, leaves, wax and milk.

Direct surface scattering (left), plus subsurface scattering (middle), create the final
image on the right.

Rendering Techniques
Example of subsurface scattering made in Blender software.

Most materials used in real-time computer graphics today only account for the interaction of light at the surface
surface of an object. In reality, many materials are slightly translucent: light enters the surface; is absorbed, scattered
and re-emitted potentially at a different point. Skin is a good case in point; only about 6% of reflectance is direct,
94% is from subsurface scattering. An inherent property of semitransparent materials is absorption. The further
through the material light travels, the greater the proportion absorbed. To simulate this effect, a measure of the
distance the light has traveled through the material must be obtained.

Depth Map based SSS


One method of estimating this distance is to use depth maps, in a manner similar to shadow mapping. The scene is rendered from the light's point of view into a depth map, so that the distance to the nearest surface is stored. The depth map is then projected onto the scene using standard projective texture mapping and the scene re-rendered. In this pass, when shading a given point, the distance from the light at the point the ray entered the surface can be obtained by a simple texture lookup. By subtracting this value from the distance at the point where the ray exited the object, we can gather an estimate of the distance the light has traveled through the object.

Depth estimation using depth maps.
The measure of distance obtained by this method can be used in several ways. One such way is to use it to index
directly into an artist created 1D texture that falls off exponentially with distance. This approach, combined with

Subsurface scattering
other more traditional lighting models, allows the creation of different materials such as marble, jade and wax.
Potentially, problems can arise if models are not convex, but depth peeling can be used to avoid the issue. Similarly,
depth peeling can be used to account for varying densities beneath the surface, such as bone or muscle, to give a
more accurate scattering model.
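
As a sketch of how the measured distance might drive the fall-off (standing in for the artist-created 1D texture mentioned above), assuming a hypothetical absorption coefficient sigma:

#include <math.h>

// Estimate how much light survives travelling through the material, given the
// depth where the ray entered (read from the light's depth map) and the depth
// of the shaded point, both measured from the light.
float subsurface_attenuation(float depth_entry, float depth_exit, float sigma)
{
    float thickness = depth_exit - depth_entry;  // distance travelled inside the object
    if (thickness < 0.0f) thickness = 0.0f;
    return expf(-sigma * thickness);             // exponential fall-off with distance
}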
As can be seen in the image of the wax head to the right, light isn't diffused when passing through the object using this
technique; back features are clearly shown. One solution to this is to take multiple samples at different points on the
surface of the depth map. Alternatively, a different approach to approximation can be used, known as texture-space
diffusion.

Texture Space Diffusion


As noted at the start of the section, one of the more obvious effects of subsurface scattering is a general blurring of
the diffuse lighting. Rather than arbitrarily modifying the diffuse function, diffusion can be more accurately modeled
by simulating it in texture space. This technique was pioneered in rendering faces in The Matrix Reloaded, but has
recently fallen into the realm of real-time techniques.
The method unwraps the mesh of an object using a vertex shader, first calculating the lighting based on the original
vertex coordinates. The vertices are then remapped using the UV texture coordinates as the screen position of the
vertex, suitably transformed from the [0, 1] range of texture coordinates to the [-1, 1] range of normalized device
coordinates. By lighting the unwrapped mesh in this manner, we obtain a 2D image representing the lighting on the
object, which can then be processed and reapplied to the model as a light map. To simulate diffusion, the light map
texture can simply be blurred. Rendering the lighting to a lower-resolution texture in itself provides a certain amount
of blurring. The amount of blurring required to accurately model subsurface scattering in skin is still under active
research, but performing only a single blur poorly models the true effects. To emulate the wavelength dependent
nature of diffusion, the samples used during the (Gaussian) blur can be weighted by channel. This is somewhat of an
artistic process. For human skin, the broadest scattering is in red, then green, and blue has very little scattering.
A major benefit of this method is its independence of screen resolution; shading is performed only once per texel in
the texture map, rather than for every pixel on the object. An obvious requirement is thus that the object have a good
UV mapping, in that each point on the texture must map to only one point of the object. Additionally, the use of
texture space diffusion provides one of the several factors that contribute to soft shadows, alleviating one cause of
the realism deficiency of shadow mapping.

References
[1] http://wiki.povray.org/content/Reference:Finish#Subsurface_Light_Transport

External links
Henrik Wann Jensen's subsurface scattering website (http://graphics.ucsd.edu/~henrik/images/subsurf.html)
An academic paper by Jensen on modeling subsurface scattering (http://graphics.ucsd.edu/~henrik/papers/
bssrdf/)
Maya Tutorial - Subsurface Scattering: Using the Misss_Fast_Simple_Maya shader (http://www.highend3d.
com/maya/tutorials/rendering_lighting/shaders/135.html)
3d Studio Max Tutorial - The definitive guide to using subsurface scattering in 3dsMax (http://www.
mrbluesummers.com/3510/3d-tutorials/3dsmax-mental-ray-sub-surface-scattering-guide/)


Surface caching
Surface caching is a computer graphics technique pioneered by John Carmack, first used in the computer game
Quake, to apply lightmaps to level geometry. Carmack's technique was to combine lighting information with surface
textures in texture-space when primitives became visible (at the appropriate mipmap level), exploiting temporal
coherence for those calculations. As hardware capable of blended multi-texture rendering (and later pixel shaders)
became more commonplace, the technique became less common, being replaced with screenspace combination of
lightmaps in rendering hardware.
Surface caching contributed greatly to the visual quality of Quake's software-rasterized 3D engine on Pentium
microprocessors, which lacked dedicated graphics instructions.[citation needed]
Surface caching could be considered a precursor to the more recent MegaTexture technique in which lighting and
surface decals and other procedural texture effects are combined for rich visuals devoid of unnatural repeating
artifacts.

External links
Quake's Lighting Model: Surface Caching [1] - an in-depth explanation by Michael Abrash

References
[1] http://www.bluesnews.com/abrash/chap68.shtml

Texel
A texel, texture element, or texture pixel is the fundamental unit of
texture space,[1] used in computer graphics. Textures are represented
by arrays of texels, just as pictures are represented by arrays of pixels.
Texels can also be described by image regions that are obtained
through a simple procedure such as thresholding. Voronoi tessellation
can be used to define their spatial relationships. This means that a
division is made at the half-way point between the centroid of each
texel and the centroids of every surrounding texel for the entire texture.
The result is that each texel centroid will have a Voronoi polygon
surrounding it. This polygon region consists of all points that are closer
to its texel centroid than any other centroid.[2]

Voronoi polygons for a group of texels.


Rendering texels
When texturing a 3D surface or surfaces (a process known as texture
mapping), the renderer maps texels to appropriate pixels in the output
picture. On modern computers, this operation is accomplished on the
graphics processing unit.
Two different projector functions.

The texturing process starts with a location in space. The location can be in world space, but typically it is in model space so that the texture moves with the model. A projector function is applied to the location to change the location from a three-element vector to a two-element
vector with values ranging from zero to one (uv).[3] These values are multiplied by the resolution of the texture to
obtain the location of the texel. When a texel is requested that is not on an integer position, texture filtering is
applied.
When a texel is requested that is outside of the texture, one of two techniques is used: clamping or wrapping.
Clamping limits the texel to the texture size, moving it to the nearest edge if it is more than the texture size.
Wrapping moves the texel in increments of the texture's size to bring it back into the texture. Wrapping causes a
texture to be repeated; clamping causes it to be in one spot only.
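
A small sketch in C of nearest-texel addressing with the two modes described above (the names are illustrative):

typedef enum { ADDRESS_CLAMP, ADDRESS_WRAP } AddressMode;

// Map a (u, v) coordinate to a texel index in a width-by-height texture
// stored row-major, using either clamping or wrapping.
int texel_index(float u, float v, int width, int height, AddressMode mode)
{
    int x = (int)(u * width);
    int y = (int)(v * height);

    if (mode == ADDRESS_CLAMP) {                 // clamp to the nearest edge
        if (x < 0) x = 0; else if (x >= width)  x = width - 1;
        if (y < 0) y = 0; else if (y >= height) y = height - 1;
    } else {                                     // wrap: the texture repeats
        x %= width;  if (x < 0) x += width;
        y %= height; if (y < 0) y += height;
    }
    return y * width + x;
}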

References
[1] Andrew Glassner, An Introduction to Ray Tracing, San Francisco: Morgan Kaufmann, 1989
[2] Linda G. Shapiro and George C. Stockman, Computer Vision, Upper Saddle River: Prentice Hall, 2001
[3] Tomas Akenine-Möller, Eric Haines, and Naty Hoffman, Real-Time Rendering, Wellesley: A K Peters, 2008

Texture atlas
In real-time computer graphics, a texture atlas is a large image containing a collection of smaller sub-images, each of which is a texture for some part of a 3D object. The sub-textures can be rendered by modifying the texture coordinates of the object's UV map on the atlas, essentially telling it which part of the image its texture is in. In an application where many small textures are used frequently, it is often more efficient to store the textures in a texture atlas which is treated as a single unit by the graphics hardware. In particular, because binding one texture causes fewer rendering state changes, it can be faster to bind one large texture once than to bind many smaller textures as they are drawn.
For example, a tile-based game would benefit greatly in performance from a texture atlas.
Atlases can consist of uniformly-sized sub-textures, or they can consist of textures of varying sizes (usually restricted
to powers of two). In the latter case, the program must usually arrange the textures in an efficient manner before
sending the textures to hardware. Manual arrangement of texture atlases is possible, and sometimes preferable, but
can be tedious. If using mipmaps, care must be taken to arrange the textures in such a manner as to avoid sub-images
being "polluted" by their neighbours.


External links
Sprite Sheets - Essential Facts Every Game Developer Should Know [1] - Funny video explaining the benefits of
using sprite sheets
Texture Atlas Whitepaper [2] - A whitepaper by NVIDIA which explains the technique.
TexturePacker [3] - Commercial texture atlas creator for game developers.
Texture Atlas Maker [4] - Open source texture atlas utility for 2D OpenGL games.
Practical Texture Atlases [5] - A guide on using a texture atlas (and the pros and cons).
SpriteMapper [6] - Open source texture atlas (sprite map) utility including an Apache Ant task.

References
[1] http://www.codeandweb.com/what-is-a-sprite-sheet
[2] http://download.nvidia.com/developer/NVTextureSuite/Atlas_Tools/Texture_Atlas_Whitepaper.pdf
[3] http://www.texturepacker.com
[4] http://www.codeproject.com/Articles/330742/Texture-Atlas-Maker
[5] http://www.gamasutra.com/features/20060126/ivanov_01.shtml
[6] http://opensource.cego.dk/spritemapper/

Texture filtering
In computer graphics, texture filtering or texture smoothing is the method used to determine the texture color for a
texture mapped pixel, using the colors of nearby texels (pixels of the texture). Mathematically, texture filtering is a
type of anti-aliasing (AA), but it filters out high frequencies from the texture fill whereas other AA techniques
generally focus on visual edges. Put simply, it allows a texture to be applied to many different shapes, sizes and angles while minimizing blurriness, shimmering and blocking.
There are many methods of texture filtering, which make different trade-offs between computational complexity and
image quality.

The need for filtering


During the texture mapping process, a 'texture lookup' takes place to find out where on the texture each pixel center
falls. Since the textured surface may be at an arbitrary distance and orientation relative to the viewer, one pixel does
not usually correspond directly to one texel. Some form of filtering has to be applied to determine the best color for
the pixel. Insufficient or incorrect filtering will show up in the image as artifacts (errors in the image), such as
'blockiness', jaggies, or shimmering.
There can be different types of correspondence between a pixel and the texel/texels it represents on the screen. These
depend on the position of the textured surface relative to the viewer, and different forms of filtering are needed in
each case. Given a square texture mapped on to a square surface in the world, at some viewing distance the size of
one screen pixel is exactly the same as one texel. Closer than that, the texels are larger than screen pixels, and need
to be scaled up appropriately - a process known as texture magnification. Farther away, each texel is smaller than a
pixel, and so one pixel covers multiple texels. In this case an appropriate color has to be picked based on the covered
texels, via texture minification. Graphics APIs such as OpenGL allow the programmer to set different choices for
minification and magnification filters.
Note that even in the case where the pixels and texels are exactly the same size, one pixel will not necessarily match
up exactly to one texel - it may be misaligned, and cover parts of up to four neighboring texels. Hence some form of
filtering is still required.


Mipmapping
Mipmapping is a standard technique used to save some of the filtering work needed during texture minification.
During texture magnification, the number of texels that need to be looked up for any pixel is always four or fewer;
during minification, however, as the textured polygon moves farther away potentially the entire texture might fall
into a single pixel. This would necessitate reading all of its texels and combining their values to correctly determine
the pixel color, a prohibitively expensive operation. Mipmapping avoids this by prefiltering the texture and storing it
in smaller sizes down to a single pixel. As the textured surface moves farther away, the texture being applied
switches to the prefiltered smaller size. Different sizes of the mipmap are referred to as 'levels', with Level 0 being
the largest size (used closest to the viewer), and increasing levels used at increasing distances.
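One simplified way to pick the level, assuming the texel-to-pixel footprint has already been estimated from the screen-space derivatives of the texture coordinates (the function and parameter names here are illustrative, not from any particular API):

#include <math.h>

/* Choose a mipmap level: level 0 when one texel covers at least one pixel,
   higher levels as more texels fall under a single pixel. */
static float mip_level(float texels_per_pixel, int num_levels)
{
    float level = log2f(texels_per_pixel > 1.0f ? texels_per_pixel : 1.0f);
    float max_level = (float)(num_levels - 1);
    return level < max_level ? level : max_level;  /* never past the 1x1 level */
}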

Filtering methods
This section lists the most common texture filtering methods, in increasing order of computational cost and image
quality.

Nearest-neighbor interpolation
Nearest-neighbor interpolation is the fastest and crudest filtering method: it simply uses the color of the texel
closest to the pixel center for the pixel color. While fast, this results in a large number of artifacts - texture
'blockiness' during magnification, and aliasing and shimmering during minification.

Nearest-neighbor with mipmapping


This method still uses nearest neighbor interpolation, but adds mipmapping: first the nearest mipmap level is
chosen according to distance, then the nearest texel center is sampled to get the pixel color. This reduces the aliasing
and shimmering significantly, but does not help with blockiness.

Bilinear filtering
Bilinear filtering is the next step up. In this method the four nearest texels to the pixel center are sampled (at the
closest mipmap level), and their colors are combined by weighted average according to distance. This removes the
'blockiness' seen during magnification, as there is now a smooth gradient of color change from one texel to the next,
instead of an abrupt jump as the pixel center crosses the texel boundary. Bilinear filtering is almost invariably used
with mipmapping; though it can be used without, it would suffer the same aliasing and shimmering problems as plain nearest-neighbor filtering.
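A minimal single-channel sketch of the weighted average described above (the Texture struct and the clamped fetch are assumptions made for the example):

#include <math.h>

typedef struct { const float *texels; int w, h; } Texture;   /* one channel, row-major */

static float fetch(const Texture *t, int x, int y)            /* clamp-to-edge lookup */
{
    if (x < 0) x = 0; if (x >= t->w) x = t->w - 1;
    if (y < 0) y = 0; if (y >= t->h) y = t->h - 1;
    return t->texels[y * t->w + x];
}

/* Sample at (u,v) in [0,1]: blend the four nearest texels by distance. */
static float sample_bilinear(const Texture *t, float u, float v)
{
    float x = u * t->w - 0.5f, y = v * t->h - 0.5f;
    int x0 = (int)floorf(x), y0 = (int)floorf(y);
    float fx = x - x0, fy = y - y0;                 /* fractional position inside the cell */
    float c00 = fetch(t, x0, y0),     c10 = fetch(t, x0 + 1, y0);
    float c01 = fetch(t, x0, y0 + 1), c11 = fetch(t, x0 + 1, y0 + 1);
    float top = c00 + (c10 - c00) * fx;
    float bot = c01 + (c11 - c01) * fx;
    return top + (bot - top) * fy;
}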

Trilinear filtering
Trilinear filtering is a remedy to a common artifact seen in mipmapped bilinearly filtered images: an abrupt and very
noticeable change in quality at boundaries where the renderer switches from one mipmap level to the next. Trilinear
filtering solves this by doing a texture lookup and bilinear filtering on the two closest mipmap levels (one higher and
one lower quality), and then linearly interpolating the results. This results in a smooth degradation of texture quality
as distance from the viewer increases, rather than a series of sudden drops. Of course, closer than Level 0 there is
only one mipmap level available, and the algorithm reverts to bilinear filtering.
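Building on the bilinear sketch above, trilinear filtering reduces to a linear blend of two bilinear samples; the mip chain layout assumed here (an array of Texture levels, level 0 the largest) is just for the example:

/* 'lod' is the (possibly fractional) mipmap level computed from viewing distance. */
static float sample_trilinear(const Texture *levels, int num_levels,
                              float u, float v, float lod)
{
    if (lod <= 0.0f)                                  /* closer than level 0 */
        return sample_bilinear(&levels[0], u, v);
    if (lod >= (float)(num_levels - 1))               /* past the smallest level */
        return sample_bilinear(&levels[num_levels - 1], u, v);
    int l0 = (int)lod;
    float f = lod - (float)l0;                        /* blend factor between the two levels */
    float a = sample_bilinear(&levels[l0], u, v);
    float b = sample_bilinear(&levels[l0 + 1], u, v);
    return a + (b - a) * f;
}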


Anisotropic filtering
Anisotropic filtering is the highest quality filtering available in current consumer 3D graphics cards. Simpler,
"isotropic" techniques use only square mipmaps which are then interpolated using bi or trilinear filtering. (Isotropic
means same in all directions, and hence is used to describe a system in which all the maps are squares rather than
rectangles or other quadrilaterals.)
When a surface is at a high angle relative to the camera, the fill area for a texture will not be approximately square.
Consider the common case of a floor in a game: the fill area is far wider than it is tall. In this case, none of the square
maps are a good fit. The result is blurriness and/or shimmering, depending on how the fit is chosen. Anisotropic
filtering corrects this by sampling the texture as a non-square shape. Some implementations simply use rectangles
instead of squares, which are a much better fit than the original square and offer a good approximation.
However, going back to the example of the floor, the fill area is not just compressed vertically, there are also more
pixels across the near edge than the far edge. Consequently, more advanced implementations will use trapezoidal
maps for an even better approximation (at the expense of greater processing).
In either rectangular or trapezoidal implementations, the filtering produces a map, which is then bi- or trilinearly
filtered, using the same filtering algorithms used to filter the square maps of traditional mipmapping.

Texture mapping
Texture mapping is a method for adding detail, surface texture (a
bitmap or raster image), or color to a computer-generated graphic or
3D model. Its application to 3D graphics was pioneered by Edwin
Catmull in 1974.

1 = 3D model without textures
2 = 3D model with textures

Examples of multitexturing; 1: Untextured sphere, 2: Texture and bump maps, 3: Texture map only, 4: Opacity and texture maps.

A texture map is applied (mapped) to the surface of a shape or polygon.[1] This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2d case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of
polygons. Multitexturing is the use of more than one texture at a time on a polygon.[2] For instance, a light map
texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is
rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing
direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex
surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring.
Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to
accommodate it in real-time.
The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture
filtering. The fastest method is to use the nearest-neighbour interpolation, but bilinear interpolation or trilinear
interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of
a texture coordinate being outside the texture, it is either clamped or wrapped.

Perspective correctness
Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen (see the figure at right: the textures, the checker boxes, appear bent).

Because affine texture mapping does not take into account the depth information about a polygon's vertices, where the polygon is not perpendicular to the viewer it produces a noticeable defect.
Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D
triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture
coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the
depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so
that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture
coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger
(compressing the texture).
Affine texture mapping directly interpolates a texture coordinate $u_\alpha$ between two endpoints $u_0$ and $u_1$:

$u_\alpha = (1 - \alpha)\,u_0 + \alpha\,u_1$, where $0 \le \alpha \le 1$.

Perspective correct mapping interpolates after dividing by depth $z$, then uses its interpolated reciprocal to recover the correct coordinate:

$u_\alpha = \frac{(1 - \alpha)\,u_0 / z_0 + \alpha\,u_1 / z_1}{(1 - \alpha) / z_0 + \alpha / z_1}$

All modern 3D graphics hardware implements perspective correct texturing.
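A small sketch of that correction for one coordinate along a span: u/z and 1/z are interpolated linearly in screen space, and a division per pixel recovers the perspective-correct u (function and parameter names are illustrative):

/* Interpolate a texture coordinate u between two endpoints with depths z0, z1.
   alpha in [0,1] is the linear (screen-space) interpolation parameter. */
static float perspective_correct_u(float u0, float z0, float u1, float z1, float alpha)
{
    float u_over_z   = (1.0f - alpha) * (u0 / z0) + alpha * (u1 / z1);
    float one_over_z = (1.0f - alpha) * (1.0f / z0) + alpha * (1.0f / z1);
    return u_over_z / one_over_z;   /* divide to recover the correct coordinate */
}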


Development
Classic texture mappers generally did only simple mapping with at most one lighting effect, and the perspective
correctness was about 16 times more expensive. To achieve two goals - faster arithmetic results, and keeping the
arithmetic mill busy at all times - every triangle is further subdivided into groups of about 16 pixels. For perspective
texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which
improves details in non-architectural applications. Software renderers generally preferred screen subdivision because
it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2d affine interpolation) and thus again reduce the overhead (also, affine texture-mapping does not fit into the
low number of registers of the x86 CPU; the 68000 or any RISC is much more suited). For instance, Doom restricted
the world to vertical walls and horizontal floors/ceilings. This meant the walls would be a constant distance along a
vertical line and the floors/ceilings would be a constant distance along a horizontal line. A fast affine mapping could
be used along those lines because it would be correct. A different approach was taken for Quake, which would
calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between
them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in
parallel on the co-processor.[3] The polygons are rendered independently, hence it may be possible to switch between
spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more
constant z, but the effort seems not to be worth it.

Screen space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z.

Another technique was subdividing the polygons into smaller polygons, like triangles in 3d-space or squares in screen space, and
using an affine mapping on them. The distortion of affine mapping
becomes much less noticeable on smaller polygons. Yet another
technique was approximating the perspective with a faster
calculation, such as a polynomial. Still another technique uses 1/z
value of the last two drawn pixels to linearly extrapolate the next
value. The division is then done starting from those values so that
only a small remainder has to be divided, but the amount of
bookkeeping makes this method too slow on most systems.
Finally, some programmers extended the constant distance trick
used for Doom by finding the line of constant distance for
arbitrary polygons and rendering along it.

References
[1] Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/
[2] Blythe, David. Advanced Graphics Programming Techniques Using OpenGL (http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/notes.html). Siggraph 1999. (see: Multitexture (http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/node60.html))
[3] Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale, Arizona, 1997. ISBN 1-57610-174-6 (PDF: http://www.gamedev.net/reference/articles/article1698.asp) (Chapter 70, pg. 1282)


External links
Introduction into texture mapping using C and SDL (http://www.happy-werner.de/howtos/isw/parts/3d/
chapter_2/chapter_2_texture_mapping.pdf)
Programming a textured terrain (http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/
Textured_terrain.php) using XNA/DirectX, from www.riemers.net
Perspective correct texturing (http://www.gamers.org/dEngine/quake/papers/checker_texmap.html)
Time Texturing (http://www.fawzma.com/time-texturing-texture-mapping-with-bezier-lines/) Texture
mapping with bezier lines
Polynomial Texture Mapping (http://www.hpl.hp.com/research/ptm/) Interactive Relighting for Photos
3 Métodos de interpolación a partir de puntos (in Spanish) (http://www.um.es/geograf/sigmur/temariohtml/node43_ct.html) Methods that can be used to interpolate a texture knowing the texture coords at the vertices of a polygon

Texture synthesis
Texture synthesis is the process of algorithmically constructing a large digital image from a small digital sample
image by taking advantage of its structural content. It is an object of research in computer graphics and is used in
many fields, amongst others digital image editing, 3D computer graphics and post-production of films.
Texture synthesis can be used to fill in holes in images (as in inpainting), create large non-repetitive background
images and expand small pictures. See "SIGGRAPH 2007 course on Example-based Texture Synthesis" [1] for more
details.

Textures
"Texture" is an ambiguous word and in the context of texture synthesis
may have one of the following meanings:
1. In common speech, the word "texture" is used as a synonym for
"surface structure". Texture has been described by five different
properties in the psychology of perception: coarseness, contrast,
directionality, line-likeness and roughness [2].
2. In 3D computer graphics, a texture is a digital image applied to the
surface of a three-dimensional model by texture mapping to give the
model a more realistic appearance. Often, the image is a photograph
of a "real" texture, such as wood grain.

Maple burl, an example of a texture.

3. In image processing, every digital image composed of repeated elements is called a "texture." For example, see
the images below.
Texture can be arranged along a spectrum going from stochastic to regular:
Stochastic textures. Texture images of stochastic textures look like noise: colour dots that are randomly scattered
over the image, barely specified by the attributes minimum and maximum brightness and average colour. Many
textures look like stochastic textures when viewed from a distance. An example of a stochastic texture is
roughcast.
Structured textures. These textures look like somewhat regular patterns. An example of a structured texture is a
stonewall or a floor tiled with paving stones.
These extremes are connected by a smooth transition, as visualized in the figure below from "Near-regular Texture
Analysis and Manipulation." Yanxi Liu, Wen-Chieh Lin, and James Hays. SIGGRAPH 2004 [3]

Texture synthesis

Goal
Texture synthesis algorithms are intended to create an output image that meets the following requirements:
The output should have the size given by the user.
The output should be as similar as possible to the sample.
The output should not have visible artifacts such as seams, blocks and misfitting edges.
The output should not repeat, i.e. the same structures in the output image should not appear in multiple places.
Like most algorithms, texture synthesis should be efficient in computation time and in memory use.

Methods
The following methods and algorithms have been researched or developed for texture synthesis:

Tiling
The simplest way to generate a large image from a sample image is to tile it. This means multiple copies of the
sample are simply copied and pasted side by side. The result is rarely satisfactory. Except in rare cases, there will be seams between the tiles and the image will be highly repetitive.

Stochastic texture synthesis


Stochastic texture synthesis methods produce an image by randomly choosing colour values for each pixel, only
influenced by basic parameters like minimum brightness, average colour or maximum contrast. These algorithms
perform well with stochastic textures only, otherwise they produce completely unsatisfactory results as they ignore
any kind of structure within the sample image.


Single purpose structured texture synthesis


Algorithms of that family use a fixed procedure to create an output image, i.e. they are limited to a single kind of structured texture. Thus, these algorithms can only be applied to structured textures, and only to textures with a
very similar structure. For example, a single purpose algorithm could produce high quality texture images of
stonewalls; yet, it is very unlikely that the algorithm will produce any viable output if given a sample image that
shows pebbles.

Chaos mosaic
This method, proposed by the Microsoft group for internet graphics, is a refined version of tiling and performs the
following three steps:
1. The output image is filled completely by tiling. The result is a repetitive image with visible seams.
2. Randomly selected parts of random size of the sample are copied and pasted randomly onto the output image.
The result is a rather non-repetitive image with visible seams.
3. The output image is filtered to smooth edges.
The result is an acceptable texture image, which is not too repetitive and does not contain too many artifacts. Still,
this method is unsatisfactory because the smoothing in step 3 makes the output image look blurred.

Pixel-based texture synthesis


These methods, such as "Texture synthesis via a noncausal nonparametric multiscale Markov random field." Paget
and Longstaff, IEEE Trans. on Image Processing, 1998 [4], "Texture Synthesis by Non-parametric Sampling." Efros
and Leung, ICCV, 1999 [5], "Fast Texture Synthesis using Tree-structured Vector Quantization" Wei and Levoy
SIGGRAPH 2000 [6] and "Image Analogies" Hertzmann et al. SIGGRAPH 2001. [7] are some of the simplest and
most successful general texture synthesis algorithms. They typically synthesize a texture in scan-line order by
finding and copying pixels with the most similar local neighborhood as the synthetic texture. These methods are very
useful for image completion. They can be constrained, as in image analogies, to perform many interesting tasks.
They are typically accelerated with some form of Approximate Nearest Neighbor method since the exhaustive search
for the best pixel is somewhat slow. The synthesis can also be performed in multiresolution, such as "Texture
synthesis via a noncausal nonparametric multiscale Markov random field." Paget and Longstaff, IEEE Trans. on
Image Processing, 1998 [4].
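The following is a deliberately tiny, unoptimized sketch of the basic idea for a single-channel image: pixels are generated in scan-line order, and for each one an exhaustive search finds the sample pixel whose causal (already synthesized) neighbourhood matches best. The Image struct, the neighbourhood radius r and the random seeding are assumptions for the example; real implementations add the multiresolution synthesis and approximate nearest-neighbour acceleration mentioned above.

#include <stdlib.h>

typedef struct { float *px; int w, h; } Image;   /* grayscale, row-major */

static float at(const Image *im, int x, int y) { return im->px[y * im->w + x]; }

/* Sum of squared differences over the causal (already generated) part of a
   neighbourhood of radius r around (ox,oy) in the output and (sx,sy) in the sample. */
static float neighbourhood_ssd(const Image *sample, int sx, int sy,
                               const Image *out, int ox, int oy, int r)
{
    float ssd = 0.0f;
    for (int dy = -r; dy <= 0; dy++)
        for (int dx = -r; dx <= r; dx++) {
            if (dy == 0 && dx >= 0) break;              /* only pixels already synthesized */
            int qx = ox + dx, qy = oy + dy;
            if (qx < 0 || qx >= out->w || qy < 0) continue;
            float d = at(sample, sx + dx, sy + dy) - at(out, qx, qy);
            ssd += d * d;
        }
    return ssd;
}

/* Fill 'out' in scan-line order by copying, for each pixel, the sample pixel whose
   neighbourhood matches best; the sample is assumed larger than the neighbourhood. */
static void synthesize(const Image *sample, Image *out, int r)
{
    for (int y = 0; y < out->h; y++)
        for (int x = 0; x < out->w; x++) {
            int bx = r + rand() % (sample->w - 2 * r);  /* random seed candidate */
            int by = r + rand() % (sample->h - 2 * r);
            float best = neighbourhood_ssd(sample, bx, by, out, x, y, r);
            for (int sy = r; sy < sample->h; sy++)      /* exhaustive search */
                for (int sx = r; sx < sample->w - r; sx++) {
                    float ssd = neighbourhood_ssd(sample, sx, sy, out, x, y, r);
                    if (ssd < best) { best = ssd; bx = sx; by = sy; }
                }
            out->px[y * out->w + x] = at(sample, bx, by);
        }
}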

Patch-based texture synthesis


Patch-based texture synthesis creates a new texture by copying and stitching together textures at various offsets,
similar to the use of the clone tool to manually synthesize a texture. "Image Quilting." Efros and Freeman.
SIGGRAPH 2001 [8] and "Graphcut Textures: Image and Video Synthesis Using Graph Cuts." Kwatra et al.
SIGGRAPH 2003 [9] are the best known patch-based texture synthesis algorithms. These algorithms tend to be more
effective and faster than pixel-based texture synthesis methods.


Chemistry based
Realistic textures can be generated by simulations of complex chemical reactions within fluids, namely
Reaction-diffusion systems. It is believed that these systems show behaviors which are qualitatively equivalent to
real processes (morphogenesis) found in nature, such as animal markings (shells, fish, wild cats...).

Implementations
Some texture synthesis implementations exist as plug-ins for the free image editor Gimp:
Texturize [10]
Resynthesizer [11]
A pixel-based texture synthesis implementation:
Parallel Controllable Texture Synthesis [12]
Patch-based texture synthesis:
KUVA: Graphcut textures [13]

Literature
Several of the earliest and most referenced papers in this field include:

Popat [14] in 1993 - "Novel cluster-based probability model for texture synthesis, classification, and compression".
Heeger-Bergen [15] in 1995 - "Pyramid based texture analysis/synthesis".
Paget-Longstaff [16] in 1998 - "Texture synthesis via a noncausal nonparametric multiscale Markov random field"
Efros-Leung [17] in 1999 - "Texture Synthesis by Non-parameteric Sampling".
Wei-Levoy [6] in 2000 - "Fast Texture Synthesis using Tree-structured Vector Quantization"

although there was also earlier work on the subject, such as


Gagalowicz and Song De Ma in 1986, "Model driven synthesis of natural textures for 3-D scenes",
Lewis in 1984, "Texture synthesis for digital painting".
(The latter algorithm has some similarities to the Chaos Mosaic approach).
The non-parametric sampling approach of Efros-Leung is the first approach that can easily synthesize most types of
texture, and it has inspired literally hundreds of follow-on papers in computer graphics. Since then, the field of
texture synthesis has rapidly expanded with the introduction of 3D graphics accelerator cards for personal
computers. It turns out, however, that Scott Draves first published the patch-based version of this technique along
with GPL code in 1993 according to Efros [18].

External links

texture synthesis [18]


texture synthesis [19]
texture movie synthesis [20]
Texture2005 [21]
Near-Regular Texture Synthesis [3]
The Texture Lab [22]
Nonparametric Texture Synthesis [23]
Examples of reaction-diffusion textures [24]
Implementation of Efros & Leung's algorithm with examples [25]

Micro-texture synthesis by phase randomization, with code and online demonstration [26]


References
[1] http://www.cs.unc.edu/~kwatra/SIG07_TextureSynthesis/index.htm
[2] http://en.wikipedia.org/wiki/Texture_synthesis#endnote_Tamura
[3] http://graphics.cs.cmu.edu/projects/nrt/
[4] http://www.texturesynthesis.com/nonparaMRF.htm
[5] http://graphics.cs.cmu.edu/people/efros/research/EfrosLeung.html
[6] http://graphics.stanford.edu/papers/texture-synthesis-sig00/
[7] http://mrl.nyu.edu/projects/image-analogies/
[8] http://graphics.cs.cmu.edu/people/efros/research/quilting.html
[9] http://www-static.cc.gatech.edu/gvu/perception//projects/graphcuttextures/
[10] http://gimp-texturize.sourceforge.net/
[11] http://www.logarithmic.net/pfh/resynthesizer
[12] http://www-sop.inria.fr/members/Sylvain.Lefebvre/_wiki_/pmwiki.php?n=Main.TSynEx
[13] https://github.com/textureguy/KUVA
[14] http://xenia.media.mit.edu/~popat/personal/
[15] http://www.cns.nyu.edu/heegerlab/index.php?page=publications&id=heeger-siggraph95
[16] http://www.texturesynthesis.com/papers/Paget_IP_1998.pdf
[17] http://graphics.cs.cmu.edu/people/efros/research/NPS/efros-iccv99.pdf
[18] http://graphics.cs.cmu.edu/people/efros/research/synthesis.html
[19] http://www.cs.utah.edu/~michael/ts/
[20] http://www.cs.huji.ac.il/labs/cglab/papers/texsyn/
[21] http://www.macs.hw.ac.uk/texture2005/
[22] http://www.macs.hw.ac.uk/texturelab/
[23] http://www.texturesynthesis.com/texture.htm
[24] http://www.texrd.com/gallerie/gallerie.html
[25] http://rubinsteyn.com/comp_photo/texture/
[26] http://www.ipol.im/pub/algo/ggm_random_phase_texture_synthesis/

Tiled rendering
Tiled rendering is the process of subdividing (or tiling) a computer graphics image by a regular grid in image space
to exploit local spatial coherence in the scene and/or to facilitate the use of limited hardware rendering resources
later in the graphics pipeline.
Tiled rendering is sometimes known as a "sort middle" architecture.
In a typical tiled renderer, geometry must first be transformed into screen space and assigned to screen-space tiles.
This requires some storage for the lists of geometry for each tile. In early tiled systems, this was performed by the
CPU, but all modern hardware contains hardware to accelerate this step. The list of geometry can also be sorted front
to back, allowing the GPU to use hidden surface removal to avoid processing pixels that are hidden behind others,
saving on memory bandwidth for unnecessary texture lookups.
Once geometry is assigned to tiles, the GPU renders each tile separately to a small on-chip buffer of memory. This
has the advantage that composition operations are cheap, both in terms of time and power. Once rendering is
complete for a particular tile, the final pixel values for the whole tile are then written once to external memory. Also,
since tiles can be rendered independently, the pixel processing lends itself very easily to parallel architectures with
multiple tile rendering engines.
Tiles are typically small (16x16 and 32x32 pixels are popular tile sizes), although some architectures use much
larger on-chip buffers and can be said to straddle the divide between tiled rendering and immediate mode ("sort last")
rendering.
Tiled rendering can also be used to create a nonlinear framebuffer to make adjacent pixels also adjacent in memory.
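A toy sketch of the binning step just described, assuming screen-space triangles and a caller-supplied callback that appends to the per-tile geometry lists (all names and the 32-pixel tile size are illustrative, not from any particular GPU):

#define TILE 32   /* popular tile sizes are 16x16 and 32x32 pixels */

typedef struct { float x[3], y[3]; } Tri;   /* triangle already transformed to screen space */

/* Append triangle index 't' to every tile its bounding box overlaps. */
static void bin_triangle(const Tri *tri, int t, int tiles_x, int tiles_y,
                         void (*append)(int tile_index, int tri_index))
{
    float minx = tri->x[0], maxx = tri->x[0], miny = tri->y[0], maxy = tri->y[0];
    for (int i = 1; i < 3; i++) {
        if (tri->x[i] < minx) minx = tri->x[i];
        if (tri->x[i] > maxx) maxx = tri->x[i];
        if (tri->y[i] < miny) miny = tri->y[i];
        if (tri->y[i] > maxy) maxy = tri->y[i];
    }
    int tx0 = (int)(minx / TILE), tx1 = (int)(maxx / TILE);
    int ty0 = (int)(miny / TILE), ty1 = (int)(maxy / TILE);
    if (tx0 < 0) tx0 = 0; if (ty0 < 0) ty0 = 0;
    if (tx1 >= tiles_x) tx1 = tiles_x - 1;
    if (ty1 >= tiles_y) ty1 = tiles_y - 1;
    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++)
            append(ty * tiles_x + tx, t);   /* each tile is later rendered to on-chip memory */
}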


Early Work
Much of the early work on tiled rendering was done as part of the Pixel Planes 5 architecture (1989).
The Pixel Planes 5 project validated the tiled approach and invented a lot of the techniques now viewed as standard
for tiled renderers. It is the work most widely cited by other papers in the field.
The tiled approach was also known early in the history of software rendering. Implementations of Reyes rendering
often divide the image into "tile buckets".

Commercial Products - Desktop and Console


Early in the development of desktop GPUs, several companies developed tiled architectures. Over time, these were
largely supplanted by immediate-mode GPUs with fast custom external memory systems.
Major examples of this are:
PowerVR rendering architecture (1996): The rasterizer consisted of a 32×32 tile into which polygons were rasterized across the image across multiple pixels in parallel. On early PC versions, tiling was performed in the display driver running on the CPU. In the application of the Dreamcast console, tiling was performed by a piece of hardware. This facilitated deferred rendering: only the visible pixels were texture-mapped, saving shading calculations and texture bandwidth.

Microsoft Talisman (1996)


Dreamcast (1998)
Gigapixel GP-1 (1999)
Xbox 360 (2005): the GPU contains an embedded 10 MiB framebuffer; this is not sufficient to hold the raster for an entire 1280×720 image with 4× anti-aliasing, so a tiling solution is superimposed when running in HD resolutions.
Intel Larrabee GPU (2009) (canceled)
PS Vita (2011)

Commercial Products - Embedded


Due to the relatively low external memory bandwidth, and the modest amount of on-chip memory required, tiled
rendering is a popular technology for embedded GPUs. Current examples include:
ARM Mali series.
Imagination Technologies PowerVR series.
Qualcomm Adreno series.
Vivante produces mobile GPUs which have tightly coupled frame buffer memory (similar to the Xbox 360 GPU
described above). Although this can be used to render parts of the screen, the large size of the rendered regions
means that they are not usually described as using a tile-based architecture.

References



UV mapping
UV mapping is the 3D modeling process of making a 2D image representation of a 3D model.

The application of a texture in the UV space related to the effect in 3D.

This process projects a texture map onto a 3D object. The letters "U" and "V" denote the axes of the 2D texture[1] because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space.
UV texturing permits polygons that make up a 3D object to be painted with color from an image. The image is called a UV texture map,[2] but it's just an ordinary image. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangle-shaped piece of the image map and pasting it onto a triangle on the object.[3] UV is an alternative to XY; it maps into a texture space rather than into the geometric space of the object, and the rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface.
In the example to the right, a sphere is given a checkered texture, first without and then with UV mapping. Without UV mapping, the checkers tile XYZ space and the texture is carved out of the sphere. With UV mapping, the checkers tile UV space and points on the sphere map to this space according to their latitude and longitude.

A checkered sphere, without (left) and with (right) UV mapping (3D checkered or 2D checkered).

A representation of the UV mapping of a cube. The flattened cube net may then be textured to texture the cube.

When a model is created as a polygon mesh using a 3D modeler, UV coordinates can be generated for each vertex in
the mesh. One way is for the 3D modeler to unfold the triangle mesh at the seams, automatically laying out the
triangles on a flat page. If the mesh is a UV sphere, for example, the modeler might transform it into an
equirectangular projection. Once the model is unwrapped, the artist can paint a texture on each triangle individually,
using the unwrapped mesh as a template. When the scene is rendered, each triangle will map to the appropriate texture from the "decal sheet".


A UV map can either be generated automatically by the software application, made manually by the artist, or some
combination of both. Often a UV map will be generated, and then the artist will adjust and optimize it to minimize
seams and overlaps. If the model is symmetric, the artist might overlap opposite triangles to allow painting both
sides simultaneously.
UV coordinates are applied per face, not per vertex. This means a shared vertex can have different UV coordinates in
each of its triangles, so adjacent triangles can be cut apart and positioned on different areas of the texture map.
The UV Mapping process at its simplest requires three steps: unwrapping the mesh, creating the texture, and
applying the texture.

Finding UV on a sphere
For any point $P$ on the sphere, calculate $\hat{d}$, the unit vector from $P$ to the sphere's origin.
Assuming that the sphere's poles are aligned with the Y axis, UV coordinates in the range $[0, 1]$ can then be calculated as follows:

$u = 0.5 + \frac{\arctan2(d_z, d_x)}{2\pi}$

$v = 0.5 - \frac{\arcsin(d_y)}{\pi}$
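A direct transcription of these formulas as a sketch (assuming d is already the unit vector described above; the function name is illustrative):

#include <math.h>

/* d = (dx, dy, dz) is the unit vector from the surface point P towards the
   sphere's origin; the poles are assumed to be aligned with the Y axis. */
static void sphere_uv(float dx, float dy, float dz, float *u, float *v)
{
    const float PI = 3.14159265358979f;
    *u = 0.5f + atan2f(dz, dx) / (2.0f * PI);
    *v = 0.5f - asinf(dy) / PI;
}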

Notes
[1] when using quaternions (which is standard), "W" is also used; cf. UVW mapping
[2] Mullen, T (2009). Mastering Blender. 1st ed. Indianapolis, Indiana: Wiley Publishing, Inc.
[3] Murdock, K.L. (2008). 3ds Max 2009 Bible. 1st ed. Indianapolis, Indiana: Wiley Publishing, Inc.

References
External links
LSCM Mapping image (http://de.wikibooks.org/wiki/Bild:Blender3D_LSCM.png) with Blender
Blender UV Mapping Tutorial (http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/UV_Map_Basics)
with Blender
Rare practical example of UV mapping (http://blog.nobel-joergensen.com/2011/04/05/
procedural-generated-mesh-in-unity-part-2-with-uv-mapping/) from a blog (not related to a specific product such
as Maya or Blender).


UVW mapping
UVW mapping is a mathematical technique for coordinate mapping. In computer graphics, it is most commonly a $\mathbb{R}^2 \to \mathbb{R}^3$ map, suitable for converting a 2D image (a texture) to a three dimensional object of a given topology.
"UVW", like the standard Cartesian coordinate system, has three dimensions; the third dimension allows texture
maps to wrap in complex ways onto irregular surfaces. Each point in a UVW map corresponds to a point on the
surface of the object. The graphic designer or programmer generates the specific mathematical function to
implement the map, so that points on the texture are assigned to (XYZ) points on the target surface. Generally
speaking, the more orderly the unwrapped polygons are, the easier it is for the texture artist to paint features onto the
texture. Once the texture is finished, all that has to be done is to wrap the UVW map back onto the object, projecting
the texture in a way that is far more flexible and advanced, preventing graphic artifacts that accompany more
simplistic texture mappings such as planar projection. For this reason, UVW mapping is commonly used to texture
map non-platonic solids, non-geometric primitives, and other irregularly-shaped objects, such as characters and
furniture.

External links
UVW Mapping Tutorial [1]

References
[1] http://oman3d.com/tutorials/3ds/texture_stealth/

Vertex
In geometry, a vertex (plural vertices) is a special kind of point that describes the corners or intersections of
geometric shapes.

Definitions
Of an angle
The vertex of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments and lines that result in two straight "sides" meeting at one place.

A vertex of an angle is the endpoint where two line segments or lines come together.

Of a polytope
A vertex is a corner point of a polygon, polyhedron, or other higher dimensional polytope, formed by the intersection of edges, faces or facets of the object.
In a polygon, a vertex is called "convex" if the internal angle of the polygon, that is, the angle formed by the two edges at the vertex, with the polygon inside the angle, is less than π radians; otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and concave otherwise.


Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of
which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial
complex the vertices of which are the graph's vertices. However, in graph theory, vertices may have fewer than two
incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric
vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are
points of infinite curvature, and if a polygon is approximated by a smooth curve there will be a point of extreme
curvature near each polygon vertex. However, a smooth curve approximation to a polygon will also have additional
vertices, at the points where its curvature is minimal.

Of a plane tiling
A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles
of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a
tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the
vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces.

Principal vertex
A polygon vertex x_i of a simple polygon P is a principal polygon vertex if the diagonal [x_(i-1), x_(i+1)] intersects the boundary of P only at x_(i-1) and x_(i+1). There are two types of principal vertices: ears and mouths.

Ears
A principal vertex x_i of a simple polygon P is called an ear if the diagonal [x_(i-1), x_(i+1)] that bridges x_i lies entirely in P. (see also convex polygon)

Mouths
A principal vertex x_i of a simple polygon P is called a mouth if the diagonal [x_(i-1), x_(i+1)] lies outside the boundary of P.

Vertex B is an ear, because the straight line between C and D is entirely inside the polygon. Vertex C is a mouth, because the straight line between A and B is entirely outside the polygon.

Vertices in computer graphics


In computer graphics, objects are often represented as triangulated polyhedra in which the object vertices are
associated not only with three spatial coordinates but also with other graphical information necessary to render the
object correctly, such as colors, reflectance properties, textures, and surface normals; these properties are used in
rendering by a vertex shader, part of the vertex pipeline.

External links
Weisstein, Eric W., "Polygon Vertex [1]", MathWorld.
Weisstein, Eric W., "Polyhedron Vertex [2]", MathWorld.
Weisstein, Eric W., "Principal Vertex [3]", MathWorld.


References
[1] http://mathworld.wolfram.com/PolygonVertex.html
[2] http://mathworld.wolfram.com/PolyhedronVertex.html
[3] http://mathworld.wolfram.com/PrincipalVertex.html

Vertex Buffer Object


A Vertex Buffer Object (VBO) is an OpenGL feature that provides methods for uploading vertex data (position,
normal vector, color, etc.) to the video device for non-immediate-mode rendering. VBOs offer substantial
performance gains over immediate mode rendering primarily because the data resides in the video device memory
rather than the system memory and so it can be rendered directly by the video device.
The Vertex Buffer Object specification has been standardized by the OpenGL Architecture Review Board [1] as of
OpenGL Version 1.5 (in 2003). Similar functionality was available before the standardization of VBOs via the
Nvidia-created extension "Vertex Array Range" or ATI's "Vertex Array Object" extension.

Basic VBO functions


The following functions form the core of VBO access and manipulation:
In OpenGL 2.1:
GenBuffersARB(sizei n, uint *buffers)
Generates a new VBO and returns its ID number as an unsigned integer. Id 0 is reserved.
BindBufferARB(enum target, uint buffer)
Use a previously created buffer as the active VBO.
BufferDataARB(enum target, sizeiptrARB size, const void *data, enum usage)
Upload data to the active VBO.
DeleteBuffersARB(sizei n, const uint *buffers)
Deletes the specified number of VBOs from the supplied array or VBO id.
In OpenGL 3.x and OpenGL 4.x:
GenBuffers(sizei n, uint *buffers)
Generates a new VBO and returns its ID number as an unsigned integer. Id 0 is reserved.
BindBuffer(enum target, uint buffer)
Use a previously created buffer as the active VBO.
BufferData(enum target, sizeiptrARB size, const void *data, enum usage)
Upload data to the active VBO.
DeleteBuffers(sizei n, const uint *buffers)
Deletes the specified number of VBOs from the supplied array or VBO id.


Example usage in C, using OpenGL 2.1


//Initialise VBO - do only once, at start of program
//Create a variable to hold the VBO identifier
GLuint triangleVBO;

//Vertices of a triangle (counter-clockwise winding)
float data[] = {1.0, 0.0, 1.0, 0.0, 0.0, -1.0, -1.0, 0.0, 1.0};

//Create a new VBO and use the variable id to store the VBO id
glGenBuffers(1, &triangleVBO);

//Make the new VBO active
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);

//Upload vertex data to the video device
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

//Make the new VBO active. Repeat here in case it changed since initialisation
glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);

//Draw triangle from VBO - do each time the window, view point or data changes
//Establish its 3 coordinates per vertex with zero stride in this array; necessary here
glVertexPointer(3, GL_FLOAT, 0, NULL);

//Establish that the array contains vertices (not normals, colours, texture coords etc.)
glEnableClientState(GL_VERTEX_ARRAY);

//Actually draw the triangle, giving the number of vertices provided
glDrawArrays(GL_TRIANGLES, 0, sizeof(data) / sizeof(float) / 3);

//Force display to be drawn now
glFlush();

Example usage in C, using OpenGL 3.x and OpenGL 4.x


Vertex Shader:

/*----------------- "exampleVertexShader.vert" -----------------*/
#version 150 // Specify which version of GLSL we are using.

// in_Position was bound to attribute index 0 ("shaderAttribute")
in vec3 in_Position;

void main()
{
    gl_Position = vec4(in_Position.x, in_Position.y, in_Position.z, 1.0);
}
/*--------------------------------------------------------------*/

Fragment Shader:

/*---------------- "exampleFragmentShader.frag" ----------------*/
#version 150 // Specify which version of GLSL we are using.
precision highp float; // Video card drivers require this line to function properly

out vec4 fragColor;

void main()
{
    fragColor = vec4(1.0, 1.0, 1.0, 1.0); // Set colour of each fragment to WHITE
}
/*--------------------------------------------------------------*/
Main OpenGL Program:
/*--------------------- Main OpenGL Program ---------------------*/

/* Create a variable to hold the VBO identifier */


GLuint triangleVBO;

/* This is a handle to the shader program */


GLuint shaderProgram;

/* These pointers will receive the contents of our shader source code
files */
GLchar *vertexSource, *fragmentSource;

/* These are handles used to reference the shaders */


GLuint vertexShader, fragmentShader;

const unsigned int shaderAttribute = 0;

const unsigned int NUM_OF_VERTICES_IN_DATA = 3;

/* Vertices of a triangle (counter-clockwise winding) */


float data[3][3] = {
    {  0.0,  1.0, 0.0 },
    { -1.0, -1.0, 0.0 },
    {  1.0, -1.0, 0.0 }
};

/*---------------------- Initialise VBO - (Note: do only once, at start


of program) ---------------------*/
/* Create a new VBO and use the variable "triangleVBO" to store the VBO
id */
glGenBuffers(1, &triangleVBO);

/* Make the new VBO active */


glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);

/* Upload vertex data to the video device */


glBufferData(GL_ARRAY_BUFFER, NUM_OF_VERTICES_IN_DATA * 3 *
sizeof(float), data, GL_STATIC_DRAW);

/* Specify that our coordinate data is going into attribute index


0(shaderAttribute), and contains three floats per vertex */
glVertexAttribPointer(shaderAttribute, 3, GL_FLOAT, GL_FALSE, 0, 0);

/* Enable attribute index 0(shaderAttribute) as being used */


glEnableVertexAttribArray(shaderAttribute);

/* Make the new VBO active. */


glBindBuffer(GL_ARRAY_BUFFER, triangleVBO);
/*-------------------------------------------------------------------------------------------------------*/

/*--------------------- Load Vertex and Fragment shaders from files and


compile them --------------------*/
/* Read our shaders into the appropriate buffers */
vertexSource = filetobuf("exampleVertexShader.vert");
fragmentSource = filetobuf("exampleFragmentShader.frag");

/* Assign our handles a "name" to new shader objects */


vertexShader = glCreateShader(GL_VERTEX_SHADER);
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

/* Associate the source code buffers with each handle */


glShaderSource(vertexShader, 1, (const GLchar**)&vertexSource, 0);
glShaderSource(fragmentShader, 1, (const GLchar**)&fragmentSource, 0);

/* Free the temporary allocated memory */


free(vertexSource);
free(fragmentSource);


/* Compile our shader objects */


glCompileShader(vertexShader);
glCompileShader(fragmentShader);
/*-------------------------------------------------------------------------------------------------------*/

/*-------------------- Create shader program, attach shaders to it and


then link it ---------------------*/
/* Assign our program handle a "name" */
shaderProgram = glCreateProgram();

/* Attach our shaders to our program */


glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);

/* Bind attribute index 0 (shaderAttribute) to in_Position*/


/* "in_Position" will represent "data" array's contents in the vertex
shader */
glBindAttribLocation(shaderProgram, shaderAttribute, "in_Position");

/* Link shader program*/


glLinkProgram(shaderProgram);
/*-------------------------------------------------------------------------------------------------------*/

/* Set shader program as being actively used */


glUseProgram(shaderProgram);

/* Set background colour to BLACK */


glClearColor(0.0, 0.0, 0.0, 1.0);

/* Clear background with BLACK colour */


glClear(GL_COLOR_BUFFER_BIT);

/* Actually draw the triangle, giving the number of vertices provided, by invoking
   glDrawArrays and telling it that our data is a triangle and we want to draw vertices 0-3 */
glDrawArrays(GL_TRIANGLES, 0, 3);
/*---------------------------------------------------------------*/


References
[1] http://www.opengl.org/about/arb/

External links
Vertex Buffer Object Whitepaper (http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt)

Vertex normal
In the geometry of computer graphics, a vertex normal at a vertex of a
polyhedron is the normalized average of the surface normals of the
faces that contain that vertex.[1][2] The average can be weighted for
example by the area of the face or it can be unweighted.[3][4] Vertex
normals are used in Gouraud shading, Phong shading and other
lighting models. This produces much smoother results than flat
shading; however, without some modifications, it cannot produce a
sharp edge.[5]
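A minimal unweighted version of that average might look as follows (the Vec3 type is an assumption for the example; an area- or angle-weighted variant would scale each face normal before the final normalization):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Average the normals of the 'count' faces that share a vertex, then normalize. */
static Vec3 vertex_normal(const Vec3 *face_normals, int count)
{
    Vec3 n = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; i++) {
        n.x += face_normals[i].x;
        n.y += face_normals[i].y;
        n.z += face_normals[i].z;
    }
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }   /* normalized average */
    return n;
}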

Vertex normals of a dodecahedral mesh.

References
[1] Henri Gouraud. "Continuous Shading of Curved Surfaces." IEEE Transactions on Computers, C-20(6): 623-629 (June 1971).
[2] Andrew Glassner, I.6 Building vertex normals from an unstructured polygon list, in Graphics Gems IV, edited by Paul S. Heckbert, Morgan Kaufmann, 1994. pp. 60-74
[3] Nelson Max, Weights for Computing Vertex Normals from Facet Normals, Journal of Graphics Tools, Volume 4, Issue 2, 1999. pp. 1-6
[4] Grit Thürmer and Charles A. Wüthrich, Computing Vertex Normals from Polygonal Facets. Journal of Graphics Tools, Volume 3, Issue 1, 1998. pp. 43-46
[5] Max Wagner, Generating Vertex Normals, http://www.emeyex.com/site/tuts/VertexNormals.pdf


Viewing frustum
In 3D computer graphics, the viewing frustum or view frustum is the
region of space in the modeled world that may appear on the screen; it
is the field of view of the notional camera.[1]
The exact shape of this region varies depending on what kind of
camera lens is being simulated, but typically it is a frustum of a
rectangular pyramid (hence the name). The planes that cut the frustum
perpendicular to the viewing direction are called the near plane and the
far plane. Objects closer to the camera than the near plane or beyond
the far plane are not drawn. Sometimes, the far plane is placed
infinitely far away from the camera so all objects within the frustum
are drawn regardless of their distance from the camera.

A view frustum.

Viewing frustum culling or view frustum culling is the process of removing objects that lie completely outside the
viewing frustum from the rendering process. Rendering these objects would be a waste of time since they are not
directly visible. To make culling fast, it is usually done using bounding volumes surrounding the objects rather than
the objects themselves.
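A common way to perform that test against bounding spheres is sketched below; the six frustum planes are assumed to be stored with inward-facing unit normals, and the struct names are illustrative rather than taken from any particular engine:

typedef struct { float nx, ny, nz, d; } Plane;    /* inside where nx*x + ny*y + nz*z + d >= 0 */
typedef struct { float x, y, z, radius; } Sphere; /* bounding sphere of an object */

/* Returns 0 if the bounding sphere lies completely outside the frustum,
   so the object it surrounds can be skipped during rendering. */
static int sphere_in_frustum(const Plane planes[6], const Sphere *s)
{
    for (int i = 0; i < 6; i++) {
        float dist = planes[i].nx * s->x + planes[i].ny * s->y +
                     planes[i].nz * s->z + planes[i].d;
        if (dist < -s->radius)
            return 0;    /* entirely on the outside of this plane */
    }
    return 1;            /* intersects or is inside the frustum */
}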

Definitions
VPN
the view-plane normal - a normal to the view plane.
VUV
the view-up vector - the vector on the view plane that indicates the upward direction.
VRP
the viewing reference point - a point located on the view plane, and the origin of the VRC.
PRP
the projection reference point - the point from which the image is projected; for parallel projection, the PRP is at infinity.
VRC
the viewing-reference coordinate system.
The geometry is defined by a field of view angle (in the 'y' direction), as well as an aspect ratio. Further, a set of z-planes define the near and far bounds of the frustum.

References
[1] Microsoft, "What Is a View Frustum?", http://msdn.microsoft.com/en-us/library/ff634570.aspx

Virtual actor
A virtual human or digital clone is the creation or re-creation of a human being in image and voice using
computer-generated imagery and sound, that is often indistinguishable from the real actor. This idea was first
portrayed in the 1981 film Looker, wherein models had their bodies scanned digitally to create 3D computer
generated images of the models, and then animating said images for use in TV commercials. Two 1992 books used
this concept: "Fools" by Pat Cadigan, and Et Tu, Babe by Mark Leyner.
In general, virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or
"silicentric" actors. There are several legal ramifications for the digital cloning of human actors, relating to
copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton,
Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and
George Burns. Ironically, data sets of Arnold Schwarzenegger for the creation of a virtual Arnold (head, at least)
have already been made.
The name Schwarzeneggerization comes from the 1992 book Et Tu, Babe by Mark Leyner. In one scene, on pages 50-51, a character asks the shop assistant at a video store to have Arnold Schwarzenegger digitally substituted for
existing actors into various works, including (amongst others) Rain Man (to replace both Tom Cruise and Dustin
Hoffman), My Fair Lady (to replace Rex Harrison), Amadeus (to replace F. Murray Abraham), The Diary of Anne
Frank (as Anne Frank), Gandhi (to replace Ben Kingsley), and It's a Wonderful Life (to replace James Stewart).
Schwarzeneggerization is the name that Leyner gives to this process. Only 10 years later, Schwarzeneggerization
was close to being reality.
By 2002, Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and
David Duchovny had all had their heads laser scanned to create digital computer models thereof.

Early history
Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick
Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were
Marilyn Monroe and Humphrey Bogart in a March 1987 film created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Society of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal. The characters were
rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.
In 1987, the Kleiser-Walczak Construction Company began its Synthespian ("synthetic thespian") Project, with the
aim of creating "life-like figures based on the digital animation of clay models".
In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short
Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were
controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and
performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron included a computer-generated face
placed onto a watery pseudopod.
In 1991, Terminator 2, also directed by Cameron, confident in the abilities of computer-generated effects from his
experience with The Abyss, included a mixture of synthetic actors with live animation, including computer models of
Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics. Terminator 2
contained over forty shots throughout the film.
In 1997, Industrial Light and Magic worked on creating a virtual actor that was a composite of the bodily parts of
several real actors.
By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the
shooting of The Crow in 1994, had been digitally superimposed over the top of a body-double in order to complete those parts of the movie that had yet to be filmed. By 2001, three-dimensional computer-generated realistic humans
had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky
Captain and the World of Tomorrow.

Legal issues
Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was
supposedly preserving: our point of contact with the irreplaceable, finite person". And even more problematic are the
issues of copyright and personality rights. Actors have little legal control over a digital clone of themselves. In the
United States, for instance, they must resort to database protection laws in order to exercise what control they have
(The proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor
does not own the copyright on his digital clones, unless they were created by him. Robert Patrick, for example,
would not have any legal control over the liquid metal digital clone of himself that was created for Terminator 2.
The use of digital clones in the movie industry, to replicate the acting performances of a cloned person, is a
controversial aspect of these implications: it may cause real actors to land fewer roles, and puts them at a
disadvantage in contract negotiations, since a clone could always be used by the producers at potentially lower cost.
It is also a career difficulty, since a clone could be used in roles that a real actor would never accept for various
reasons. Strong identification of an actor's image with a certain type of role could harm his career, and real actors,
conscious of this, pick and choose what roles they play (Bela Lugosi and Margaret Hamilton became typecast with
their roles as Count Dracula and the Wicked Witch of the West, whereas Anthony Hopkins and Dustin Hoffman
have played a diverse range of parts). A digital clone could be used to play the parts of (for examples) an axe
murderer or a prostitute, which would affect the actor's public image, and in turn affect what future casting
opportunities were given to that actor. Both Tom Waits and Bette Midler have won actions for damages against
people who employed their images in advertisements that they had refused to take part in themselves.
In the USA, the use of a digital clone in advertisements is required to be accurate and truthful (under section 43(a) of
the Lanham Act, which makes deliberate confusion unlawful). The use of a celebrity's image would be an implied
endorsement. The New York District Court held that an advertisement employing a Woody Allen impersonator
would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product.
Other concerns include posthumous use of digital clones. Barbara Creed states that "Arnold's famous threat, 'I'll be
back', may take on a new meaning". Even before Brandon Lee was digitally reanimated, the California Senate drew
up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were
seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had
yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the
rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich[1] and Vincent Price.

In fiction
S1m0ne, a 2002 science fiction drama film written, produced and directed by Andrew Niccol, starring Al Pacino.

In business
A virtual actor can also be a person who performs a role in real time when logged into a virtual world or
collaborative on-line environment: one who represents, via an avatar, a character in a simulation or training event,
or who behaves as if acting a part through the use of an avatar.
Vactor Studio LLC is a New York-based company, but its "Vactors" (virtual actors) are located all across the US and
Canada. The Vactors log into virtual world applications from their homes or offices to participate in exercises
covering an extensive range of markets including: Medical, Military, First Responder, Corporate, Government,
Entertainment, and Retail. Through their own computers, they become doctors, soldiers, EMTs, customer service
reps, victims for Mass Casualty Response training, or whatever the demonstration requires. Since 2005, Vactor
Studios role-players have delivered thousands of hours of professional virtual world demonstrations, training
exercises, and event management services.

References
[1] Los Angeles Times / Digital Elite Inc. (http://articles.latimes.com/1999/aug/09/business/fi-64043)

Further reading
Michael D. Scott and James N. Talbott (1997). "Titles and Characters". Scott on Multimedia Law. Aspen
Publishers Online. ISBN 1-56706-333-0. A detailed discussion of the law, as it stood in 1997, relating to virtual
humans and the rights held over them by real humans.
Richard Raysman (2002). "Trademark Law". Emerging Technologies and the Law: Forms and Analysis. Law
Journal Press. pp. 6–15. ISBN 1-58852-107-9. How trademark law affects digital clones of celebrities who
have trademarked their person.

External links
Vactor Studio (http://www.vactorstudio.com/)

Volume rendering
In scientific visualization and computer graphics,
volume rendering is a set of techniques used to
display a 2D projection of a 3D discretely sampled data
set.
A typical 3D data set is a group of 2D slice images
acquired by a CT, MRI, or MicroCT scanner. Usually
these are acquired in a regular pattern (e.g., one slice
every millimeter) and usually have a regular number of
image pixels in a regular pattern. This is an example of
a regular volumetric grid, with each volume element, or
voxel represented by a single value that is obtained by
sampling the immediate area surrounding the voxel.
To render a 2D projection of the 3D data set, one first
needs to define a camera in space relative to the
volume. Also, one needs to define the opacity and color
of every voxel. This is usually defined using an RGBA
(for red, green, blue, alpha) transfer function that
defines the RGBA value for every possible voxel value.

A volume rendered cadaver head using view-aligned texture mapping and diffuse reflection

For example, a volume may be viewed by extracting isosurfaces (surfaces of equal values) from the volume and
rendering them as polygonal meshes or by rendering the volume directly as a block of data. The
marching cubes algorithm is a common technique for extracting an isosurface from volume data. Direct
volume rendering is a computationally intensive task
that may be performed in several ways.

Direct volume rendering


A direct volume renderer[1] requires every sample value
to be mapped to opacity and a color. This is done with
a "transfer function" which can be a simple ramp, a
piecewise linear function or an arbitrary table. Once
converted to an RGBA (for red, green, blue, alpha)
value, the composed RGBA result is projected onto the
corresponding pixel of the frame buffer. The way this is
done depends on the rendering technique.
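
The sketch below is a rough, hypothetical illustration (not taken from the article) of such a transfer function: a piecewise-linear RGBA lookup written in Python. The control points, value range and colours are invented for the example.

import numpy as np

# Illustrative control points: (scalar value, (R, G, B, A)). Low values are
# transparent, mid values reddish, high values opaque white.
control_points = [
    (0,   (0.0, 0.0, 0.0, 0.0)),
    (80,  (0.8, 0.3, 0.2, 0.1)),
    (180, (1.0, 0.9, 0.8, 0.6)),
    (255, (1.0, 1.0, 1.0, 1.0)),
]
xs = np.array([p[0] for p in control_points], dtype=float)
cols = np.array([p[1] for p in control_points], dtype=float)   # shape (n, 4)

def transfer_function(samples):
    # Map scalar samples to RGBA by piecewise-linear interpolation per channel.
    samples = np.asarray(samples, dtype=float)
    return np.stack([np.interp(samples, xs, cols[:, c]) for c in range(4)], axis=-1)

print(transfer_function([0, 100, 200]))   # three RGBA values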

Volume rendered CT scan of a forearm with different color schemes for muscle, fat, bone, and blood

A combination of these techniques is possible. For instance, a shear warp implementation could use texturing
hardware to draw the aligned slices in the off-screen buffer.

Volume ray casting


The technique of volume ray casting can be
derived directly from the rendering
equation. It provides results of very high
quality, usually considered to provide the
best image quality. Volume ray casting is
classified as an image-based volume rendering
technique, as the computation emanates
from the output image, not the input volume
data as is the case with object based
techniques.

Volume ray casting. Crocodile mummy provided by the Phoebe A. Hearst Museum of Anthropology, UC Berkeley.
CT data was acquired by Dr. Rebecca Fahrig, Department of Radiology, Stanford University, using a Siemens
SOMATOM Definition, Siemens Healthcare. The image was rendered by Fovia's High Definition Volume Rendering engine.

In this technique, a ray is generated for each desired image pixel. Using a simple camera model, the ray starts at the
center of projection of the camera (usually the eye point) and passes through the image pixel on the imaginary image
plane floating in between the camera and the
volume to be rendered. The ray is clipped by the boundaries of the volume in order to save time. Then the ray is
sampled at regular or adaptive intervals throughout the volume. The data is interpolated at each sample point, the
transfer function applied to form an RGBA sample, the sample is composited onto the accumulated RGBA of the
ray, and the process repeated until the ray exits the volume. The RGBA color is converted to an RGB color and
deposited in the corresponding image pixel. The process is repeated for every pixel on the screen to form the
completed image.
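
The following Python sketch shows the per-ray loop just described, under simplifying assumptions that are not in the article: an orthographic camera looking straight down the z axis of the grid, nearest-neighbour sampling, and a toy greyscale transfer function. Samples are classified, composited front to back, and the ray is terminated once it is effectively opaque.

import numpy as np

def grey_tf(sample):
    # Toy transfer function: brighter voxels are brighter and more opaque.
    v = sample / 255.0
    return v, v, v, v * v

def raycast(volume, step=0.5, opacity_cutoff=0.99):
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny, 3))
    for i in range(nx):
        for j in range(ny):
            color, alpha, z = np.zeros(3), 0.0, 0.0
            while z < nz - 1:
                sample = volume[i, j, int(round(z))]     # nearest-neighbour sample
                r, g, b, a = grey_tf(sample)
                a *= step                                # scale opacity by the step size
                color += (1.0 - alpha) * a * np.array([r, g, b])
                alpha += (1.0 - alpha) * a
                if alpha >= opacity_cutoff:              # early ray termination
                    break
                z += step
            image[i, j] = color                          # composited RGB for this pixel
    return image

# A tiny synthetic volume with a bright spherical blob in the middle.
coords = np.indices((32, 32, 32))
volume = np.where(sum((c - 16) ** 2 for c in coords) < 64, 200.0, 10.0)
img = raycast(volume)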


Splatting
This is a technique which trades quality for speed. Here, every volume element is splatted, as Lee Westover said, like
a snow ball, on to the viewing surface in back to front order. These splats are rendered as disks whose properties
(color and transparency) vary diametrically in normal (Gaussian) manner. Flat disks and those with other kinds of
property distribution are also used depending on the application.
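
A toy Python sketch of this idea follows, assuming a trivial orthographic projection along z and a greyscale "colour"; the Gaussian footprint radius and the test volume are invented for illustration.

import numpy as np

def splat(volume, footprint_radius=2, sigma=1.0):
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny))                                   # greyscale frame buffer
    r = footprint_radius
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))   # Gaussian footprint
    for z in range(nz - 1, -1, -1):                              # back-to-front order
        for x in range(r, nx - r):
            for y in range(r, ny - r):
                value = volume[x, y, z] / 255.0
                if value <= 0.0:
                    continue
                alpha = np.clip(kernel * value, 0.0, 1.0)        # splat opacity
                window = (slice(x - r, x + r + 1), slice(y - r, y + r + 1))
                # "Over" compositing of the splat onto what has been drawn so far.
                image[window] = alpha * value + (1.0 - alpha) * image[window]
    return image

img = splat(np.random.randint(0, 40, (32, 32, 16)).astype(float))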

Shear warp
The shear warp approach to volume rendering was
developed by Cameron and Undrill, popularized by
Philippe Lacroute and Marc Levoy.[2] In this technique,
the viewing transformation is transformed such that the
nearest face of the volume becomes axis aligned with
an off-screen image buffer with a fixed scale of voxels
to pixels. The volume is then rendered into this buffer
using the far more favorable memory alignment and
fixed scaling and blending factors. Once all slices of
the volume have been rendered, the buffer is then
warped into the desired orientation and scaled in the
displayed image.
This technique is relatively fast in software at the cost
of less accurate sampling and potentially worse image
quality compared to ray casting. There is memory
overhead for storing multiple copies of the volume, for
the ability to have near axis aligned volumes. This
overhead can be mitigated using run length encoding.

Example of a mouse skull (CT) rendering using the shear warp algorithm

Texture mapping
Many 3D graphics systems use texture mapping to apply images, or textures, to geometric objects. Commodity PC
graphics cards are fast at texturing and can efficiently render slices of a 3D volume, with real time interaction
capabilities. Workstation GPUs are even faster, and are the basis for much of the production volume visualization
used in medical imaging, oil and gas, and other markets (2007). In earlier years, dedicated 3D texture mapping
systems were used on graphics systems such as Silicon Graphics InfiniteReality, HP Visualize FX graphics
accelerator, and others. This technique was first described by Bill Hibbard and Dave Santek.[3]
These slices can either be aligned with the volume and rendered at an angle to the viewer, or aligned with the
viewing plane and sampled from unaligned slices through the volume. Graphics hardware support for 3D textures is
needed for the second technique.
Volume aligned texturing produces images of reasonable quality, though there is often a noticeable transition when
the volume is rotated.


Maximum intensity projection


As opposed to direct volume rendering, which requires
every sample value to be mapped to opacity and a color,
maximum intensity projection picks out and projects only
the voxels with maximum intensity that fall in the way of
parallel rays traced from the viewpoint to the plane of
projection.
This technique is computationally fast, but the 2D results
do not provide a good sense of depth of the original data.
To improve the sense of 3D, animations are usually
rendered of several MIP frames in which the viewpoint is
slightly changed from one to the other, thus creating the
illusion of rotation. This helps the viewer's perception to
find the relative 3D positions of the object components.
However, two MIP renderings from opposite
viewpoints are symmetrical images, which makes it
impossible for the viewer to distinguish between left or
right, front or back and even if the object is rotating
clockwise or counterclockwise even though it makes a
significant difference for the volume being rendered.

CT visualized by a maximum intensity projection of a mouse

MIP imaging was invented for use in nuclear medicine by Jerold Wallis, MD, in 1988, and subsequently published in
IEEE Transactions in Medical Imaging.
Surprisingly, an easy improvement to MIP is the local maximum intensity projection. In this technique we don't take
the global maximum value, but the first maximum value that is above a certain threshold. Because, in general, the
ray can be terminated earlier, this technique is faster and also gives somewhat better results, as it approximates occlusion.
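
A minimal Python sketch of both projections, for an orthographic view down the z axis, is given below; the threshold and traversal order are illustrative assumptions rather than a reference implementation.

import numpy as np

def mip(volume):
    # Global MIP: the brightest voxel along each z column.
    return volume.max(axis=2)

def local_mip(volume, threshold):
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny))
    for i in range(nx):
        for j in range(ny):
            best = 0.0
            for k in range(nz):
                v = volume[i, j, k]
                if v > best:
                    best = v
                elif best >= threshold:
                    break            # first local maximum above the threshold: stop early
            image[i, j] = best       # falls back to the global maximum if never above threshold
    return image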

Hardware-accelerated volume rendering


Due to the extremely parallel nature of direct volume rendering, special purpose volume rendering hardware was a
rich research topic before GPU volume rendering became fast enough. The most widely cited technology was
VolumePro, which used high memory bandwidth and brute force to render using the ray casting algorithm.
A recently exploited technique to accelerate traditional volume rendering algorithms such as ray-casting is the use of
modern graphics cards. Starting with the programmable pixel shaders, people recognized the power of parallel
operations on multiple pixels and began to perform general-purpose computing on graphics processing units
(GPGPU). The pixel shaders are able to read and write randomly from video memory and perform some basic
mathematical and logical calculations. These SIMD processors were used to perform general calculations such as
rendering polygons and signal processing. In recent GPU generations, the pixel shaders now are able to function as
MIMD processors (now able to independently branch) utilizing up to 1 GB of texture memory with floating point
formats. With such power, virtually any algorithm with steps that can be performed in parallel, such as volume ray
casting or tomographic reconstruction, can be performed with tremendous acceleration. The programmable pixel
shaders can be used to simulate variations in the characteristics of lighting, shadow, reflection, emissive color and so
forth. Such simulations can be written using high level shading languages.


Optimization techniques
The primary goal of optimization is to skip as much of the volume as possible. A typical medical data set can be 1
GB in size. To render that at 30 frame/s requires an extremely fast memory bus. Skipping voxels means that less
information needs to be processed.

Empty space skipping


Often, a volume rendering system will have a system for identifying regions of the volume containing no visible
material. This information can be used to avoid rendering these transparent regions.[4]
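
One common way to build such a system, sketched below with an assumed block size and visibility threshold, is to partition the volume into blocks and record each block's value range; a ray can then step over any block whose entire range maps to zero opacity.

import numpy as np

def build_block_ranges(volume, block=8):
    # Per-block minimum and maximum voxel values.
    nx, ny, nz = volume.shape
    bx, by, bz = nx // block, ny // block, nz // block
    mins = np.empty((bx, by, bz))
    maxs = np.empty((bx, by, bz))
    for i in range(bx):
        for j in range(by):
            for k in range(bz):
                cell = volume[i*block:(i+1)*block,
                              j*block:(j+1)*block,
                              k*block:(k+1)*block]
                mins[i, j, k], maxs[i, j, k] = cell.min(), cell.max()
    return mins, maxs

def block_is_empty(maxs, i, j, k, visible_min=50):
    # Skippable if even the block's maximum value is below the visibility threshold.
    return maxs[i, j, k] < visible_min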

Early ray termination


This is a technique used when the volume is rendered in front to back order. For a ray through a pixel, once
sufficient dense material has been encountered, further samples will make no significant contribution to the pixel and
so may be neglected.

Octree and BSP space subdivision


The use of hierarchical structures such as octree and BSP-tree could be very helpful for both compression of volume
data and speed optimization of volumetric ray casting process.

Volume segmentation
By sectioning out large portions of the volume that one considers uninteresting before rendering, the amount of
calculations that have to be made by ray casting or texture blending can be significantly reduced. This reduction can
be as much as from O(n) to O(log n) for n sequentially indexed voxels. Volume segmentation also has significant
performance benefits for other ray tracing algorithms.

Multiple and adaptive resolution representation


By representing less interesting regions of the volume in a coarser resolution, the data input overhead can be
reduced. On closer observation, the data in these regions can be populated either by reading from memory or disk, or
by interpolation. The coarser resolution volume is resampled to a smaller size in the same way as a 2D mipmap
image is created from the original. These smaller volumes are also used by themselves while rotating the volume to a
new orientation.
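
A minimal sketch of producing one coarser level, analogous to a 2D mipmap level, is shown below; the 2x2x2 averaging and the example volume are assumptions for illustration.

import numpy as np

def downsample(volume):
    # Average each 2x2x2 block of the input into one output voxel.
    nx, ny, nz = (s // 2 * 2 for s in volume.shape)   # drop odd trailing samples
    v = volume[:nx, :ny, :nz]
    return (v.reshape(nx // 2, 2, ny // 2, 2, nz // 2, 2)
             .mean(axis=(1, 3, 5)))

coarse = downsample(np.random.rand(64, 64, 64))       # 32x32x32 result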

Pre-integrated volume rendering


Pre-integrated volume rendering[5][6] is a method that can reduce sampling artifacts by pre-computing much of the
required data. It is especially useful in hardware-accelerated applications[7] because it improves quality without a
large performance impact. Unlike most other optimizations, this does not skip voxels. Rather it reduces the number
of samples needed to accurately display a region of voxels. The idea is to render the intervals between the samples
instead of the samples themselves. This technique captures rapidly changing material, for example the transition
from muscle to bone, with much less computation.
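
A rough sketch of the idea follows, with an invented transfer function and table resolution: for every pair of front and back sample values, the colour and opacity of the whole interval are integrated once ahead of time and stored in a lookup table that the renderer can consult per sample pair.

import numpy as np

N = 64                                   # number of scalar bins (illustrative)
scalars = np.linspace(0.0, 1.0, N)
tf_rgb = np.stack([scalars, scalars ** 2, np.sqrt(scalars)], axis=1)   # toy colours
tf_alpha = scalars ** 3                                                # toy opacities

def integrate_interval(sf, sb, steps=32):
    # Composite the transfer function along a linear ramp from scalars[sf] to scalars[sb].
    color, alpha = np.zeros(3), 0.0
    for s in np.linspace(scalars[sf], scalars[sb], steps):
        i = min(int(s * (N - 1)), N - 1)
        a = tf_alpha[i] / steps
        color += (1.0 - alpha) * a * tf_rgb[i]
        alpha += (1.0 - alpha) * a
    return color, alpha

# Precompute an N x N table indexed by (front sample bin, back sample bin).
table = [[integrate_interval(sf, sb) for sb in range(N)] for sf in range(N)]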


Image-based meshing
Image-based meshing is the automated process of creating computer models from 3D image data (such as MRI, CT,
Industrial CT or microtomography) for computational analysis and design, e.g. CAD, CFD, and FEA.

Temporal reuse of voxels


For a complete display view, only one voxel per pixel (the front one) is required to be shown (although more can be
used for smoothing the image). If animation is needed, the front voxels to be shown can be cached and their location
relative to the camera can be recalculated as it moves. Where display voxels become too far apart to cover all the
pixels, new front voxels can be found by ray casting or similar, and where two voxels are in one pixel, the front one
can be kept.

References
[1] Marc Levoy, "Display of Surfaces from Volume Data", IEEE CG&A, May 1988. Archive of Paper (http://graphics.stanford.edu/papers/volume-cga88/)
[2] Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation (http://graphics.stanford.edu/papers/shear/)
[3] Hibbard W., Santek D., "Interactivity is the key" (http://www.ssec.wisc.edu/~billh/p39-hibbard.pdf), Chapel Hill Workshop on Volume
Visualization, University of North Carolina, Chapel Hill, 1989, pp. 39–43.
[4] Sherbondy A., Houston M., Napel S.: Fast volume segmentation with simultaneous visualization using programmable graphics hardware. In
Proceedings of IEEE Visualization (2003), pp. 171–176.
[5] Max N., Hanrahan P., Crawfis R.: Area and volume coherence for efficient visualization of 3D scalar functions. In Computer Graphics (San
Diego Workshop on Volume Visualization, 1990) vol. 24, pp. 27–33.
[6] Stein C., Backer B., Max N.: Sorting and hardware assisted rendering for volume visualization. In Symposium on Volume Visualization
(1994), pp. 83–90.
[7] Lum E., Wilson B., Ma K.: High-Quality Lighting and Efficient Pre-Integration for Volume Rendering. In Eurographics/IEEE Symposium on
Visualization 2004.

Bibliography
1. ^ Barthold Lichtenbelt, Randy Crane, Shaz Naqvi, Introduction to Volume Rendering (Hewlett-Packard
Professional Books), Hewlett-Packard Company 1998.
2. ^ Peng H., Ruan Z., Long F., Simpson J.H., Myers E.W.: V3D enables real-time 3D visualization and quantitative
analysis of large-scale biological image data sets. Nature Biotechnology, 2010. doi: 10.1038/nbt.1612
(http://dx.doi.org/10.1038/nbt.1612). Volume Rendering of large high-dimensional image data
(http://www.nature.com/nbt/journal/vaop/ncurrent/full/nbt.1612.html).


Volumetric lighting
Volumetric lighting is a technique used in
3D computer graphics to add lighting effects
to a rendered scene. It allows the viewer to
see beams of light shining through the
environment; seeing sunbeams streaming
through an open window is an example of
volumetric lighting, also known as
crepuscular rays. The term seems to have
been introduced from cinematography and is
now widely applied to 3D modelling and
rendering, especially in the field of 3D gaming.

Forest scene from Big Buck Bunny, showing light rays through the canopy.

In volumetric lighting, the light cone emitted by a light source is modeled as a transparent object and considered as a
container of a "volume": as a result, light has the capability to give the effect of passing through an actual three
dimensional medium (such as fog, dust, smoke, or steam) that is inside its volume, just like in the real world.

How volumetric lighting works


Volumetric lighting requires two components: a light space shadow map, and a depth buffer. Starting at the near clip
plane of the camera, the whole scene is traced and sampling values are accumulated into the input buffer. For each
sample, it is determined if the sample is lit by the source of light being processed or not, using the shadowmap as a
comparison. Only lit samples will affect final pixel color.
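
A minimal Python sketch of that marching loop is shown below, under assumptions not made in the text: a directional light shining straight down the -y axis, a "shadow map" that simply stores the height of the highest occluder for every (x, z) column, and arbitrary density constants.

import numpy as np

def march_view_ray(ray_points, shadow_map, density=0.02):
    # Accumulate in-scattered light along one camera ray. ray_points is an
    # (N, 3) array of sample positions ordered from the near clip plane outwards.
    scattering = 0.0
    transmittance = 1.0
    for x, y, z in ray_points:
        lit = y >= shadow_map[int(x), int(z)]        # shadow-map comparison
        if lit:
            scattering += transmittance * density    # only lit samples contribute
        transmittance *= (1.0 - density)             # attenuate along the view ray
    return scattering

# Example: a 16x16 "shadow map" with a solid block of occluders in the middle.
shadow_map = np.zeros((16, 16))
shadow_map[6:10, 6:10] = 8.0
ray = np.array([[8.0, 4.0, t] for t in np.linspace(0.0, 15.0, 64)])
print(march_view_ray(ray, shadow_map))
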
This basic technique works, but requires more optimization to function in real time. One way to optimize volumetric
lighting effects is to render the lighting volume at a much coarser resolution than that which the graphics context is
using. This creates some bad aliasing artifacts, but that is easily touched up with a blur.
A stencil buffer can also be used, as with the shadow volume technique.
Another technique can also be used to provide usually satisfying, if inaccurate volumetric lighting effects. The
algorithm functions by blurring luminous objects away from the center of the main light source. Generally, the
transparency is progressively reduced with each blur step, especially in more luminous scenes. Note that this requires
an on-screen source of light.[1]

References
[1] NeHe Volumetric Lighting (http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=36)

External links
Volumetric lighting tutorial at Art Head Start (http://www.art-head-start.com/tutorial-volumetric.html)
3D graphics terms dictionary at Tweak3D.net (http://www.tweak3d.net/3ddictionary/)


Voxel
A voxel (volume element) represents a value on a regular grid in three
dimensional space. Voxel is a combination of "volume" and "pixel"
where pixel is a combination of "picture" and "element".[1] This is
analogous to a texel, which represents 2D image data in a bitmap
(which is sometimes referred to as a pixmap). As with pixels in a
bitmap, voxels themselves do not typically have their position (their
coordinates) explicitly encoded along with their values. Instead, the
position of a voxel is inferred based upon its position relative to other
voxels (i.e., its position in the data structure that makes up a single
volumetric image). In contrast to pixels and voxels, points and
polygons are often explicitly represented by the coordinates of their
vertices. A direct consequence of this difference is that polygons are
able to efficiently represent simple 3D structures with lots of empty or
homogeneously filled space, while voxels are good at representing
regularly sampled spaces that are non-homogeneously filled.
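
A small sketch of this implicit addressing, with arbitrary example dimensions, is given below: the voxel's coordinates are recovered from its offset in a flat array rather than stored with the value.

nx, ny, nz = 512, 512, 512            # e.g. a 512x512x512 volume

def voxel_index(x, y, z):
    # Flat array offset of voxel (x, y, z), with x varying fastest.
    return x + nx * (y + ny * z)

def voxel_coords(index):
    # Recover (x, y, z) from a flat array offset.
    x = index % nx
    y = (index // nx) % ny
    z = index // (nx * ny)
    return x, y, z

assert voxel_coords(voxel_index(10, 20, 30)) == (10, 20, 30)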

A series of voxels in a stack, with a single voxel shaded

Voxels are frequently used in the visualization and analysis of medical and scientific data. Some volumetric displays
use voxels to describe their resolution. For example, a display might be able to show 512×512×512 voxels.

Rasterization
Another technique for voxels involves raster graphics, where every pixel of the display is simply ray traced into the
scene. A typical implementation will raytrace each pixel of the display starting at the bottom of the screen using
what is known as a y-buffer. When a voxel is reached that has a higher y value on the display, it is added to the
y-buffer, overriding the previous value, and connected with the previous y-value on the screen, interpolating the color
values.
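
A compact Python sketch in the spirit of such column renderers follows; the camera model, constants and data are invented, and the view is fixed straight ahead with no rotation.

import numpy as np

def render_column(heightmap, colormap, cam_x, cam_y, cam_height,
                  screen_height=200, horizon=100, scale=120.0, max_dist=300):
    # Render one screen column looking along +y; returns (screen_y, color)
    # pixels drawn front to back using a y-buffer.
    pixels = []
    y_buffer = screen_height                 # lowest screen row drawn so far
    for dist in range(1, max_dist):
        mx, my = int(cam_x), int(cam_y + dist)
        if not (0 <= mx < heightmap.shape[0] and 0 <= my < heightmap.shape[1]):
            break
        h = heightmap[mx, my]
        # Perspective projection of the terrain height to a screen row.
        screen_y = int((cam_height - h) / dist * scale + horizon)
        if screen_y < y_buffer:              # only the part not yet covered is drawn
            for y in range(max(screen_y, 0), y_buffer):
                pixels.append((y, colormap[mx, my]))
            y_buffer = max(screen_y, 0)
    return pixels

heightmap = np.random.randint(0, 50, (256, 256))
colormap = np.random.randint(0, 255, (256, 256))
print(len(render_column(heightmap, colormap, cam_x=128, cam_y=10, cam_height=80)))
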
Outcast and other 1990s video games employed this graphics technique for effects such as reflection and
bump-mapping and usually for terrain rendering. Outcast's graphics engine was mainly a combination of a ray
casting (heightmap) engine, used to render the landscape, and a texture mapping polygon engine used to render
objects. The "Engine Programming" section of the games credits in the manual has several subsections related to
graphics, among them: "Landscape Engine", "Polygon Engine", "Water & Shadows Engine" and "Special effects
Engine". Although Outcast is often cited as a forerunner of voxel technology, this is somewhat misleading. The
game does not actually model three-dimensional volumes of voxels. Instead, it models the ground as a surface,
which may be seen as being made up of voxels. The ground is decorated with objects that are modeled using
texture-mapped polygons. When Outcast was developed, the term "voxel engine", when applied to computer games,
commonly referred to a ray casting engine (for example the VoxelSpace engine). On the engine technology page of
the game's website, the landscape engine is also referred to as the "Voxels engine".[2] The engine is purely
software-based; it does not rely on hardware-acceleration via a 3D graphics card.[3]
John Carmack also experimented with voxels for the Quake III engine.[4] One problem cited by Carmack was the
lack of graphics cards designed specifically for such rendering, requiring it to be done in software, which
remains an issue with the technology to this day.
Comanche was also the first commercial flight simulation based on voxel technology via the company's proprietary
Voxel Space engine (written entirely in Assembly language). This rendering technique allowed for much more
detailed and realistic terrain compared to simulations based on vector graphics at that time.


Voxel data
A voxel represents a single sample, or data point, on a regularly
spaced, three-dimensional grid. This data point can consist of a single
piece of data, such as an opacity, or multiple pieces of data, such as a
color in addition to opacity. A voxel represents only a single point on
this grid, not a volume; the space between each voxel is not
represented in a voxel-based dataset. Depending on the type of data
and the intended use for the dataset, this missing information may be
reconstructed and/or approximated, e.g. via interpolation.
The value of a voxel may represent various properties. In CT scans, the
values are Hounsfield units, giving the opacity of material to
X-rays.[5]:29 Different types of value are acquired from MRI or
ultrasound.

A (smoothed) rendering of a data set of voxels for a macromolecule

Voxels can contain multiple scalar values, essentially vector (tensor)
data; in the case of ultrasound scans with B-mode and Doppler data,
density, and volumetric flow rate are captured as separate channels of
data relating to the same voxel positions.

While voxels provide the benefit of precision and depth of reality, they
are typically large data sets and are unwieldy to manage given the bandwidth of common computers. However,
through efficient compression and manipulation of large data files, interactive visualization can be enabled on
consumer market computers.
Other values may be useful for immediate 3D rendering, such as a surface normal vector and color.

Uses
Common uses of voxels include volumetric imaging in medicine and representation of terrain in games and
simulations. Voxel terrain is used instead of a heightmap because of its ability to represent overhangs, caves, arches,
and other 3D terrain features. These concave features cannot be represented in a heightmap due to only the top 'layer'
of data being represented, leaving everything below it filled (the volume that would otherwise be the inside of the
caves, or the underside of arches or overhangs).

Visualization
A volume containing voxels can be visualized either by direct volume rendering or by the extraction of polygon
isosurfaces that follow the contours of given threshold values. The marching cubes algorithm is often used for
isosurface extraction; however, other methods exist as well.

Computer gaming
Planet Explorers is a 3D building game that uses voxels for rendering equipment, buildings, and terrain. Using a
voxel editor, players can actually create their own models for weapons and buildings, and terrain can be modified
similar to other building games.
C4 Engine is a game engine that uses voxels for in game terrain and has a voxel editor for its built-in level editor.
Miner Wars 2081 uses its own Voxel Rage engine to let the user deform the terrain of asteroids allowing tunnels
to be formed.
Many NovaLogic games have used voxel-based rendering technology, including the Delta Force, Armored Fist
and Comanche series.

Voxel
Westwood Studios' Command & Conquer: Tiberian Sun and Command & Conquer: Red Alert 2 use voxels to
render most vehicles.
Westwood Studios' Blade Runner video game used voxels to render characters and artifacts.
Outcast, a game made by Belgian developer Appeal, sports outdoor landscapes that are rendered by a voxel
engine.
The Comanche series, made by NovaLogic, used voxel rasterization for terrain rendering.[6]
The videogame Amok for the Sega Saturn makes use of voxels in its scenarios.
The computer game Vangers uses voxels for its two-level terrain system.
Master of Orion III uses voxel graphics to render space battles and solar systems. Battles displaying 1000 ships at
a time were rendered slowly on computers without hardware graphic acceleration.
Sid Meier's Alpha Centauri uses voxel models to render units.
Shattered Steel featured deforming landscapes using voxel technology.
Build engine first-person shooter games Shadow Warrior and Blood use voxels instead of sprites as an option for
many of the item pickups and scenery. Duke Nukem 3D has a fan-created pack in a similar style.
Crysis, as well as the Cryengine 2 and 3, use a combination of heightmaps and voxels for its terrain system.
Worms 4: Mayhem uses a voxel-based engine to simulate land deformation similar to the older 2D Worms games.
The multi-player role playing game Hexplore uses a voxel engine allowing the player to rotate the isometric
rendered playfield.
The computer game Voxatron, produced by Lexaloffle, is composed and generated fully using voxels.
Ace of Spades used Ken Silverman's Voxlap engine before being rewritten in a bespoke OpenGL engine.
3D Dot Game Heroes uses voxels to present retro-looking graphics.
Vox, an upcoming voxel based exploration/RPG game focusing on player generated content.
ScrumbleShip, a block-building MMO space simulator game in development, renders each in-game component
and damage to those components using dozens to thousands of voxels.
Castle Story, a castle building Real Time Strategy game in development, has terrain consisting of smoothed
voxels
Block Ops, a voxel based First Person Shooter game.
Cube World, an indie voxel-based game with RPG elements inspired by games such as Terraria, Diablo,
The Legend of Zelda, Monster Hunter, World of Warcraft, Secret of Mana, and many others.
EverQuest Next and EverQuest Next: Landmark, upcoming MMORPGs by Sony Online Entertainment, make
extensive use of voxels for world creation as well as player generated content
7 Days to Die, Voxel based open world survival horror game developed by The Fun Pimps Entertainment.
Brutal Nature, Voxel based Survival FPS that uses surface net relaxation to render voxels as a smooth mesh.

Voxel editors
While scientific volume visualization doesn't require modifying the actual voxel data, voxel editors can be used to
create art (especially 3D pixel art) and models for voxel based games. Some editors are focused on a single approach
to voxel editing while others mix various approaches. Some common approaches are:
Slice based: The volume is sliced in one or more axes and the user can edit each image individually using 2D
raster editor tools. These generally store color information in voxels.
Sculpture: Similar to the vector counterpart but with no topology constraints. These usually store density
information in voxels and lack color information.
Building blocks: The user can add and remove blocks just like a construction set toy.


Voxel editors for games


Many game developers use in-house editors that are not released to the public, but a few games have publicly
available editors, some of them created by players.
Slice based fan-made Voxel Section Editor III for Command & Conquer: Tiberian Sun and Command &
Conquer: Red Alert 2.
SLAB6 and VoxEd are sculpture based voxel editors used by Voxlap engine games, including Voxelstein 3D and
Ace of Spades.
The official Sandbox 2 editor for CryEngine 2 games (including Crysis) has support for sculpting voxel based
terrain.
The C4 Engine and editor support multiple detail level (LOD) voxel terrain by implementing the patent-free
Transvoxel algorithm.

General purpose voxel editors


There are a few voxel editors available that are not tied to specific games or engines. They can be used as
alternatives or complements to traditional 3D vector modeling.

Extensions
A generalization of a voxel is the doxel, or dynamic voxel. This is used in the case of a 4D dataset, for example, an
image sequence that represents 3D space together with another dimension such as time. In this way, an image could
contain 100×100×100×100 doxels, which could be seen as a series of 100 frames of a 100×100×100 volume image
(the equivalent for a 3D image would be showing a 2D cross section of the image in each frame). Although storage
and manipulation of such data requires large amounts of memory, it allows the representation and analysis of
spacetime systems.

References
[1] http://www.tomshardware.com/reviews/voxel-ray-casting,2423-3.html
[2] Engine Technology (http://web.archive.org/web/20060507235618/http://www.outcast-thegame.com/tech/paradise.htm)
[3] "Voxel terrain engine (http://www.codermind.com/articles/Voxel-terrain-engine-building-the-terrain.html)", introduction. In a coder's
mind, 2005.
[4] http://www.tomshardware.com/reviews/voxel-ray-casting,2423-2.html
[5] Novelline, Robert. Squire's Fundamentals of Radiology. Harvard University Press. 5th edition. 1997. ISBN 0-674-83339-2.
[6] http://projectorgames.net/blog/?p=168

External links
Games with voxel graphics (http://www.mobygames.com/game-group/visual-technique-style-voxel-graphics)
at MobyGames
Fundamentals of voxelization (http://labs.cs.sunysb.edu/labs/projects/volume/Papers/Voxel/)


Z-buffering
In computer graphics, z-buffering, also known as depth buffering, is
the management of image depth coordinates in three-dimensional
(3-D) graphics, usually done in hardware, sometimes in software. It is
one solution to the visibility problem, which is the problem of deciding
which elements of a rendered scene are visible, and which are hidden.
The painter's algorithm is another common solution which, though less
efficient, can also handle non-opaque scene elements.
When an object is rendered, the depth of a generated pixel (z
coordinate) is stored in a buffer (the z-buffer or depth buffer). This
buffer is usually arranged as a two-dimensional array (x-y) with one
element for each screen pixel. If another object of the scene must be
rendered in the same pixel, the method compares the two depths and
overrides the current pixel if the object is closer to the observer. The
chosen depth is then saved to the z-buffer, replacing the old one. In the
end, the z-buffer will allow the method to correctly reproduce the usual
depth perception: a close object hides a farther one. This is called
z-culling.

Z-buffer data

The granularity of a z-buffer has a great influence on the scene quality: a 16-bit z-buffer can result in artifacts (called
"z-fighting") when two objects are very close to each other. A 24-bit or 32-bit z-buffer behaves much better,
although the problem cannot be entirely eliminated without additional algorithms. An 8-bit z-buffer is almost never
used since it has too little precision.

Uses
The Z-buffer is a technology used in almost all contemporary computers, laptops and mobile phones for performing
3-D (3 dimensional) graphics, for example for computer games. The Z-buffer is implemented as hardware in the
silicon ICs (integrated circuits) within these computers. The Z-buffer is also used (implemented as software as
opposed to hardware) for producing computer-generated special effects for films.
Furthermore, Z-buffer data obtained from rendering a surface from a light's point-of-view permits the creation of
shadows by the "shadow mapping" technique.

Developments
Even with small enough granularity, quality problems may arise when precision in the z-buffer's distance values is
not spread evenly over distance. Nearer values are much more precise (and hence can display closer objects better)
than values which are farther away. Generally, this is desirable, but sometimes it will cause artifacts to appear as
objects become more distant. A variation on z-buffering which results in more evenly distributed precision is called
w-buffering (see below).
At the start of a new scene, the z-buffer must be cleared to a defined value, usually 1.0, because this value is the
upper limit (on a scale of 0 to 1) of depth, meaning that no object is present at this point through the viewing
frustum.
The invention of the z-buffer concept is most often attributed to Edwin Catmull, although Wolfgang Straßer also
described this idea in his 1974 Ph.D. thesis (see Note 1).


On recent PC graphics cards (1999–2005), z-buffer management uses a significant chunk of the available memory
bandwidth. Various methods have been employed to reduce the performance cost of z-buffering, such as lossless
compression (computer resources to compress/decompress are cheaper than bandwidth) and ultra fast hardware
z-clear that makes obsolete the "one frame positive, one frame negative" trick (skipping inter-frame clear altogether
using signed numbers to cleverly check depths).

Z-culling
In rendering, z-culling is early pixel elimination based on depth, a method that provides an increase in performance
when rendering of hidden surfaces is costly. It is a direct consequence of z-buffering, where the depth of each pixel
candidate is compared to the depth of existing geometry behind which it might be hidden.
When using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to
skip the entire process of lighting and texturing a pixel that would not be visible anyway. Also, time-consuming
pixel shaders will generally not be executed for the culled pixels. This makes z-culling a good optimization
candidate in situations where fillrate, lighting, texturing or pixel shaders are the main bottlenecks.
While z-buffering allows the geometry to be unsorted, sorting polygons by increasing depth (thus using a reverse
painter's algorithm) allows each screen pixel to be rendered fewer times. This can increase performance in
fillrate-limited scenes with large amounts of overdraw, but if not combined with z-buffering it suffers from severe
problems such as:
polygons might occlude one another in a cycle (e.g. : triangle A occludes B, B occludes C, C occludes A), and
there is no canonical "closest" point on a triangle (e.g.: no matter whether one sorts triangles by their centroid or
closest point or furthest point, one can always find two triangles A and B such that A is "closer" but in reality B
should be drawn first).
As such, a reverse painter's algorithm cannot be used as an alternative to Z-culling (without strenuous
re-engineering), except as an optimization to Z-culling. For example, an optimization might be to keep polygons
sorted according to x/y-location and z-depth to provide bounds, in an effort to quickly determine if two polygons
might possibly have an occlusion interaction.

Algorithm
Given: A list of polygons {P1, P2, ..., Pn}
Output: A COLOR array, which displays the intensity of the visible polygon surfaces.

Initialize:
    note: z-depth and z-buffer(x,y) are positive
    z-buffer(x,y) = max depth
    COLOR(x,y) = background color

Begin:
    for (each polygon P in the polygon list)
    {
        for (each pixel (x,y) that intersects P)
        {
            calculate z-depth of P at (x,y)
            if (z-depth < z-buffer[x,y])
            {
                z-buffer[x,y] = z-depth
                COLOR(x,y) = intensity of P at (x,y)
            }
        }
    }
    display COLOR array

Mathematics
The range of depth values in camera space (see 3D projection) to be rendered is often defined between a near and a
far value of z. After a perspective transformation, the new value of z, or z', is defined by:

    z' = \frac{far + near}{far - near} + \frac{1}{z} \cdot \frac{-2 \cdot far \cdot near}{far - near}

After an orthographic projection, the new value of z, or z', is defined by:

    z' = 2 \cdot \frac{z - near}{far - near} - 1

where z is the old value of z in camera space, and is sometimes called w or w'.

The resulting values of z' are normalized between the values of -1 and 1, where the near plane is at -1 and the
far plane is at 1. Values outside of this range correspond to points which are not in the viewing frustum, and
shouldn't be rendered.

Fixed-point representation

Typically, these values are stored in the z-buffer of the hardware graphics accelerator in fixed point format. First they
are normalized to a more common range which is [0, 1] by substituting the appropriate conversion
z'_2 = \frac{z' + 1}{2} into the previous formula:

    z' = \frac{far + near}{2 \cdot (far - near)} + \frac{1}{z} \cdot \frac{-far \cdot near}{far - near} + \frac{1}{2}

Second, the above formula is multiplied by S = 2^d - 1, where d is the depth of the z-buffer (usually 16, 24 or 32
bits), and the result is rounded to an integer:

    z' = f(z) = \left\lfloor \left(2^d - 1\right) \cdot \left( \frac{far + near}{2 \cdot (far - near)} + \frac{1}{z} \cdot \frac{-far \cdot near}{far - near} + \frac{1}{2} \right) \right\rfloor

This formula can be inverted and differentiated in order to calculate the z-buffer resolution (the 'granularity' mentioned
earlier). The inverse of the above f(z):

    z = \frac{-far \cdot near}{\frac{z'}{S} \cdot (far - near) - far}, \qquad S = 2^d - 1

The z-buffer resolution in terms of camera space would be the incremental value resulting from the smallest change
in the integer stored in the z-buffer, which is +1 or -1. Therefore this resolution can be calculated from the derivative
of z as a function of z':

    \frac{dz}{dz'} = \frac{far \cdot near \cdot (far - near)}{S \cdot \left( \frac{z'}{S} \cdot (far - near) - far \right)^2}

Expressing it back in camera space terms, by substituting z' by the above f(z):

    \frac{dz}{dz'} = \frac{z^2 \cdot (far - near)}{S \cdot far \cdot near} \approx \frac{z^2}{S \cdot near}

This shows that the values of z' are grouped much more densely near the near plane, and much more sparsely
farther away, resulting in better precision closer to the camera. The smaller the near/far ratio is, the less precision
there is far away; having the near plane set too closely is a common cause of undesirable rendering artifacts in
more distant objects.

To implement a z-buffer, the values of z' are linearly interpolated across screen space between the vertices of the
current polygon, and these intermediate values are generally stored in the z-buffer in fixed point format.
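
As a quick numerical illustration of the last formula, using assumed example values (a 16-bit buffer, near = 0.1 and far = 1000):

near, far, d = 0.1, 1000.0, 16
S = 2 ** d - 1

def resolution(z):
    # Approximate camera-space depth step per z-buffer increment at depth z.
    return z * z * (far - near) / (S * far * near)

for z in (1.0, 10.0, 100.0, 1000.0):
    print(z, resolution(z))
# The step grows with the square of the distance, so precision is concentrated
# near the near plane, as stated above.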

W-buffer
To implement a w-buffer, the old values of z in camera space, or w, are stored in the buffer, generally in floating
point format. However, these values cannot be linearly interpolated across screen space from the vertices: they
usually have to be inverted, interpolated, and then inverted again. The resulting values of w, as opposed to z', are
spaced evenly between near and far. There are implementations of the w-buffer that avoid the inversions altogether.
Whether a z-buffer or w-buffer results in a better image depends on the application.


External links
Learning to Love your Z-buffer (http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html)
Alpha-blending and the Z-buffer (http://www.sjbaker.org/steve/omniv/alpha_sorting.html)

Notes
Note 1: see W.K. Giloi, J.L. Encarnação, W. Straßer. "The Giloi's School of Computer Graphics". Computer
Graphics 35 4:12–16.


Z-fighting
Z-fighting is a phenomenon in 3D rendering that occurs when two or
more primitives have similar values in the z-buffer. It is particularly
prevalent with coplanar polygons, where two faces occupy essentially
the same space, with neither in front. Affected pixels are rendered with
fragments from one polygon or the other arbitrarily, in a manner
determined by the precision of the z-buffer. It can also vary as the
scene or camera is changed, causing one polygon to "win" the z test,
then another, and so on. The overall effect is a flickering, noisy
rasterization of two polygons which "fight" to color the screen pixels.
This problem is usually caused by limited sub-pixel precision and
floating point and fixed point round-off errors.

The effect seen on two coplanar polygons

Z-fighting can be reduced through the use of a higher resolution depth buffer, by z-buffering in some scenarios, or by
simply moving the polygons further apart. Z-fighting which cannot be entirely eliminated in this manner is often
resolved by the use of a stencil buffer, or by applying a post transformation screen space z-buffer offset to one
polygon which does not affect the projected shape on screen, but does affect the z-buffer value to eliminate the
overlap during pixel interpolation and comparison. Where z-fighting is caused by different transformation paths in
hardware for the same geometry (for example in a multi-pass rendering scheme) it can sometimes be resolved by
requesting that the hardware uses invariant vertex transformation.
The more z-buffer precision one uses, the less likely it is that z-fighting will be encountered. But for coplanar
polygons, the problem is inevitable unless corrective action is taken.
As the distance between the near and far clip planes increases, and in particular as the near plane is selected close to
the eye, the likelihood that z-fighting between primitives will occur grows. With large virtual environments
inevitably there is an inherent conflict between the need to resolve visibility in the distance and in the foreground, so
for example in a space flight simulator if you draw a distant galaxy to scale, you will not have the precision to
resolve visibility on any cockpit geometry in the foreground (although even a numerical representation would
present problems prior to z-buffered rendering). To mitigate these problems, z-buffer precision is weighted towards
the near clip plane, but this is not the case with all visibility schemes and it is insufficient to eliminate all z-fighting
issues.


Demonstration of z-fighting with multiple colors and textures over a grey background


Appendix
3D computer graphics software

3D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D
rendering.


Classification
Modeling
3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs
of this class are called modeling applications or modelers.
3D modelers allow users to create and alter models via their 3D mesh. Users can add, subtract, stretch and otherwise
change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can
be rotated and the view can be zoomed in and out.
3D modelers can export their models to files, which can then be imported into other applications as long as the
metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write
data in the native formats of other applications.
Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives and
texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able
to generate full-motion video of a series of rendered scenes (i.e. animation).

Rendering
Although 3D modeling and CAD software may perform 3D rendering as well (e.g. Autodesk 3ds Max or Blender),
exclusive 3D rendering software also exists.

Computer-aided design
Computer aided design software may employ the same fundamental 3D modeling techniques that 3D modeling
software uses, but their goals differ. They are used in computer-aided engineering, computer-aided manufacturing,
finite element analysis, product lifecycle management, 3D printing and computer-aided architectural design.

Complementary tools
After producing video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final
Cut Pro at the low end, or Autodesk Combustion, Digital Fusion, Shake at the high-end. Match moving software is
commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.
Use of real-time computer graphics engines to create a cinematic production is called machinima.

References
External links
3D Tools table (http://wiki.cgsociety.org/index.php/Comparison_of_3d_tools) from the CGSociety wiki
Comparison of 10 most popular modeling software (http://tideart.com/?id=4e26f595) from TideArt

256

Article Sources and Contributors

Article Sources and Contributors


3D rendering Source: http://en.wikipedia.org/w/index.php?oldid=590626818 Contributors: 3d rendering, 3dpvpvt, AGiorgio08, ALoopingIcon, Al.locke, Alanbly, Azunda, Balph Eubank,
Calmer Waters, Canoe1967, Cekli829, Chaim Leib, Chasingsol, Chowbok, CommonsDelinker, Danica41, Deemarq, Dewritech, Dicklyon, Doug Bell, Drakesens, Dsajga, Eekerz, Favonian,
Ferrari12345678901, Groupthink, Grstain, Hemant 832003, Hu12, Imsosleepy123, Iskander HFC, JamesBWatson, Jeff G., Jmlk17, Julius Tuomisto, Keilana, Kerotan, Kri, Ldo, M-le-mot-dit,
Marek69, Mdd, Michael Hardy, MrOllie, NJR ZA, Nixeagle, Obsidian Soul, Oicumayberight, Pchackal, Philip Trueman, Piano non troppo, Pqnd Render, QuantumEngineer, Res2216firestar,
Rilak, Riley Huntley, SantiagoCeballos, SiobhanHansa, Skhedkar, SkyMachine, SkyWalker, Sp, Ted1712, TheBendster, TheRealFennShysa, Tim1357, Verbist, Webclient101, Woohookitty,
Zarex, 80 ,123 anonymous edits
Alpha mapping Source: http://en.wikipedia.org/w/index.php?oldid=545338107 Contributors: Chaser3275, Eekerz, Ironholds, Phynicen, Squids and Chips, Sumwellrange, TiagoTiago
Ambient occlusion Source: http://en.wikipedia.org/w/index.php?oldid=588389148 Contributors: ALoopingIcon, Arunregi007, BigDwiki, Bovineone, CitizenB, Closedmouth, Dave Bowman Discovery Won, DewiMorgan, Eekerz, Falkreon, Frecklefoot, Gaius Cornelius, George100, Grafen, JohnnyMrNinja, Jotapeh, Knacker ITA, Kri, Miker@sundialservices.com, Mr.BlueSummers,
Mrtheplague, Prolog, Quibik, RJHall, Rehno Lindeque, Simeon, SimonP, Snakkino, TAnthony, The Anome, 63 anonymous edits
Anisotropic filtering Source: http://en.wikipedia.org/w/index.php?oldid=590591332 Contributors: Angela, Berkut, Berland, Blotwell, Bookandcoffee, Bubuka, CloudStrifeNBHM, Comet
Tuttle, CommonsDelinker, Dast, DavidPyke, Dorbie, Eekerz, Eyreland, Fredrik, FrenchIsAwesome, Fryed-peach, Furrykef, Gang65, Gazpacho, GeorgeMoney, Hcs, Hebrides, Holek,
Illuminatiscott, Iridescent, Joelholdsworth, Josephmarty, Karlhendrikse, Karol Langner, Knacker ITA, Kri, L888Y5, MattGiuca, Mblumber, Michael Snow, Neckelmann, Ni.cero, Pnm, Ra4king,
Room101, Rory096, Rudolflai, SBareSSomErMig, Shandris, Skarebo, Spaceman85, Valarauka, Velvetron, Versatranitsonlywaytofly, Voidxor, Wayne Hardman, WhosAsking, WikiBone,
Yacoby, Yamla, Ztobor, 70 anonymous edits
Back-face culling Source: http://en.wikipedia.org/w/index.php?oldid=574813077 Contributors: Andreas Kaufmann, BuzzerMan, Canderson7, Charles Matthews, Deepomega, Eric119,
Gazpacho, Ldo, LrdChaos, Mahyar d zig, Mdd4696, RJHall, Radagast83, Rainwarrior, Simeon, Snigbrook, Syrthiss, Tempshill, The last username left was taken, Tolkien fan, Uker, W3bbo,
Xavexgoem, Yworo, , 12 anonymous edits
Beam tracing Source: http://en.wikipedia.org/w/index.php?oldid=552154434 Contributors: Altenmann, Bduvenhage, CesarB, Hetar, Kibibu, Kri, M-le-mot-dit, Mark Arsten, Porges, RJFJR,
RJHall, Reedbeta, Samuel A S, Srleffler, 9 anonymous edits
Bidirectional texture function Source: http://en.wikipedia.org/w/index.php?oldid=585123977 Contributors: Andreas Kaufmann, Changorino, Dp, Guanaco, Ivokabel, Keefaas, Kjd33, Ldo,
Marasmusine, Redress perhaps, RichiH, SimonP, SirGrant, 17 anonymous edits
Bilinear filtering Source: http://en.wikipedia.org/w/index.php?oldid=561217170 Contributors: Antoantonov, AxelBoldt, Berland, ChrisGualtieri, ChrisHodgesUK, Dcoetzee, Djanvk, Eekerz,
Finetooth, Furrykef, Grendelkhan, HarrisX, Karlhendrikse, Lotje, Lunaverse, MarylandArtLover, Michael Hardy, MinorEdits, Mmj, Msikma, Neckelmann, NorkNork, One.Ouch.Zero, Peter M
Gerdes, Poor Yorick, Rgaddipa, Satchmo, Scepia, Shureg, Skulvar, Sparsefarce, Sterrys, Thegeneralguy, Valarauka, Vorn, Xaosflux, Zarex, 31 anonymous edits
Binary space partitioning Source: http://en.wikipedia.org/w/index.php?oldid=569902953 Contributors: Abdull, Altenmann, Amanaplanacanalpanama, Amritchowdhury, Angela, AquaGeneral,
Ariadie, B4hand, Bomazi, Brucenaylor, Brutaldeluxe, Bryan Derksen, Cbraga, Cgbuff, Chan siuman, Charles Matthews, ChrisGualtieri, Chrisjohnson, CyberSkull, Cybercobra, DanielPharos,
David Eppstein, Dcoetzee, Dionyziz, Dysprosia, Fredrik, Frencheigh, Gbruin, GregorB, Gyunt, Headbomb, Immibis, Immonster, Jafet, Jamesontai, Jkwchui, JohnnyMrNinja, Kdau, Kelvie,
KnightRider, Kri, LOL, Leithian, LogiNevermore, M-le-mot-dit, Mdob, Michael Hardy, Mild Bill Hiccup, Miquonranger03, Noxin911, NtpNtp, NuclearFriend, Obiwhonn, Oleg Alexandrov,
Operator link, Palmin, Percivall, Prikipedia, QuasarTE, RPHv, Reedbeta, Spodi, Stephan Leeds, Svick, Tabletop, Tarquin, The Anome, TreeMan100, Twri, Wiki alf, WikiLaurent, WiseWoman,
Wmahan, Wonghang, Yar Kramer, Zetawoof, 71 anonymous edits
Bounding interval hierarchy Source: http://en.wikipedia.org/w/index.php?oldid=527510270 Contributors: Altenmann, Czarkoff, David Eppstein, Imbcmdth, Michael Hardy, Oleg Alexandrov,
Rehno Lindeque, Snoopy67, Srleffler, Welsh, 26 anonymous edits
Bounding volume Source: http://en.wikipedia.org/w/index.php?oldid=582109189 Contributors: Aboeing, Aeris-chan, Ahering@cogeco.ca, Altenmann, CardinalDan, Chris the speller,
DavidCary, Flamurai, Forderud, Frank Shearar, Gdr, Gene Nygaard, Interiot, Iridescent, Jafet, Jaredwf, Lambiam, LokiClock, M-le-mot-dit, Michael Hardy, Oleg Alexandrov, Oli Filth,
Operativem, Pmaillot, RJHall, Reedbeta, Ryk, Sixpence, Smokris, Sterrys, T-tus, Tony1212, Tosha, Werddemer, WikHead, 45 anonymous edits
Bump mapping Source: http://en.wikipedia.org/w/index.php?oldid=556072976 Contributors: ALoopingIcon, Adem4ik, Al Fecund, ArCePi, Audrius u, Baggend, Baldhur, BluesD, BobtheVila,
Branko, Brion VIBBER, Chris the speller, CyberSkull, Damian Yerrick, Dhatfield, Dionyziz, Dormant25, Eekerz, Engwar, Frecklefoot, GDallimore, GoldDragon, GreatGatsby, GregorB,
Greudin, Guyinblack25, H, Haakon, Hadal, Halloko, Hamiltondaniel, Honette, Imroy, IrekReklama, Jats, Kenchikuben, Kimiko, KnightRider, Komap, Loisel, Lord Crc, M-le-mot-dit, Madoka,
Martin Kraus, Masem, Mephiston999, Michael Hardy, Mrwojo, Novusspero, Nxavar, Oyd11, Quentar, RJHall, Rainwarrior, Reedbeta, Roger Roger, Sam Hocevar, Scepia, Sdornan, ShashClp,
SkyWalker, Snoyes, SpaceFlight89, SpunkyBob, Sterrys, Svick, Tarinth, Th1rt3en, ThomasTenCate, Ussphilips, Versatranitsonlywaytofly, Viznut, WaysToEscape, Werdna, Xavexgoem,
Xezbeth, Yaninass2, 60 anonymous edits
CatmullClark subdivision surface Source: http://en.wikipedia.org/w/index.php?oldid=590642293 Contributors: Ahelps, Aquilosion, Ati3414, Austin512, Bebestbe, Berland, Chase me ladies,
I'm the Cavalry, Chikako, Cristiprefac, Cyp, David Eppstein, Decora, Duncan.Hull, Elmindreda, Empoor, Forderud, Furrykef, Giftlite, Goatcheese3, Gorbay, Guffed, Harmsma, Ianp5a, Irtopiste,
J.delanoy, Juhame, Karmacodex, Kinkybb, Krackpipe, Kubajzz, Lomacar, Michael Hardy, Mont29, Mr mr ben, My head itches, Mysid, Mystaker1, Niceguyedc, Nicholasbishop, Oleg
Alexandrov, Pablodiazgutierrez, Rasmus Faber, Sigmundv, Skybum, Smcquay, Tomruen, Willpowered, 74 anonymous edits
Conversion between quaternions and Euler angles Source: http://en.wikipedia.org/w/index.php?oldid=590726533 Contributors: Anakin101, BlindWanderer, Charles Matthews, EdJohnston,
Eiserlohpp, Florent Lamiraux, Fmalan, Forderud, Gaius Cornelius, Guentherwagner, Hyacinth, Icairns, Icalanise, Incnis Mrsi, JWWalker, Jcuadros, Jemebius, Jgoppert, Jheald, JohnBlackburne,
JordiGH, Juansempere, Linas, Lionelbrits, Marcofantoni84, Mjb4567, Niac2, Oleg Alexandrov, PAR, Patrick, RJHall, Radagast83, Stamcose, Steve Lovelace, ThomasV, TobyNorris, Waldir,
Woohookitty, ZeroOne, 41 anonymous edits
Cube mapping Source: http://en.wikipedia.org/w/index.php?oldid=589319019 Contributors: Barticus88, Bryan Seecrets, Eekerz, Foobarnix, JamesBWatson, Jknappett, MarylandArtLover,
MaxDZ8, Mo ainm, Mudd1, Neodop, Paolo.dL, Robwentworth, SharkD, Shashank Shekhar, Smjg, Smyth, SteveBaker, Tim1357, TopherTG, Versatranitsonlywaytofly, Zigger, 14 anonymous
edits
Diffuse reflection Source: http://en.wikipedia.org/w/index.php?oldid=586580834 Contributors: AManWithNoPlan, Adoniscik, Andrevruas, Apyule, Bluemoose, Casablanca2000in, Deor,
Dhatfield, Dicklyon, Eekerz, Falcon8765, Flamurai, Francs2000, GianniG46, Giftlite, Grebaldar, Incnis Mrsi, Jeff Dahl, JeffBobFrank, JohnOwens, Jojhutton, Lesnail, Linnormlord, Logger9,
Marcosaedro, Materialscientist, Matt Chase, Mbz1, Mirrorreflect, Owen214, Patrick, PerryTachett, Quindraco, RJHall, Rajah, Rjstott, Scriberius, Seaphoto, Shonenknifefan1, Srleffler,
Superblooper, Waldir, Wikijens, , 54 anonymous edits
Displacement mapping Source: http://en.wikipedia.org/w/index.php?oldid=556073920 Contributors: ALoopingIcon, Askewchan, CapitalR, Charles Matthews, Cmprince, Digitalntburn,
Dmharvey, Eekerz, Elf, Engwar, Firefox13, Furrykef, George100, GregorB, Ian Pitchford, Jacoplane, Jdtruax, Jhaiduce, JonH, Jordash, Kusunose, Mackseem, Markhoney, Moritz Moeller,
NeoRicen, Novusspero, Peter bertok, PianoSpleen, Pulsar, Puzzl, RJHall, Redquark, Robchurch, SpunkyBob, Sterrys, T-tus, Tom W.M., Tommstein, Toxic1024, Twinxor, Xxiii, 59 anonymous
edits
Doo–Sabin subdivision surface Source: http://en.wikipedia.org/w/index.php?oldid=551288546 Contributors: Berland, Cuvette, Deodar, Forderud, Hagerman, Jitse Niesen, Michael Hardy,
Tomruen, 6 anonymous edits
Edge loop Source: http://en.wikipedia.org/w/index.php?oldid=581222944 Contributors: Albrechtphilly, Balloonguy, Costela, Fages, Fox144112, Furrykef, G9germai, George100, Grafen,
Gurch, Guy BlueSummers, J04n, Marasmusine, ProveIt, R'n'B, Scott5114, Skapur, Zundark, 13 anonymous edits
Euler operator Source: http://en.wikipedia.org/w/index.php?oldid=582868132 Contributors: Brad7777, BradBeattie, Dina, Elkman, Havemann, InverseHypercube, Jayprich, Jitse Niesen, Ldo,
Marylandwizard, Mecanismo, MetaNest, Michael Hardy, Mild Bill Hiccup, Tompw, 2 anonymous edits
False radiosity Source: http://en.wikipedia.org/w/index.php?oldid=566959678 Contributors: Fratrep, Kostmo, Nainsal, Visionguru, 3 anonymous edits

Fragment Source: http://en.wikipedia.org/w/index.php?oldid=574275852 Contributors: Abtract, Adam majewski, BenFrantzDale, ChrisGualtieri, Ldo, Marasmusine, Sfingram, Sigma 7,
Thelennonorth, Wknight94, 1 anonymous edits
Geometry pipelines Source: http://en.wikipedia.org/w/index.php?oldid=576563594 Contributors: Bejnar, Bumm13, Cybercobra, Eda eng, Frap, GL1zdA, Hazardous Matt, Jesse Viviano, JoJan,
Joy, Joyous!, Jpbowen, R'n'B, Rilak, Robertvan1, Shaundakulbara, Stephenb, W Nowicki, 14 anonymous edits
Geometry processing Source: http://en.wikipedia.org/w/index.php?oldid=560850158 Contributors: ALoopingIcon, Alanbly, Betamod, Dsajga, EpsilonSquare, Frecklefoot, Happyrabbit, JMK,
Jeff3000, JennyRad, Jeodesic, Lantonov, Michael Hardy, N2e, PJY, Poobslag, RJHall, Siddhant, Sterrys, 14 anonymous edits
Global illumination Source: http://en.wikipedia.org/w/index.php?oldid=590533182 Contributors: Aenar, Ahelser, Andreas Kaufmann, Arru, Beland, Boijunk, Cappie2000, Chris Ssk,
ChrisGualtieri, Conversion script, CoolingGibbon, Dhatfield, Dormant25, Dqeswn, Elektron, Elena the Quiet, Favonian, Fractal3, Frap, Fru1tbat, Graphicsguy, H2oski2liv, Henrik, Heron,
Hhanke, Imroy, JYolkowski, Jontintinjordan, Jose Ramos, Jsnow, Kansik, Kri, Kriplozoik, Ldo, Levork, MartinPackerIBM, Maruchan, Mysid, Mystery01, N2f, NicoV, Nightscream, Nihiltres,
Nohat, Oldmanriver42, Paperquest, Paranoid, Peter bertok, Petereriksson, Pietaster, Pjrich, Pokipsy76, Pongley, Proteal, Pulle, RJHall, Reedbeta, Shaboomshaboom, Skorp, Smelialichu,
Smiley325, Th1rt3en, The machine512, Themunkee, Tim1357, Travistlo, UKURL, Wazery, Welsh, 81 anonymous edits
Gouraud shading Source: http://en.wikipedia.org/w/index.php?oldid=580710156 Contributors: Acdx, Akhram, Asiananimal, Bautze, Blueshade, Brion VIBBER, Chasingsol, Crater Creator,
Csl77, DMacks, Da Joe, Davehodgson333, David Eppstein, Dhatfield, Dicklyon, Eekerz, Furrykef, Gargaj, Giftlite, Hairy Dude, Jamelan, Jaxl, JelloB, Jon186, Jpbowen, Karada, Kocio, Kostmo,
Kri, MP, Mandra Oleka, Martin Kraus, Michael Hardy, Mrwojo, N4nojohn, Nayuki, Olivier, Pne, Poccil, RDBury, RJHall, Rainwarrior, Rjwilmsi, RoyalFool, Russl5445, SMC, Scepia,
SchuminWeb, Sct72, Shoaibsaikat, Shyland, SiegeLord, Solon.KR, The Anome, Thenickdude, Thumperward, Wxidea, Yzmo, Z10x, Zom-B, Zundark, 47 anonymous edits
Graphics pipeline Source: http://en.wikipedia.org/w/index.php?oldid=590467578 Contributors: Arnero, Badduri, Bakkster Man, Banazir, BenFrantzDale, CesarB, ChopMonkey, Eric Lengyel,
EricR, Fernvale, Flamurai, Fmshot, Frap, Gogo Dodo, Guptan, Hans Dunkelberg, Harryboyles, Hellisp, Hlovdal, Hymek, Jamesrnorwood, KevR44, Ldo, MIT Trekkie, Mackseem, Marvin
Monroe, MaxDZ8, Moe Epsilon, Mr.Unknown, Naraht, Piotrus, Posix memalign, Remag Kee, Reyk, Ricky81682, Rilak, Salam32, Seasage, Sfingram, Spelunkenbuker, Stilgar, Tank00,
TeeTylerToe, TutterMouse, TuukkaH, UKER, Woohookitty, Yan Kuligin, Yousou, 81 anonymous edits
Hidden line removal Source: http://en.wikipedia.org/w/index.php?oldid=574813133 Contributors: Andreas Kaufmann, Bobber0001, CesarB, Chrisjohnson, Grutness, Koozedine, Kylemcinnes,
Ldo, Magioladitis, MelbourneStar, MrMambo, Nayuki, Oleg Alexandrov, Pmaillot, RJHall, Resurgent insurgent, Shenme, Thumperward, Wheger, 16 anonymous edits
Hidden surface determination Source: http://en.wikipedia.org/w/index.php?oldid=581958030 Contributors: Altenmann, Alvis, Arnero, B4hand, Bill william compton, CanisRufus, Cbraga,
Christian Lassure, CoJaBo, Connelly, David Levy, Dougher, Everyking, Flamurai, Fredrik, Grafen, Graphicsguy, J04n, Jarry1250, Jleedev, Jmorkel, Jonomillin, Kostmo, LOL, LPGhatguy, Ldo,
LokiClock, Marasmusine, MattGiuca, Michael Hardy, Nahum Reduta, Philip Trueman, RJHall, Radagast83, Randy Kryn, Remag Kee, Robofish, Second Skin, Sg0826, Shenme, Spectralist, Ssd,
SteveORN, Tfpsly, Thiseye, Toussaint, Vendettax, Waldir, Walter bz, Wavelength, Wknight94, Wmahan, Wolfkeeper, 58 anonymous edits
High dynamic range rendering Source: http://en.wikipedia.org/w/index.php?oldid=547424386 Contributors: 25, Abdull, Ahruman, Allister MacLeod, Anakin101, Appraiser, Art LaPella,
Axem Titanium, Ayavaron, BIS Ondrej, Baddog121390, Betacommand, Betauser, Bongomatic, Calidarien, Cambrant, CesarB, ChrisGualtieri, Christoph hausner, Christopherlin, Ck lostsword,
Coldpower27, CommonsDelinker, Credema, Cronos Dage, Crummy, CyberSkull, Cynthia Sue Larson, DH85868993, DabMachine, Darkuranium, Darxus, David Eppstein, Djayjp, Dmmaus,
Drat, Drawn Some, Dreish, Dwarden, Eekerz, Ehn, Elmindreda, Entirety, Eptin, Evanreyes, Eyrian, FA010S, Falcon9x5, Frap, Gamer007, Gracefool, Hdu hh, Hibana, Holek, Hu12, I need a
name, Imroy, Infinity Wasted, Intgr, J.delanoy, Jack Daniels BBQ Sauce, Jason Quinn, Jengelh, JigPu, Johannes re, JojoMojo, JorisvS, Joy, Jyuudaime, Kaotika, Karam.Anthony.K,
Karlhendrikse, Katana314, Kelly Martin, King Bob324, Kocur, Korpal28, Kotofei, Krawczyk, Kungfujoe, Ldo, Leaviathan, Legionaire45, Marcika, Martyx, MattGiuca, Mboverload, Mdd4696,
Mika1h, Mikmac1, Mindmatrix, Morio, Mortense, Museerouge, NCurse, NOrbeck, Nastyman9, NoSoftwarePatents, NulNul, Nxavar, Oni Ookami Alfador, PatheticCopyEditor, PhilMorton,
Pkaulf, Pmanderson, Pmsyyz, PonyToast, Pqnd Render, Qutezuce, RG2, Redvers, Rich Farmbrough, Rjwilmsi, Robert K S, RoyBoy, Rror, Sam Hocevar, Shademe, ShuffyIosys, Sikon, Simeon,
Simetrical, Siotha, SkyWalker, Slavik262, Slicing, Snkcube, Srittau, Starfox, Starkiller88, Suruena, TJRC, Taw, Technical 13, ThaddeusB, The Negotiator, ThefirstM,
Thequickbrownfoxjumpsoveralazydog, Thewebb, Tiddly Tom, Tijfo098, Tomlee2010, Tony1, Unico master 15, Unmitigated Success, Vendettax, Vladimirovich, Wester547, X201, XMog,
Xabora, XanthoNub, Xanzzibar, XenoL-Type, Xompanthy, ZS, Zarex, Zr40, Zvar, , 377 anonymous edits
Image-based lighting Source: http://en.wikipedia.org/w/index.php?oldid=569215258 Contributors: Beland, Bl4ckd0g, Blakegripling ph, Bobo192, Chaoticbob, Dreamdra, Eekerz, Justinc, Kri,
Michael Hardy, Pearle, Qutezuce, Rainjam, Redress perhaps, Rogerb67, Rror, Slicedpan, TokyoJunkie, Wuz, 19 anonymous edits
Image plane Source: http://en.wikipedia.org/w/index.php?oldid=543995208 Contributors: BenFrantzDale, CesarB, Michael C Price, RJHall, Reedbeta, TheParanoidOne, 1 anonymous edits
Irregular Z-buffer Source: http://en.wikipedia.org/w/index.php?oldid=574813152 Contributors: Chris the speller, DabMachine, DavidHOzAu, Diego Moya, Fooberman, Karam.Anthony.K,
Ldo, Mblumber, Shaericell, SlipperyHippo, ThinkingInBinary, ToolmakerSteve, 8 anonymous edits
Isosurface Source: http://en.wikipedia.org/w/index.php?oldid=573152633 Contributors: Ariadacapo, Banus, Brad7777, CALR, Charles Matthews, Dergrosse, George100, Khalid hassani, Kku,
Kri, Michael Hardy, Onna, Ospalh, RJHall, RedWolf, Rudolf.hellmuth, Sam Hocevar, StoatBringer, Taw, The demiurge, Thurth, Tijfo098, TortoiseWrath, 8 anonymous edits
Lambert's cosine law Source: http://en.wikipedia.org/w/index.php?oldid=585423819 Contributors: AvicAWB, AxelBoldt, Ben Moore, BenFrantzDale, Berean Hunter, Cellocgw, Charles
Matthews, Choster, Css, Dbenbenn, Deuar, Dufbug Deropa, Escientist, Gene Nygaard, GianniG46, Helicopter34234, Hhhippo, HiraV, Hugh Hudson, Inductiveload, Jcaruth123, Kri, Linas,
Magioladitis, Marcosaedro, Michael Hardy, Mpfiz, Oleg Alexandrov, OptoDave, Owen, PAR, Papa November, Patrick, Pflatau, Q Science, RDBury, RJHall, Radagast83, Ramjar, Robobix,
Scolobb, Seth Ilys, Srleffler, Telfordbuck, The wub, ThePI, Thorseth, Tomruen, Tpholm, 35 anonymous edits
Lambertian reflectance Source: http://en.wikipedia.org/w/index.php?oldid=555495242 Contributors: Adoniscik, AnOddName, Bautze, BenFrantzDale, DMG413, Deuar, Dufbug Deropa,
Eekerz, Fefeheart, GianniG46, Girolamo Savonarola, Jtsiomb, KYN, Kri, Littlecruiser, Marc omorain, Martin Kraus, PAR, Pedrose, Pflatau, Radagast83, Sanddune777, Seabhcan, Shadowsill,
SirSeal, Srleffler, Thumperward, Venkat.vasanthi, Xavexgoem, , 21 anonymous edits
Level of detail Source: http://en.wikipedia.org/w/index.php?oldid=584378431 Contributors: ABF, ALoopingIcon, Adzinok, Azylber, Ben467, Bjdehut, Bluemoose, Bobber0001, Chris
Chittleborough, ChuckNorrisPwnedYou, David Levy, Deepomega, Drat, Edward, Epicgenius, Furrykef, GreatWhiteNortherner, Guy.hubert, Hillbillyholiday, IWantMonobookSkin,
InternetMeme, Joaquin008, Jtalledo, Ldo, MaxDZ8, Megapixie, Mikeblas, Pinkadelica, Rjwilmsi, Runtime, SchreiberBike, Sterrys, Three1415, ToolmakerSteve, TowerDragon, Wknight94, ZS,
39 anonymous edits
Mipmap Source: http://en.wikipedia.org/w/index.php?oldid=591368787 Contributors: Alksub, Andreas Kaufmann, Andrewpmk, Anss123, Arnero, Barticus88, Bongomatic, Bookandcoffee,
Brainy J, Brocklebjorn, Camille-goudeseune, Captain Conundrum, Choephix, Dshneeb, Eekerz, Exe, Eyreland, Goodone121, Grendelkhan, Henke37, Hooperbloob, Hotlorp, Jamelan, Kerrick
Staley, Knacker ITA, Knight666, Kri, Kricke, LarsPensjo, Lowellian, MIT Trekkie, MarylandArtLover, Mat-C, Mblumber, Mdockrey, Michael Hardy, Mikachu42, Moonbug2, Myaushka,
Nbarth, Norro, OlEnglish, Phorgan1, Pnm, Qwertyus, RJHall, Scfencer, Sixunhuit, Spoon!, StarkNebula, TRS-80, Tarquin, Theoh, Tmcw, Tribaal, VMS Mosaic, Valarauka, XDeltaA77,
Xmnemonic, 58 anonymous edits
Newell's algorithm Source: http://en.wikipedia.org/w/index.php?oldid=570372988 Contributors: Andreas Kaufmann, Charles Matthews, David Eppstein, Farley13, KnightRider, Komap,
RockMagnetist, TowerOfBricks, 6 anonymous edits
Non-uniform rational B-spline Source: http://en.wikipedia.org/w/index.php?oldid=593125108 Contributors: *drew, ALoopingIcon, Ahellwig, Alan Parmenter, Alanbly, Alansohn, AlphaPyro,
Andreas Kaufmann, Angela, Apparition11, Ati3414, BAxelrod, BMF81, Barracoon, BenFrantzDale, Berland, Buddelkiste, C0nanPayne, Cgbuff, Cgs, Commander Keane, Crahul, DMahalko,
Dallben, Developer, Dhatfield, Dmmd123, Doradus, DoriSmith, Ensign beedrill, Eric Demers, Ettrig, FF2010, Forderud, Fredrik, Freeformer, Furrykef, Gargoyle888, Gea, Graue, Greg L,
Happyrabbit, Hasanisawi, Hazir, HugoJacques1, HuyS3, Ian Pitchford, Ihope127, Iltseng, J04n, JFPresti, JJC1138, JohnBlackburne, Jusdafax, Kaldari, Karlhendrikse, Khunglongcon,
KoenDelaere, LeTrebuchet, Lzur, Maccarthaigh d, Malarame, Mardson, MarmotteNZ, Matthijs, Mauritsmaartendejong, Maury Markowitz, Meungkim, Michael Hardy, Migilik, NPowerSoftware,
Nedaim, Neostarbuck, Newbiepedian, Nichalp, Nick, Nick Pisarro, Jr., Nijun, Nintend06, Oleg Alexandrov, Orborde, Oxymoron83, Palapa, Parametric66, Pashute, Peter M Gerdes, Pgimeno,
Puchiko, Purwar, Quinacrine, Qutezuce, Radical Mallard, Rasmus Faber, Rconan, Reelrt, Regenwolke, Rfc1394, Ronz, Roundaboutyes, Sedimin, Skrapion, SlowJEEP, SmilingRob, Speck-Made,
Spitfire19, Stefano.anzellotti, Stewartadcock, Strangnet, Sukesh pabba, Taejo, Tamfang, The Anome, Toolnut, Tsa1093, Uwe rossbacher, VitruV07, Vladsinger, Whaa?, WulfTheSaxon, Xcoil,
Xmnemonic, Yahastu, Yousou, ZeroOne, Zoodinger.Dreyfus, Zootalures, , 212 anonymous edits
Normal Source: http://en.wikipedia.org/w/index.php?oldid=581278108 Contributors: 16@r, 4C, Aboalbiss, Abrech, Aquishix, Arcfrk, BenFrantzDale, Chris Howard, ChrisGualtieri, D.Lazard,
Daniele.tampieri, Dori, Dysprosia, Editsalot, Elembis, Epolk, Excirial, Fgnievinski, Fixentries, Frecklefoot, Furrykef, Gene Nygaard, Giftlite, Hakeem.gadi, Herbee, Ilya Voyager, InternetMeme,
JasonAD, JohnBlackburne, JonathanHudgins, Jorge Stolfi, Joseph Myers, KSmrq, Kostmo, Kushal one, LOL, Lunch, Madmath789, Michael Hardy, ObscureAuthor, Oleg Alexandrov,
Olegalexandrov, Paolo.dL, Patrick, Paulheath, Pazouzou, Pooven, Quanda, Quondum, R'n'B, RDBury, RJHall, RevenDS, Serpent's Choice, Skytopia, Smessing, Squash, Sterrys, Subhash15,
Takomat, Vkpd11, Zvika, 52 anonymous edits

Normal mapping Source: http://en.wikipedia.org/w/index.php?oldid=589319584 Contributors: ACSE, ALoopingIcon, Ahoerstemeier, AlistairMcMillan, Alphathon, Andrewpmk, Ar-wiki,
Bronchial, Bryan Seecrets, Cmsjustin, CobbSalad, Comet Tuttle, CryptoDerk, Deepomega, Digitalntburn, Dionyziz, Dysprosia, EconomicsGuy, Eekerz, EmmetCaulfield, Engwar, Everyking,
Frecklefoot, Fredrik, Furrykef, Game-Guru999, Golan2781, Grauw, Green meklar, Gregb, Haakon, Heliocentric, Incady, Irrevenant, Jamelan, Jason One, Jean-Frédéric, Jon914, JonathanHudgins,
JorisvS, Julian Herzog, K1Bond007, Kaneiderdaniel, KlappCK, Liman3D, Lord Crc, MarkPNeyer, Maximus Rex, Nahum Reduta, OlEnglish, Olanom, Pak21, Paolo.dL, R'n'B, RJHall,
ReconTanto, Redquark, Rich Farmbrough, SJP, Salam32, Scott5114, Sdornan, SkyWalker, Sorry--Really, Sterrys, SuperMidget, T-tus, TDogg310, Talcos, The Anome, The Hokkaido Crow,
TheHappyFriar, Tim1357, Tommstein, TwelveBaud, Unused000702, VBrent, Versatranitsonlywaytofly, Wikster E, X201, Xavexgoem, Xmnemonic, Yaninass2, 153 anonymous edits
Oren–Nayar reflectance model Source: http://en.wikipedia.org/w/index.php?oldid=590562537 Contributors: Arch dude, Artaxiad, Bautze, CodedAperture, Compvis, DemocraticLuntz,
Dhatfield, Dicklyon, Divya99, Eekerz, Eheitz, GianniG46, Ichiro Kikuchi, JeffBobFrank, Jwgu, Kri, Martin Kraus, Meekohi, ProyZ, R'n'B, Srleffler, StevenVerstoep, Woohookitty, Yoshi503,
Zoroastrama100, 25 anonymous edits
Painter's algorithm Source: http://en.wikipedia.org/w/index.php?oldid=580518767 Contributors: 16@r, Andreas Kaufmann, BlastOButter42, Bryan Derksen, Cgbuff, EoGuy, Fabiob,
Farley13, Feezo, Finell, Finlay McWalter, Fredrik, Frietjes, Hhanke, Jaberwocky6669, Jmabel, JohnBlackburne, KnightRider, Komap, Mickoush, Norm, Ordoon, PRMerkley, Phyte, RJHall,
RadRafe, Rainwarrior, Rasmus Faber, Reedbeta, Rufous, Shai-kun, Shanes, Sreifa01, Sterrys, SteveBaker, Sverdrup, WISo, Whatsthatcomingoverthehill, Zapyon, 26 anonymous edits
Parallax mapping Source: http://en.wikipedia.org/w/index.php?oldid=562979283 Contributors: ALoopingIcon, Aorwind, Bryan Seecrets, CadeFr, Charles Matthews, Cmprince, CyberSkull,
Eekerz, Fama Clamosa, Fancypants09, Fractal3, Gustavocarra, Hideya, Imroy, J5689, Jdcooper, Jitse Niesen, JonH, Kenchikuben, Lemonv1, MaxDZ8, Mjharrison, Novusspero, Nxavar, Oleg
Alexandrov, Peter.Hozak, Qutezuce, RJHall, Rainwarrior, Rich Farmbrough, Rominf, Scepia, Seth.illgard, SkyWalker, SpunkyBob, Sterrys, Strangerunbidden, TKD, Thepcnerd, Tommstein,
Vacapuer, Xavexgoem, XenoL-Type, 44 anonymous edits
Particle system Source: http://en.wikipedia.org/w/index.php?oldid=580492666 Contributors: Aliakakis, Ashlux, Athlord, Baron305, Bjrn, CanisRufus, Charles Matthews, Chris the speller,
Darthuggla, Deadlydog, Deodar, Eekerz, Ferdzee, Fractal3, Furrykef, Gamer3D, Gracefool, Halixi72, Jay1279, Jpbowen, Jtsiomb, Ketiltrout, Kibibu, Krizas, Lesser Cartographies, LilHelpa,
MarSch, MrOllie, Mrwojo, Onebyone, Oxfordwang, Philip Trueman, Rocketrod1960, Rror, Salvidrim!, Sameboat, Schmiteye, SchreiberBike, ScottDavis, SethTisue, Shanedidona, Sideris,
Sterrys, SteveBaker, Sun Creator, The Merciful, Thesalus, Tjmax99, Vegaswikian, Zzuuzz, 78 anonymous edits
Path tracing Source: http://en.wikipedia.org/w/index.php?oldid=592120919 Contributors: Abstracte, Annabel, BaiLong, Bmearns, DennyColt, Elektron, Frap, Hmira, Icairns, Iceglow, Incnis
Mrsi, Jonon, Keepscases, Kri, M-le-mot-dit, Markluffel, Mmernex, Mrwojo, NeD80, Olawlor, Paroswiki, PetrVevoda, Phil Boswell, Pol098, Psior, Qutorial, RJHall, Srleffler, Steve Quinn,
Tamfang, Tecknoize, 48 anonymous edits
Per-pixel lighting Source: http://en.wikipedia.org/w/index.php?oldid=544119696 Contributors: Alphonze, Altenmann, BMacZero, David Wahler, Eekerz, EoGuy, Jheriko, Mblumber,
Mishal153, 8 anonymous edits
Phong reflection model Source: http://en.wikipedia.org/w/index.php?oldid=592788748 Contributors: Acdx, Aparecki, Bdean42, Bignoter, Bilalbinrais, Connelly, Csl77, Dawnseekker2000,
Dicklyon, EmreDuran, Gargaj, Headbomb, Jengelh, Jonathan Watt, Kri, Martin Kraus, Mdd, Michael Hardy, Mzervos, Nicola.Manini, Nixdorf, RJHall, Rainwarrior, Srleffler, Tabletop, The
Anome, Theseus314, TimBentley, TomGu, Wfaulk, 31 anonymous edits
Phong shading Source: http://en.wikipedia.org/w/index.php?oldid=565585331 Contributors: ALoopingIcon, Abhorsen327, Alexsh, Alvin Seville, Andreas Kaufmann, Asiananimal, Auntof6,
Bautze, Bignoter, BluesD, CALR, ChristosIET, Ciphers, Connelly, Csl77, Dhatfield, Dicklyon, Djexplo, Eekerz, Everyking, Eyreland, Gamkiller, Gargaj, GianniG46, Giftlite, Gogodidi,
Gwen-chan, Hairy Dude, Heavyrain2408, Hymek, Instantaneous, Jamelan, Jaymzcd, Jedi2155, Karada, Kleister32, Kotasik, Kri, Litherum, Loisel, Martin Kraus, Martin451, Mdebets, Michael
Hardy, Mild Bill Hiccup, N2e, Pinethicket, Preator1, RJHall, Rainwarrior, Rjwilmsi, Sigfpe, Sin-man, Sorcerer86pt, Spoon!, Srleffler, StaticGull, T-tus, Thddo, Tschis, TwoOneTwo, WikHead,
Wrightbus, Xavexgoem, Z10x, Zundark, 70 anonymous edits
Photon mapping Source: http://en.wikipedia.org/w/index.php?oldid=566664398 Contributors: Arabani, Arnero, Astronautics, Brlcad, CentrallyPlannedEconomy, Chas zzz brown,
CheesyPuffs144, Colorgas, Curps, Ewulp, Exvion, Fastily, Favonian, Flamurai, Fnielsen, Fuzzypeg, GDallimore, J04n, Jimmi Hugh, Kri, Ldo, LeCire, MichaelGensheimer, Nilmerg, Owen,
Oyd11, Patrick, Phrood, RJHall, Rkeene0517, Strattonbrazil, T-tus, Tesi1700, Thesalus, Tobias Bergemann, Wapcaplet, XDanielx, Xcelerate, 43 anonymous edits
Polygon Source: http://en.wikipedia.org/w/index.php?oldid=576263896 Contributors: Arnero, BlazeHedgehog, CALR, David Levy, Diego Moya, Forderud, Iceman444k, J04n, Jagged 85,
Mardus, Michael Hardy, Navstar, Pietaster, RJHall, Reedbeta, SimonP, 3 anonymous edits
Potentially visible set Source: http://en.wikipedia.org/w/index.php?oldid=574813182 Contributors: AManWithNoPlan, Chris the speller, Dlegland, Graphicsguy, Gwking, Kri, Ldo,
Lordmetroid, NeD80, Ohconfucius, WastedMeerkat, Weevil, Ybungalobill, 9 anonymous edits
Precomputed Radiance Transfer Source: http://en.wikipedia.org/w/index.php?oldid=589319599 Contributors: Abstracte, Colonies Chris, Deodar, Fanra, Imroy, Red Act, SteveBaker,
Tesi1700, Tim1357, WhiteMouseGary, 7 anonymous edits
Procedural generation Source: http://en.wikipedia.org/w/index.php?oldid=592225627 Contributors: -OOPSIE-, 2over0, ALoopingIcon, Amnesiasoft, Anetode, Arnoox, Ashley Y, Axem
Titanium, Bkell, Blacklemon67, CRGreathouse, Caerbannog, Cambrant, Carl67lp, Chaos5023, ChopMonkey, Chris TC01, ChrisGualtieri, Cjc13, Computer5t, CyberSkull, D.brodale,
DabMachine, Dadomusic, Damian Yerrick, Davidhorman, Delirium, Denis C., Devil Master, DeylenK, DirectXMan, Disavian, Dismas, Distantbody, Dkastner, Doctor Computer, DreamGuy,
Edtion, Eekerz, Eoseth, Eridani, EverGreg, Exe, FatPope, FattusMannus, Feydey, Finlay McWalter, Fippy Darkpaw, Fratrep, Fredrik, Furrykef, Fusible, Geh, GingaNinja, Gjamesnvda,
GoldenCrescent, GregorB, HappyVR, Hedja, Hervegirod, Holothurion, Iain marcuson, Ihope127, Inthoforo, IronMaidenRocks, JAF1970, Jacj, Jackoz, Jacoplane, Jarble, Jerc1, Jessmartin,
Jim1138, Jontintinjordan, Julian, Kbolino, Keavon, Keio, KenAdamsNSA, Khazar, Kuguar03, Kungpao, KyleDantarin, Lapinmies, LeftClicker, Len Raymond, Licu, Lightmouse, Longhan2009,
Lozzaaa, Lupin, MadScientistVX, Maestro814, Mallow40, Marasmusine, Martarius, MaxDZ8, Megasquid500, Mikeyryanx, Moskvax, Mr. Gonna Change My Name Forever, Mujtaba1998,
Nikai, Nils, Novous, Nuggetboy, Oliverkroll, One-dimensional Tangent, Oranjelo100, Pace212, Penguin, Philwelch, PhycoFalcon, Poss, Praetor alpha, Quicksilvre, Quuxplusone, RCX,
Rayofash, Retro junkie, Richlv, Rjwilmsi, Robin S, Rogerd, Rossumcapek, Ryuukuro, Saxifrage, Schmiddtchen, SeymourTheLlama, SharkD, Shashank Shekhar, Shinyary2, Simeon, Slippyd,
Spiderboy, Spoonboy42, Stevegallery, Svea Kollavainen, Taral, Technoutopian, Terminator484, The former 134.250.72.176, TheBronzeMex, ThomasHarte, Thunderbrand, Tlogmer, Torchiest,
TravisMunson1993, Trevc63, Tstexture, Valaggar, Virek, Virt, Viznut, Whqitsm, Wickethewok, WikiPile, XeonXT, Xobxela, Xxcom9a, Yannakakis, Yintan, Ysangkok, Zemoxian, Zvar, 237
anonymous edits
Procedural texture Source: http://en.wikipedia.org/w/index.php?oldid=591287443 Contributors: Altenmann, Besieged, CapitalR, Cargoking, D6, Dhatfield, Eflouret, Foolscreen, Gadfium,
Geh, Gurch, IndigoMertel, Jacoplane, Joeybuddy96, Ken md, MaxDZ8, Michael Hardy, MoogleDan, Nezbie, Ntfs.hard, PaulBoxley, Petalochilus, RhinosoRoss, Spark, Thparkth, TimBentley,
Viznut, Volfy, Wikedit, Wragge, Zundark, 22 anonymous edits
3D projection Source: http://en.wikipedia.org/w/index.php?oldid=589766195 Contributors: AManWithNoPlan, Aekquy, Akilaa, Akulo, Alfio, Allefant, Altenmann, Angela, Aniboy2000,
Baudway, BenFrantzDale, Berland, Bgwhite, Bloodshedder, Bobbygao, BrainFRZ, Bunyk, Canthusus, Charles Matthews, Cholling, Chris the speller, Ckatz, Cpl Syx, Ctachme, Cyp, Datadelay,
Davidhorman, Deom, Dhatfield, Dratman, Ego White Tray, Flamurai, Froth, Furrykef, Gamer Eek, Giftlite, Heymid, Ieay4a, Jaredwf, Jovianconflict, Kevmitch, Lincher, Luckyherb, Marco Polo,
Martarius, MathsIsFun, Mdd, Michael Hardy, Michaelbarreto, Miym, Mrwojo, Nbarth, Oleg Alexandrov, Omegatron, Paolo.dL, Patrick, Pearle, PhilKnight, Pickypickywiki, Plowboylifestyle,
PsychoAlienDog, Que, R'n'B, RJHall, Rabiee, Raven in Orbit, Remi0o, RenniePet, Rjwilmsi, RossA, Sandeman684, Sboehringer, Schneelocke, Seet82, SharkD, Sietse Snel, Skytiger2, Speshall,
Stephan Leeds, Stestagg, Tamfang, Technopat, TimBentley, Trappist the monk, Tristanreid, Twillisjr, Tyler, Unigfjkl, Van helsing, Vgergo, Waldir, Widr, Zanaq, 111 anonymous edits
Quaternions and spatial rotation Source: http://en.wikipedia.org/w/index.php?oldid=592935342 Contributors: AeronBuchanan, Albmont, Ananthsaran7, ArnoldReinhold, AxelBoldt, BD2412,
Ben pcc, BenFrantzDale, BenRG, Bgwhite, Bjones410, Bmju, Brews ohare, Bulee, CALR, Catskul, Ceyockey, Chadernook, Charles Matthews, CheesyPuffs144, Count Truthstein, Cyp, Daniel
Brockman, Daniel.villegas, Darkbane, David Eppstein, Davidjholden, Denevans, Depakote, Dionyziz, Dl2000, Download, Ebelular, Edward, Endomorphic, Enosch, Eregli bob, Eugene-elgato,
Fgnievinski, Fish-Face, Forderud, ForrestVoight, Fropuff, Fyrael, Gaius Cornelius, GangofOne, Genedial, Giftlite, Gj7, Gonz3d, Gutza, HenryHRich, Hyacinth, Ig0r, Incnis Mrsi, J04n, Janek
Kozicki, Jemebius, Jermcb, Jheald, Jitse Niesen, JohnBlackburne, JohnPritchard, JohnnyMrNinja, Josh Triplett, Joydeep.biswas, KSmrq, Kborer, Kordas, Lambiam, LeandraVicci, Lemontea,
Light current, Linas, Lkesteloot, Looxix, Lotu, Lourakis, LuisIbanez, Maksim343, ManoaChild, Markus Kuhn, MathsPoetry, Michael C Price, Michael Hardy, Mike Stramba, Mild Bill Hiccup,
Mtschoen, Nayuki, Oleg Alexandrov, Onlinetexts, PAR, Paddy3118, Paolo.dL, Patrick, Patrick Gill, Patsuloi, PiBVi, Ploncomi, Pt, Quondum, RJHall, Rainwarrior, Randallbsmith, Reddi,
Reddwarf2956, Rgdboer, Robinh, Ruffling, RzR, Samuel Huang, Sebsch, Short Circuit, Sigmundur, SlavMFM, Soler97, TLKeller, Tamfang, Terry Bollinger, Timo Honkasalo, Tkuvho,
TobyNorris, User A1, WVhybrid, Wa03, WaysToEscape, X-Fi6, Yoderj, Zhw, Zundark, 224 anonymous edits
Radiosity Source: http://en.wikipedia.org/w/index.php?oldid=588777015 Contributors: 63.224.100.xxx, ALoopingIcon, Abstracte, Angela, Ani Esayan, Bevo, Bgwhite, CambridgeBayWeather,
Cappie2000, ChrisGualtieri, Chrisjldoran, Cjmccormack, Conversion script, CoolKoon, Cspiel, DaBler, Dhatfield, DrFluxus, Favonian, Furrykef, GDallimore, Inquam, InternetMeme, Jdpipe,
Jheald, JzG, Klparrot, Kostmo, Kri, Kshipley, Livajo, Lucio Di Madaura, Luna Santin, M0llusk, Mark viking, Melligem, Michael Hardy, Mintleaf, Misterdemo, Nayuki, Ohconfucius, Oliphaunt,
Osmaker, Philnolan3d, Pnm, PseudoSudo, Qutezuce, RJHall, Reedbeta, Reinyday, Rocketmagnet, Ryulong, Sallymander, SeanAhern, Siker, Sintaku, Snorbaard, Soumyasch, Splintercellguy,

Ssppbub, Thrapper, Thue, Tomalak geretkal, Tomruen, Trevorgoodchild, Uriyan, Vision3001, Visionguru, VitruV07, Waldir, Wapcaplet, Wernermarius, Wile E. Heresiarch, Yrithinnd, Yrodro,
, 79 anonymous edits
Ray casting Source: http://en.wikipedia.org/w/index.php?oldid=569524952 Contributors: *Kat*, AnAj, Angela, Anticipation of a New Lover's Arrival, The, Astronautics, Barticus88, Brazucs,
Cgbuff, D, Damian Yerrick, DaveGorman, David Eppstein, Djanvk, DocumentN, Dogaroon, DrewNoakes, Eddynumbers, Eigenlambda, EricsonWillians, Ext9, Exvion, Finlay McWalter,
Firsfron, Garde, Gargaj, Geekrecon, GeorgeLouis, HarisM, Hetar, Iamhove, Iridescent, J04n, Jagged 85, JamesBurns, Jlittlet, Jodi.a.schneider, Kayamon, Kcdot, Korodzik, Kris Schnee, LOL,
Lozzaaa, MeanMotherJr, Mikhajist, Modster, NeD80, Ortzinator, Pinbucket, RJHall, Ravn, Reedbeta, Rich Farmbrough, RzR, Tesi1700, TheBilly, ThomasHarte, TimBentley, Tjansen, Verne
Equinox, WmRowan, Wolfkeeper, Yksyksyks, 48 anonymous edits
Ray tracing Source: http://en.wikipedia.org/w/index.php?oldid=589687998 Contributors: 0x394a74, 8ty3hree, Abmac, Abstracte, Al Hart, Alanbly, Altenmann, Andreas Kaufmann, Anetode,
Anonymous the Editor, Anteru, Arnero, ArnoldReinhold, Arthena, Badgerlovestumbler, Bdoserror, Benindigo, Bjorke, Blueshade, Brion VIBBER, Brlcad, C0nanPayne, Cadience, Caesar,
Camtomlee, Carrionluggage, Cdecoro, Chellmuth, Chrislk02, Claygate, Coastline, CobbSalad, ColinSSX, Colinmaharaj, Conversion script, Cowpip, Cozdas, Cybercobra, D V S,
Daniel.Cardenas, Darathin, Davepape, Davidhorman, Delicious carbuncle, Deltabeignet, Deon, Devendermishra, Dhatfield, Dhilvert, Diannaa, Dicklyon, Dllu, Domsau2, DrBob, Ed g2s,
Elizium23, Erich666, Etimbo, FatalError, Femto, Fgnievinski, ForrestVoight, Fountains of Bryn Mawr, Fph, Furrykef, GDallimore, GGGregory, Geekrecon, Gidoca, Giftlite, Gioto, Gjlebbink,
Gmaxwell, GoingBatty, Goodone121, Graphicsguy, Greg L, GregorB, Gregwhitfield, H2oski2liv, Henrikb4, Hertz1888, Hetar, Hugh2414, Imroy, Ingolfson, InternetMeme, Iskander HFC,
Ixfd64, Japsu, Jawed, Jdh30, Jeancolasp, Jesin, Jim.belk, Jj137, Jleedev, Jodi.a.schneider, Joke137, JonesMI, Jpkoester1, Juhame, Jumping cheese, K.brewster, Kolibri, Kri, Ku7485,
KungfuJoe1110, Kvng, Lasneyx, Lclacer, Levork, Luke490, Lumrs, Lupo, Marc saint ourens, Martarius, Mate2code, Mattbrundage, Michael Hardy, Mikiemike, Mimigu, Minghong,
MoritzMoeller, Mosquitopsu, Mun206, Nerd65536, Niky cz, NimoTh, Nneonneo, Nohat, O18, OnionKnight, Osmaker, Paolo.dL, Patrick, Paulexyn0, Penubag, Pflatau, Phresnel, Phrood,
Pinbucket, Pjvpjv, Pmsyyz, Powerslide, Priceman86, Qef, R.cabus, RDBury, RJHall, Randomblue, Ravn, Rcronk, Reedbeta, Regenspaziergang, Requestion, Rich Farmbrough, RubyQ, Rusty432,
Ryan Postlethwaite, Ryan Roos, Samjameshall, Samuelalang, Scs, Sebastian.mach, SebastianHelm, Shen, Simeon, Sir Lothar, Skadge, SkyWalker, Slady, Soler97, Solphusion, Soumyasch, Spiff,
Srleffler, Stannered, Stevertigo, TakingUpSpace, Tamfang, Taral, The Anome, The machine512, TheRealFennShysa, Themunkee, Thumperward, Timo Honkasalo, Timrb, Tired time, ToastieIL,
Tom Morris, Tom-, ToolmakerSteve, Toxygen, Tuomari, Ubardak, Uvainio, VBGFscJUn3, Vanished user 342562, Vendettax, Versatranitsonlywaytofly, Vette92, VitruV07, Viznut, Voidxor,
Wapcaplet, Washboardplayer, Wavelength, Whosasking, WikiWriteyWeb, Wikiedit555, Wrayal, Yonaa, Zeno333, Zfr, , 300 anonymous edits
Reflection Source: http://en.wikipedia.org/w/index.php?oldid=541710956 Contributors: Al Hart, Chris the speller, Dbolton, Dhatfield, Epbr123, Hom sepanta, Jeodesic, Kri, M-le-mot-dit,
PowerSerj, Remag Kee, Rich Farmbrough, Siddhant, Simeon, Srleffler, 5 anonymous edits
Reflection mapping Source: http://en.wikipedia.org/w/index.php?oldid=583307148 Contributors: ALoopingIcon, Abdull, Anaxial, Bryan Seecrets, C+C, CosineKitty, Davidhorman, Fckckark,
Ferkel, Freeformer, Gaius Cornelius, GrammarHammer 32, IronGargoyle, J04n, Jogloran, Ldo, M-le-mot-dit, MaxDZ8, Paddles, Paolo.dL, Qutezuce, Redquark, Shashank Shekhar, Skorp, Smjg,
Srleffler, Sterrys, SteveBaker, Tkgd2007, TokyoJunkie, Vossanova, Wizard191, Woohookitty, Yworo, 41 anonymous edits
Relief mapping Source: http://en.wikipedia.org/w/index.php?oldid=570483454 Contributors: ALoopingIcon, D6, Dionyziz, Editsalot, Eep, JonH, Korg, M-le-mot-dit, PeterRander,
PianoSpleen, Qwyrxian, R'n'B, Scottc1988, Searchme, Simeon, Sirus20x6, Starkiller88, Vitorpamplona, Zyichen, 17 anonymous edits
Render Output unit Source: http://en.wikipedia.org/w/index.php?oldid=559302771 Contributors: Accord, Arch dude, Dicklyon, Erik Streb, Exp HP, Fernvale, Imzjustplayin, Ldo, MaxDZ8,
Paolo.dL, Qutezuce, Shandris, Swaaye, TEXHNK77, Trevyn, TrinitronX, UKER, 11 anonymous edits
Rendering Source: http://en.wikipedia.org/w/index.php?oldid=590292617 Contributors: 16@r, ALoopingIcon, AVM, Aaronh, Achraf52, Adailide, AdventurousSquirrel, Ahy1, Al Hart,
Alanbly, Altenmann, Alvin Seville, AnnaFrance, Asephei, AxelBoldt, Azunda, Ben Ben, Benbread, Benchaz, Bendman, Bjorke, Blainster, Bob2435643, Boing! said Zebedee, Bpescod, Bryan
Derksen, Cgbuff, Chalst, Charles Matthews, Chris the speller, CliffC, Cmdrjameson, Conversion script, Corti, Crahul, DVdm, Das-g, Dave Law, David C, Davidgumberg, Dedeche, Deli nk,
Dhatfield, Dhilvert, Dicklyon, Doradus, Doubleyouyou, Downwards, Dpc01, Dsimic, Dutch15, Dzhim, Ed g2s, Edcolins, Eekerz, Eflouret, Egarduno, Erudecorp, Favonian, FleetCommand,
Fm2006, Frango com Nata, Fredrik, Fuhghettaboutit, Funnylemon, Gamer3D, Gary King, GeorgeBills, GeorgeLouis, Germancorredorp, Gku, Gordmoo, Gothmog.es, Graham87, Graue, Gkhan,
HarisM, Harshavsn, Howcheng, Hu, Hu12, Hxa7241, Imroy, Indon, Interiot, Iskander HFC, Janke, Jaraalbe, Jeweldesign, Jheald, Jimmi Hugh, Jmencisom, Joyous!, Kayamon, Kennedy311,
Kimse, Kri, Kruusamägi, LaughingMan, Ldo, Ledow, Levork, Lindosland, Lkinkade, M-le-mot-dit, Maian, Mani1, Martarius, Mav, MaxRipper, Maximilian Schönherr, Mblumber, Mdd, Melaen,
Michael Hardy, MichaelMcGuffin, Minghong, Mkweise, Mmernex, Nbarth, New Age Retro Hippie, Obsidian Soul, Oicumayberight, Onopearls, Paladinwannabe2, Patrick, Paul A, Phil Boswell,
Phresnel, Phrood, Pinbucket, Piquan, Pit, Pixelbox, Pongley, Poweroid, Ppe42, Pqnd Render, RJHall, Ravedave, Reedbeta, Rich Farmbrough, Rilak, Ronz, Sam Hocevar, Seasage, Shawnc,
SiobhanHansa, Slady, Solarra, Spitfire8520, Sterrys, Stj6, Sverdrup, Tassedethe, Tesi1700, The Anome, TheProject, Tiggerjay, Tomruen, Urocyon, Vasiliy Faronov, Veinor, Vervadr, Wapcaplet,
Wik, Wikiedit555, Wikispaghetti, William Burroughs, Wmahan, Wolfkeeper, Xugo, 253 anonymous edits
Retained mode Source: http://en.wikipedia.org/w/index.php?oldid=578246862 Contributors: BAxelrod, Bovineone, Chris Chittleborough, Damian Yerrick, Klassobanieras, Peter L, Simeon,
SteveBaker, Uranographer, 13 anonymous edits
Scanline rendering Source: http://en.wikipedia.org/w/index.php?oldid=574813227 Contributors: Aitias, Andreas Kaufmann, CQJ, Dicklyon, Eaglizard, Edward, Epicgenius, Gerard Hill, Gioto,
Harryboyles, Hooperbloob, Iskander HFC, Jesse Viviano, Ldo, Lordmetroid, Moroder, Nixdorf, Phoz, Pinky deamon, RJHall, Rilak, Rjwilmsi, Samwisefoxburr, SimonP, Sterrys, Taemyr,
Thatotherperson, Thejoshwolfe, Timo Honkasalo, Valarauka, Walter bz, Wapcaplet, Weimont, Wesley, Wiki Raja, Xinjinbei, 42 anonymous edits
Schlick's approximation Source: http://en.wikipedia.org/w/index.php?oldid=589477369 Contributors: Alhead, AlphaPyro, Anticipation of a New Lover's Arrival, The, AySz88,
BenFrantzDale, KlappCK, Kri, Shenfy, Svick, 10 anonymous edits
Screen Space Ambient Occlusion Source: http://en.wikipedia.org/w/index.php?oldid=573324688 Contributors: 3d engineer, ALoopingIcon, Aceleo, Adsamcik, AndyTheGrump, Bgwhite,
Bombe, Buxley Hall, Chris the speller, Closedmouth, CommonsDelinker, CoolingGibbon, Cre-ker, Dcuny, Dontstopwalking, Ethryx, Ferret, Frap, Fuhghettaboutit, Gary King, Gerweck,
GoingBatty, IRWeta, InvertedSaint, JCChapman, Jackattack51, Jonesey95, KPudlo, Kri, Leadwerks, Leon3289, LogiNevermore, Lokator, Luke831, Malcolmxl5, ManiaChris, Manolo w,
NimbusTLD, ProjectPaatt, Pyronite, Retep998, SammichNinja, Sdornan, Sethi Xzon, Sietl, Sigmundur, Silverbyte, Stimpy77, Strata8, The Z UKBG, Tim1357, Tylerp9p, UncleZeiv, Vlad3D,
Woohookitty, 206 anonymous edits
Self-shadowing Source: http://en.wikipedia.org/w/index.php?oldid=568680176 Contributors: Amalas, Bender235, Drat, Eekerz, Invertzoo, Jean-Frédéric, Jeff3000, Llorenzi, Midkay, Roxis,
Shawnc, Some guy, Vendettax, Woohookitty, XenoL-Type, 1 anonymous edits
Shadow mapping Source: http://en.wikipedia.org/w/index.php?oldid=587903917 Contributors: 7, Antialiasing, Aresio, Ashwin, Dominicos, Dormant25, Eekerz, Forderud, Fresheneesz,
GDallimore, Icehose, Klassobanieras, Kostmo, M-le-mot-dit, Mattijsvandelden, Midnightzulu, Mrwojo, Pearle, Praetor alpha, Rainwarrior, ShashClp, Starfox, Sterrys, Tommstein, 56 anonymous
edits
Shadow volume Source: http://en.wikipedia.org/w/index.php?oldid=563966011 Contributors: Abstracte, AlistairMcMillan, Ayavaron, Chealer, Closedmouth, Cma, Damian Yerrick, Darklilac,
Eekerz, Eric Lengyel, Forderud, Fractal3, Frecklefoot, Fresheneesz, GDallimore, Gamer Eek, J.delanoy, Jaxad0127, Jtsiomb, Jwir3, Klassobanieras, LOL, LiDaobing, Lkinkade, Lord Nightmare,
Mark kilgard, Mboverload, Mctylr, MoraSique, Mrwojo, Ost316, PigFlu Oink, Praetor alpha, Rainwarrior, Rivo, Rjwilmsi, Slicing, Snoyes, Some guy, Starfox, Staz69uk, Steve Leach, Swatoa,
Technobadger, TheDaFox, Tommstein, URB, Zolv, Zorexx, , 56 anonymous edits
Silhouette edge Source: http://en.wikipedia.org/w/index.php?oldid=519305799 Contributors: BenFrantzDale, David Levy, Forderud, Gaius Cornelius, Quibik, RJHall, Rjwilmsi, Wheger, 17
anonymous edits
Spectral rendering Source: http://en.wikipedia.org/w/index.php?oldid=574813239 Contributors: 1ForTheMoney, Bility, Brighterorange, Dead-Portalist, Ldo, Shentino, Srleffler, Tatu Siltanen,
Xcelerate, 11 anonymous edits
Specular highlight Source: http://en.wikipedia.org/w/index.php?oldid=553971900 Contributors: Altenmann, Bautze, BenFrantzDale, Cgbuff, Connelly, Dhatfield, Dicklyon, ERobson, Eekerz,
Ettrig, Jakarr, Jason Quinn, JeffBobFrank, Jtlehtin, Jwhiteaker, KKelvinThompson, KlappCK, Kri, Lapinplayboy, Michael Hardy, Mindmatrix, Mmikkelsen, Nagualdesign, Niello1,
Plowboylifestyle, RJHall, Reedbeta, Ti chris, Tommy2010, Versatranitsonlywaytofly, Wizard191, 38 anonymous edits
Specularity Source: http://en.wikipedia.org/w/index.php?oldid=581501673 Contributors: Barticus88, Dori, Fluffystar, Frap, Hetar, JDspeeder1, Jh559, M-le-mot-dit, Megan1967, Mild Bill
Hiccup, Nboughen, Neonstarlight, Nintend06, Oliver Lineham, Utrecht gakusei, Volfy, 5 anonymous edits
Sphere mapping Source: http://en.wikipedia.org/w/index.php?oldid=403586902 Contributors: AySz88, BenFrantzDale, Digulla, Eekerz, Jahoe, Paolo.dL, Smjg, SteveBaker, Tim1357, 1
anonymous edits

Stencil buffer Source: http://en.wikipedia.org/w/index.php?oldid=585834901 Contributors: BluesD, ChrisGualtieri, Claynoik, Cyc, Ddawson, Eep, Forderud, Furrykef, Guitpicker07,
HMSSolent, Kitedriver, Ldo, Levj, Mhalberstein, MrKIA11, Mrwojo, O.mangold, Rainwarrior, Wbm1058, Zvar, , 22 anonymous edits
Stencil codes Source: http://en.wikipedia.org/w/index.php?oldid=518802653 Contributors: AManWithNoPlan, Bebestbe, ChrisHodgesUK, Gentryx, Jncraton, Michael Hardy, Reyk,
Vegaswikian, 9 anonymous edits
Subdivision surface Source: http://en.wikipedia.org/w/index.php?oldid=572260730 Contributors: Ablewisuk, Abmac, Andreas Fabri, Ati3414, Banus, Berland, BoredTerry, Boubek, Brock256,
Bubbleshooting, CapitalR, Charles Matthews, Crucificator, David Eppstein, Decora, Deodar, Feureau, Flamurai, Forderud, Furrykef, Giftlite, Husond, Khazar2, Korval, Lauciusa, Levork,
Listmeister, Lomacar, MIT Trekkie, Mark viking, Moritz Moeller, MoritzMoeller, Mysid, Nczempin, Norden83, Pifthemighty, Quinacrine, Qutezuce, RJHall, Radioflux, Rasmus Faber,
Romainbehar, Shorespirit, Smcquay, Surfgeom, Tabletop, The-Wretched, WorldRuler99, Xingd, 55 anonymous edits
Subsurface scattering Source: http://en.wikipedia.org/w/index.php?oldid=590118321 Contributors: ALoopingIcon, Azekeal, BenFrantzDale, Dominicos, Fama Clamosa, Frap, InternetMeme,
Kri, Ldo, Meekohi, Mic ma, Mudd1, NRG753, Piotrek Chwała, Quadell, RJHall, Raffamaiden, Reedbeta, Robertvan1, Rufous, T-tus, Tinctorius, WereSpielChequers, Xezbeth, 22 anonymous
edits
Surface caching Source: http://en.wikipedia.org/w/index.php?oldid=542303130 Contributors: Amalas, AnteaterZot, AvicAWB, Brian Geppert, Forderud, Fredrik, Hephaestos, KirbyMeister,
LOL, Lockley, Markb, Mika1h, Miyagawa, Resoru, Schneelocke, Thunderbrand, Tregoweth, 16 anonymous edits
Texel Source: http://en.wikipedia.org/w/index.php?oldid=574813262 Contributors: Altenmann, Beno1000, BorisFromStockdale, Dicklyon, Flammifer, Furrykef, Gamer3D, Jamelan, Jynus,
Kmk35, Ldo, MIT Trekkie, Marasmusine, MementoVivere, Miracle Pen, Neckelmann, Neg, Nlu, ONjA, Quoth, RainbowCrane, Rilak, Sterrys, Thilo, Uusijani, Zarex, Zbbentley,
, , 25 anonymous edits
Texture atlas Source: http://en.wikipedia.org/w/index.php?oldid=574813268 Contributors: Abdull, Andreasloew, DarioFixe, Ed welch2, Eekerz, Fram, Gosox5555, Ldo, Mattg82, Melfar,
MisterPhyrePhox, Remag Kee, Spodi, Tardis, 14 anonymous edits
Texture filtering Source: http://en.wikipedia.org/w/index.php?oldid=562917262 Contributors: Alanius, Arnero, Banano03, Benx009, BobtheVila, Brighterorange, ChrisGualtieri, CoJaBo,
Dawnseeker2000, Eekerz, Flamurai, GeorgeOne, Gerweck, Gromobir, Hooperbloob, Jagged 85, Jusdafax, Michael Hardy, Mild Bill Hiccup, Obsidian Soul, RJHall, Remag Kee, Rich
Farmbrough, Shvelven, Srleffler, Tavla, Tolkien fan, Valarauka, Wilstrup, Xompanthy, , 27 anonymous edits
Texture mapping Source: http://en.wikipedia.org/w/index.php?oldid=592005266 Contributors: 16@r, ALoopingIcon, Abmac, Achraf52, Al Fecund, Alfio, Annicedda, Anyeverybody, Arjayay,
Arnero, Art LaPella, AstrixZero, AzaToth, Barticus88, Besieged, Biasoli, BluesD, Blueshade, Canadacow, Cclothier, Chadloder, Collabi, CrazyTerabyte, Daniel Mietchen, DanielPharos,
Davepape, Dhatfield, Djanvk, Donaldrap, Dwilches, Eekerz, Eep, Elf, EoGuy, Fawzma, Furrykef, GDallimore, Gamer3D, Gbaor, Gboods, Gerbrant, Giftlite, Goododa, GrahamAsher, Gut
informiert, Helianthi, Hellknowz, Heppe, Imroy, Isnow, JIP, Jackoutofthebox, Jagged 85, Jesse Viviano, Jfmantis, JonH, Kaneiderdaniel, Kate, KnowledgeOfSelf, Kri, Kusmabite, LOL, Luckyz,
M.J. Moore-McGonigal PhD, P.Eng, MIT Trekkie, ML, Mackseem, Martin Kozák, MarylandArtLover, Mav, MaxDZ8, Michael Hardy, Michael.Pohoreski, Micronjan, Neelix, Novusspero,
Obsidian Soul, Oicumayberight, Ouzari, Palefire, Plasticup, Pvdl, Qutezuce, RC-1290, RJHall, Rainwarrior, Rich Farmbrough, Ronz, SchuminWeb, Sengkang, Simon Fenney, Simon the Dragon,
SimonP, SiobhanHansa, Solipsist, SpunkyBob, Srleffler, Stephen, Svick, T-tus, Tarinth, TheAMmollusc, Tompsci, Toonmore, Twas Now, Vaulttech, Vitorpamplona, Viznut, Wayne Hardman,
Widefox, Willsmith, Ynhockey, Zom-B, Zzuuzz, 120 anonymous edits
Texture synthesis Source: http://en.wikipedia.org/w/index.php?oldid=574276275 Contributors: Akinoame, Altar, Banaticus, Barticus88, Borsi112, ChrisGualtieri, Cmdrjameson,
CommonsDelinker, CrimsonTexture, Darine Of Manor, Davidhorman, Dhatfield, Disavian, Drlanman, Emayv, Ennetws, Hu12, Instantaneous, Jhhays, John of Reading, Kellen, Kukini, Ldo,
Ljay2two, LucDecker, Mehrdadh, Michael Hardy, Nbarth, Nezbie, Nilx, Rich Farmbrough, Rpaget, Simeon, Spark, Spot, Straker, Tbhotch, TerriersFan, That Guy, From That Show!,
TheAMmollusc, Thetawave, Tom Paine, 46 anonymous edits
Tiled rendering Source: http://en.wikipedia.org/w/index.php?oldid=590268434 Contributors: 1ForTheMoney, Adavidb, Bility, CosineKitty, Dobie80, Eekerz, Imroy, Jesse Viviano, Khazar2,
Kinema, Kku, Ldo, LokiClock, Mblumber, Milan Keršláger, Otolemur crassicaudatus, Remag Kee, Seantellis, TJ Spyke, The Anome, ToolmakerSteve, Walter bz, Wavelength, Woohookitty, 18
anonymous edits
UV mapping Source: http://en.wikipedia.org/w/index.php?oldid=583794875 Contributors: Bk314159, Diego Moya, DotShell, Eduard pintilie, Eekerz, Ennetws, Ep22, Fractal3, Jleedev,
LucasVB, Lupinewulf, Mrwojo, Phatsphere, Radical Mallard, Radioflux, Raybellis, Raymond Grier, Rich Farmbrough, Richard7770, Romeu, Schorschi, Simeon, Werddemer, Yworo, Zephyris,
, 43 anonymous edits
UVW mapping Source: http://en.wikipedia.org/w/index.php?oldid=492403311 Contributors: Ajstov, Eekerz, Kenchikuben, Kuru, Mackseem, Nimur, Reach Out to the Truth, Romeu, Vaxquis,
5 anonymous edits
Vertex Source: http://en.wikipedia.org/w/index.php?oldid=585830807 Contributors: ABF, Aaron Kauppi, AbigailAbernathy, Aitias, Americanhero, Anyeverybody, Ataleh, Azylber,
Butterscotch, CMBJ, Coopkev2, Crisis, Cronholm144, David Eppstein, DeadEyeArrow, Discospinster, DoubleBlue, Duoduoduo, Epicgenius, Escape Orbit, Fixentries, Fly by Night, Funandtrvl,
Giftlite, Hvn0413, Icairns, J.delanoy, JForget, Jamesx12345, Knowz, Leuko, M.Virdee, Magioladitis, MarsRover, Martin von Gagern, Mecanismo, Mendaliv, Methecooldude,
Mhaitham.shammaa, Mikayla102295, Miym, NatureA16, Orange Suede Sofa, Panscient, Petrb, Pinethicket, Pumpmeup, R'n'B, Racerx11, SGBailey, SchfiftyThree, Shinli256, Shyland,
SimpleParadox, Squids and Chips, StaticGull, Steelpillow, Synchronism, TheWeakWilled, TimtheTarget, Tomruen, WaysToEscape, William Avery, WissensDürster, Wywin, , 155
anonymous edits
Vertex Buffer Object Source: http://en.wikipedia.org/w/index.php?oldid=592187403 Contributors: Acdx, Allenc28, BRW, Frecklefoot, GoingBatty, Jgottula, Joy, Korval, Ng Pey Shih 07,
Omgchead, Psychonaut, Red Act, Robertbowerman, Tarantulae, 32 anonymous edits
Vertex normal Source: http://en.wikipedia.org/w/index.php?oldid=544534184 Contributors: Anders Sandberg, David Eppstein, Eekerz, MagiMaster, Manop, Michael Hardy, Reyk, 1
anonymous edits
Viewing frustum Source: http://en.wikipedia.org/w/index.php?oldid=589795858 Contributors: Archelon, AvicAWB, Craig Pemberton, Crossmr, Cyp, DavidCary, Dbchristensen, Dpv, Eep,
Flamurai, Gdr, Hymek, Innercash, LarsPensjo, M-le-mot-dit, MithrandirMage, MusicScience, Nimur, Poccil, RJHall, Reedbeta, Robth, Shashank Shekhar, Torav, Welsh, Widefox, , 14
anonymous edits
Virtual actor Source: http://en.wikipedia.org/w/index.php?oldid=573745819 Contributors: ASU, Aqwis, BD2412, Bensin, Chowbok, Danielthalmann, Deacon of Pndapetzim, Donfbreed,
DragonflySixtyseven, ErkDemon, FernoKlump, Fu Kung Master, Hughdbrown, Jabberwoch, Joseph A. Spadaro, Lenticel, LilHelpa, Martarius, Martijn Hoekstra, Mikola-Lysenko, NYKevin,
Neelix, Otto4711, Piski125, Retired username, Sammy1000, Tavix, Uncle G, Vassyana, Woohookitty, Xezbeth, 25 anonymous edits
Volume rendering Source: http://en.wikipedia.org/w/index.php?oldid=591336466 Contributors: 10k, Andrewmu, Anilknyn, Art LaPella, Bcgrossmann, Beckman16, Berland, Bodysurfinyon,
Breuwi, Butros, CallipygianSchoolGirl, Cameron.walsh, Cengizcelebi, Chalkie666, Charles Matthews, Chowbok, Chroniker, Craig Pemberton, Crlab, Ctachme, DGG, Damian Yerrick,
Davepape, Decora, Deli nk, Dhatfield, Dmotion, Dsajga, Eduardo07, Edward, Egallois, Emal35, Exocom, GL1zdA, Greystar92, Hu12, Iab0rt4lulz, Iweber2003, JHKrueger, JeffDonner, Julesd,
Kostmo, Kri, Lackas, Lambiam, Ldo, Levin, Locador, Male1979, Mandarax, Martarius, Mdd, Mugab, Nbarth, Nippashish, Pathak.ab, Pearle, Praetor alpha, PretentiousSnot, RJHall, Rich
Farmbrough, Rilak, Rjwilmsi, Rkikinis, Sam Hocevar, Sjappelodorus, Sjschen, Squids and Chips, Stefanbanev, Sterrys, Theroadislong, Thetawave, TimBentley, Tobo, Tom1.xeon, Uncle Dick,
Welsh, Whibbard, Wilhelm Bauer, Wolfkeeper, Yvesb, Ömer Cengiz Çelebi, 130 anonymous edits
Volumetric lighting Source: http://en.wikipedia.org/w/index.php?oldid=574813302 Contributors: Amalas, Berserker79, Edoe2, Fusion7, GregorB, IgWannA, KlappCK, Ldo, Lumoy, Tylerp9p,
VoluntarySlave, Wwwwolf, Xanzzibar, 23 anonymous edits
Voxel Source: http://en.wikipedia.org/w/index.php?oldid=593062684 Contributors: Accounting4Taste, Alansohn, Alfio, Andreba, Andrewmu, Ariesdraco, Aursani, Axl, B-a-b, BenFrantzDale,
Bendykst, Biasedeyes, Bigdavesmith, Blackberry Sorbet, BlindWanderer, Bojilov, Borek, Bornemix, Calliopejen1, Carpet, Centrx, Chris the speller, CommonsDelinker, Craig Pemberton,
Cristan, Ctachme, CyberSkull, Czar, Daeval, Damian Yerrick, Dawidl, DefenceForce, Diego Moya, Dragon1394, DreamGuy, Dubyrunning, Editorfun, Erik Zachte, Everyking, Flarn2006,
Fredrik, Frostedzeo, Fubar Obfusco, Furrykef, George100, Gordmoo, Gousst, Gracefool, GregorB, Hairy Dude, Haya shiloh, Hendricks266, Hplusplus, INCSlayer, Jaboja, Jagged 85, Jamelan,
Jarble, Jedlinlau, Jedrzej s, John Nevard, Karl-Henner, KasugaHuang, Kbdank71, Kelson, Kuroboushi, Lambiam, LeeHunter, LordCazicThule, MGlosenger, Maestrosync, Marasmusine,
Mindmatrix, Miterdale, Mlindstr, Moondoggy, MrOllie, MrScorch6200, Mwtoews, My Core Competency is Competency, Null Nihils, OllieFury, Omegatron, P M Yonge, PaterMcFly, Pearle,
Pengo, Petr Kopa, Pine, Pleasantville, Pythagoras1, RJHall, Rajatojha, Retodon8, Roidroid, Romainhk, Ronz, Rwalker, Sallison, Saltvik, Satchmo, Schizobullet, SharkD, Shentino, Simeon,
Softy, Soyweiser, SpeedyGonsales, Spg3D, Stampsm, Stefanbanev, Stephen Morley, Stormwatch, SuperDuffMan, Suruena, The Anome, Thefirstfrontier, Thumperward, Thunderklaus, Tiedoxi,
Tinclon, Tncomp, Tomtheeditor, Torchiest, Touchaddict, VictorAnyakin, Victordiaz, Vossman, Voxii, Waldir, Wavelength, Wernher, WhiteHatLurker, Wlievens, Woodroar, Wyrmmage,

Xanzzibar, XavierXerxes, Xezbeth, ZeiP, ZeroOne, 256 anonymous edits
Z-buffering Source: http://en.wikipedia.org/w/index.php?oldid=581972890 Contributors: Aakashrajain, Abmac, Alexcat, Alfakim, Alfio, Amillar, Antical, Archelon, Arnero, AySz88, Bcwhite,
BenFrantzDale, Bohumir Zamecnik, Bookandcoffee, CPnieuws, Chadloder, CodeCaster, Cutler, David Eppstein, DavidHOzAu, Delt01, Destynova, Drfrogsplat, Feraudyh, Fredrik, Furrykef,
Fuzzbox, GeorgeBills, Harutsedo2, Jmorkel, John of Reading, Kaszeta, Komap, Kotasik, Koza1983, Landon1980, Laoo Y, Ldo, LogiNevermore, LokiClock, Mav, Mild Bill Hiccup, Moroder,
Mronge, Msikma, Nowwatch, PenguiN42, Pgoergen, RJHall, Rainwarrior, Salam32, SchreiberBike, SoledadKabocha, Solkoll, Sterrys, T-tus, Tobias Bergemann, ToohrVyk, TuukkaH,
Wbm1058, Wik, Wikibofh, Zeus, Zoicon5, Zotel, , 75 anonymous edits
Z-fighting Source: http://en.wikipedia.org/w/index.php?oldid=585757840 Contributors: AxelBoldt, AySz88, CesarB, Chentianran, CompuHacker, Furrykef, Gamer Eek, Hyacinth, Jeepday,
Ldo, Mhoskins, Mrwojo, Nayuki, Otac0n, OwenBlacker, RJHall, Rainwarrior, Rbrwr, Reedbeta, The Rambling Man, Vacation9, Waldir, , 19 anonymous edits
3D computer graphics software Source: http://en.wikipedia.org/w/index.php?oldid=591198496 Contributors: -Midorihana-, 16@r, 3DAnimations.biz, 790, 99neurons, ALoopingIcon, Adrian
1001, Agentbla, Al Hart, Alanbly, AlexTheMartian, Alibaba327, Andek714, Antientropic, Aquilosion, Archizero, Arneoog, AryconVyper, Asav, Autumnalmonk, Bagatelle, BananaFiend,
BcRIPster, Beetstra, Bertmg, Bigbluefish, Blackbox77, Bobsterling1975, Book2, Bovineone, Brenont, Bsmweb3d, Bwildasi, Byronknoll, CALR, CairoTasogare, CallipygianSchoolGirl, Candyer,
Canoe1967, Carioca, Ccostis, Chowbok, Chris Borg, Chris TC01, Chris the speller, Chrisminter, Chromecat, Cjrcl, Codename Lisa, CorporateM, Cremepuff222, Cyon Steve, Cyrre, Davester78,
Dekisugi, Dgirardeau, Dicklyon, Dlee3d, Dobie80, Dodger, Dr. Woo, DriveDenali, Dryo, Dsavi, Dto, Dynaflow, EEPROM Eagle, ERobson, ESkog, Edward, Eiskis, Elf, Elfguy, Emal35,
EncMstr, Enigma100cwu, Enquire, EpsilonSquare, ErkDemon, Erp Erpington, Euchiasmus, Extremophile, Fiftyquid, Firsfron, Forderud, Frecklefoot, Fu Kung Master, GTBacchus, Gaius
Cornelius, Gal911, Genius101, Goncalopp, Greg L, GustavTheMushroom, Gvancollie, Herorev, Holdendesign, HoserHead, Hyad, Iamsouthpaw, IanManka, Im.thatoneguy, Intgr, Inthoforo,
Iphonefans2009, Iridescent, JLaTondre, Jameshfisher, Jan Tomanek, JayDez, Jdm64, Jdtyler, Jncraton, JohnCD, Joshmings, Jreynaga, Jstier, Jtanadi, Juhame, Julian Herzog, K8 fan, KDS4444,
KVDP, Kev Boy, Koffeinoverdos, Kotakotakota, Lambda, Lantrix, Laurent Canc, Lead holder, Lerdthenerd, LetterRip, Licu, Lifeweaver, Lightworkdesign, LilHelpa, Litherlandsand, Lolbill58,
Longhair, M.J. Moore-McGonigal PhD, P.Eng, Malcolmxl5, Mandarax, Marcelswiss, Markhobley, Martarius, Materialscientist, Matuhin86, Mayalld, Michael Devore, Michael b strickland, Mike
Gale, Millahnna, Mlfarrell, Mojo Hand, Mr mr ben, MrOllie, NeD80, NeoKron, Nev1, Nick Drake, Nickdi2012, Nixeagle, Nopnopzero, Nutiketaiel, Oddbodz, Oicumayberight, Optigon.wings,
Ouzari, Papercyborg, Parametric66, Parscale, Paul Stansifer, Pepelyankov, Phiso1, Plan, Quincy2010, Radagast83, Raffaele Megabyte, Ramu50, Rapasaurus, Raven in Orbit, Relux2007,
Requestion, Rich Farmbrough, Ronz, Rtc, Ryan Postlethwaite, Samtroup, SchreiberBike, Scotttsweeney, Sendai2ci, Serioussamp, ShaunMacPherson, Skhedkar, Skinnydow, SkyWalker, Skybum,
Smalljim, Snarius, Snoblomma, Sparklyindigopink, Sparkwoodand21, Speck-Made, Spg3D, Stib, Strattonbrazil, Sugarsmax, Tbsmith, Team FS3D, TheRealFennShysa, Thecrusader 440,
Three1415, Thymefromti, Tim1357, Tommato, Tritos, Truthdowser, Uncle Dick, VRsim, Vdf22, Victordiaz, VitruV07, Waldir, WallaceJackson, Wcgteach, Weetoddid, Welsh,
WereSpielChequers, Woohookitty, Wsultzbach, Xx3nvyxx, Yellowweasel, ZanQdo, Zarius, Zundark, , 430 anonymous edits

Image Sources, Licenses and Contributors

File:Glasses 800 edit.png Source: http://en.wikipedia.org/w/index.php?title=File:Glasses_800_edit.png License: Public Domain Contributors: Gilles Tran
File:Yellow Submarine Second Life.png Source: http://en.wikipedia.org/w/index.php?title=File:Yellow_Submarine_Second_Life.png License: Creative Commons Attribution 3.0
Contributors: Canoe1967
Image:Raytraced image jawray.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Raytraced_image_jawray.jpg License: Attribution Contributors: User Jawed on en.wikipedia
Image:Glasses 800 edit.png Source: http://en.wikipedia.org/w/index.php?title=File:Glasses_800_edit.png License: Public Domain Contributors: Gilles Tran
Image:utah teapot.png Source: http://en.wikipedia.org/w/index.php?title=File:Utah_teapot.png License: Public domain Contributors: Gaius Cornelius, Kri, Mormegil, SharkD, 1 anonymous
edits
Image:Perspective Projection Principle.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Perspective_Projection_Principle.jpg License: GNU Free Documentation License
Contributors: Duesentrieb, EugeneZelenko, Fantagu
File:Show how 3D real time ambient occlusion works 2013-11-23 10-45.jpeg Source:
http://en.wikipedia.org/w/index.php?title=File:Show_how_3D_real_time_ambient_occlusion_works_2013-11-23_10-45.jpeg License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Snakkino
Image:Aocclude bentnormal.png Source: http://en.wikipedia.org/w/index.php?title=File:Aocclude_bentnormal.png License: Creative Commons Attribution-ShareAlike 3.0 Unported
Contributors: Original uploader was Mrtheplague at en.wikipedia
File:MipMap Example STS101 Anisotropic.png Source: http://en.wikipedia.org/w/index.php?title=File:MipMap_Example_STS101_Anisotropic.png License: GNU Free Documentation
License Contributors: MipMap_Example_STS101.jpg: en:User:Mulad, based on a NASA image derivative work: Kri (talk)
Image:Image-resample-sample.png Source: http://en.wikipedia.org/w/index.php?title=File:Image-resample-sample.png License: Public Domain Contributors: en:user:mmj
File:Example of BSP tree construction - step 1.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_1.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_2.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 3.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_3.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 4.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_4.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 5.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_5.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 6.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_6.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 7.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_7.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 8.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_8.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree construction - step 9.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_construction_-_step_9.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Zahnradzacken
File:Example of BSP tree traversal.svg Source: http://en.wikipedia.org/w/index.php?title=File:Example_of_BSP_tree_traversal.svg License: Creative Commons Attribution-Sharealike 3.0
Contributors: Zahnradzacken
Image:BoundingBox.jpg Source: http://en.wikipedia.org/w/index.php?title=File:BoundingBox.jpg License: Creative Commons Attribution 2.0 Contributors: Bayo, Maksim, Metoc,
WikipediaMaster
File:Bump-map-demo-full.png Source: http://en.wikipedia.org/w/index.php?title=File:Bump-map-demo-full.png License: GNU Free Documentation License Contributors:
Bump-map-demo-smooth.png, Orange-bumpmap.png and Bump-map-demo-bumpy.png: Original uploader was Brion VIBBER at en.wikipedia Later version(s) were uploaded by McLoaf at
en.wikipedia. derivative work: GDallimore (talk)
File:Bump map vs isosurface2.png Source: http://en.wikipedia.org/w/index.php?title=File:Bump_map_vs_isosurface2.png License: Public Domain Contributors: GDallimore
Image:Catmull-Clark subdivision of a cube.svg Source: http://en.wikipedia.org/w/index.php?title=File:Catmull-Clark_subdivision_of_a_cube.svg License: GNU Free Documentation License
Contributors: Ico83, Kilom691, Mysid, Zundark
Image:Eulerangles.svg Source: http://en.wikipedia.org/w/index.php?title=File:Eulerangles.svg License: Creative Commons Attribution 3.0 Contributors: Lionel Brits
Image:plane.svg Source: http://en.wikipedia.org/w/index.php?title=File:Plane.svg License: Creative Commons Attribution 3.0 Contributors: Original uploader was Juansempere at
en.wikipedia.
File:Panorama cube map.png Source: http://en.wikipedia.org/w/index.php?title=File:Panorama_cube_map.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: SharkD
File:Environment mapping.png Source: http://en.wikipedia.org/w/index.php?title=File:Environment_mapping.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:George7378
File:Lambert2.gif Source: http://en.wikipedia.org/w/index.php?title=File:Lambert2.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: GianniG46
Image:Diffuse reflection.gif Source: http://en.wikipedia.org/w/index.php?title=File:Diffuse_reflection.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: GianniG46
File:Diffuse reflection.PNG Source: http://en.wikipedia.org/w/index.php?title=File:Diffuse_reflection.PNG License: GNU Free Documentation License Contributors: Original uploader was
Theresa knott at en.wikipedia
Image:Displacement.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Displacement.jpg License: Creative Commons Attribution 2.0 Contributors: Original uploader was T-tus at
en.wikipedia
Image:DooSabin mesh.png Source: http://en.wikipedia.org/w/index.php?title=File:DooSabin_mesh.png License: Public domain Contributors: Fredrik Orderud
Image:DooSabin subdivision.png Source: http://en.wikipedia.org/w/index.php?title=File:DooSabin_subdivision.png License: Public Domain Contributors: Zundark
file:Local illumination.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Local_illumination.JPG License: Public Domain Contributors: Danhash, Gabriel VanHelsing, Gtanski,
Jollyroger, Joolz, Kri, Mattes, Metoc, Paperquest, PierreSelim
file:Global illumination.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Global_illumination.JPG License: Public Domain Contributors: user:Gtanski
File:Gouraudshading00.png Source: http://en.wikipedia.org/w/index.php?title=File:Gouraudshading00.png License: Public Domain Contributors: Maarten Everts
File:D3D Shading Modes.png Source: http://en.wikipedia.org/w/index.php?title=File:D3D_Shading_Modes.png License: Public Domain Contributors: Luk Buriin
Image:Gouraud_low_anim.gif Source: http://en.wikipedia.org/w/index.php?title=File:Gouraud_low_anim.gif License: Creative Commons Attribution 2.0 Contributors: Jalo, Kri, Man vyi,
Origamiemensch, WikipediaMaster, Wst, Yzmo
Image:Gouraud_high.gif Source: http://en.wikipedia.org/w/index.php?title=File:Gouraud_high.gif License: Creative Commons Attribution 2.0 Contributors: Freddo, Jalo, Origamiemensch,
WikipediaMaster, Yzmo
File:Obj lineremoval.png Source: http://en.wikipedia.org/w/index.php?title=File:Obj_lineremoval.png License: GNU Free Documentation License Contributors: AnonMoos, Maksim,
WikipediaMaster
Image:Isosurface on molecule.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Isosurface_on_molecule.jpg License: unknown Contributors: Kri, StoatBringer, 1 anonymous edits
File:CFD simulation showing vorticity isosurfaces behind propeller.png Source:
http://en.wikipedia.org/w/index.php?title=File:CFD_simulation_showing_vorticity_isosurfaces_behind_propeller.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:Citizenthom
Image:Lambert Cosine Law 1.svg Source: http://en.wikipedia.org/w/index.php?title=File:Lambert_Cosine_Law_1.svg License: Public Domain Contributors: Inductiveload

Image:Lambert Cosine Law 2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Lambert_Cosine_Law_2.svg License: Public Domain Contributors: Inductiveload
Image:DiscreteLodAndCullExampleRanges.MaxDZ8.svg Source: http://en.wikipedia.org/w/index.php?title=File:DiscreteLodAndCullExampleRanges.MaxDZ8.svg License: Public Domain
Contributors: MaxDZ8
Image:WireSphereMaxTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereMaxTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:WireSphereHiTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereHiTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:WireSphereStdTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereStdTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:WireSphereLowTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereLowTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:WireSphereMinTass.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WireSphereMinTass.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:SpheresBruteForce.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:SpheresBruteForce.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:SpheresLodded.MaxDZ8.jpg Source: http://en.wikipedia.org/w/index.php?title=File:SpheresLodded.MaxDZ8.jpg License: Public Domain Contributors: MaxDZ8
Image:DifferenceImageBruteLod.MaxDZ8.png Source: http://en.wikipedia.org/w/index.php?title=File:DifferenceImageBruteLod.MaxDZ8.png License: Public Domain Contributors:
MaxDZ8
Image:MipMap Example STS101.jpg Source: http://en.wikipedia.org/w/index.php?title=File:MipMap_Example_STS101.jpg License: GNU Free Documentation License Contributors:
en:User:Mulad, based on a NASA image
File:Mipmap illustration1.png Source: http://en.wikipedia.org/w/index.php?title=File:Mipmap_illustration1.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: PhL38F
File:Mipmap illustration2.png Source: http://en.wikipedia.org/w/index.php?title=File:Mipmap_illustration2.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: PhL38F
Image:Painters_problem.png Source: http://en.wikipedia.org/w/index.php?title=File:Painters_problem.png License: GNU Free Documentation License Contributors: Bayo, Grafite,
Kilom691, Maksim, Paulo Cesar-1, 1 anonymous edits
Image:NURBS 3-D surface.gif Source: http://en.wikipedia.org/w/index.php?title=File:NURBS_3-D_surface.gif License: Creative Commons Attribution-Sharealike 3.0 Contributors: Greg A L
Image:NURBstatic.svg Source: http://en.wikipedia.org/w/index.php?title=File:NURBstatic.svg License: GNU Free Documentation License Contributors: Original uploader was
WulfTheSaxon at en.wikipedia.org
Image:motoryacht design i.png Source: http://en.wikipedia.org/w/index.php?title=File:Motoryacht_design_i.png License: GNU Free Documentation License Contributors: Original uploader
was Freeformer at en.wikipedia Later version(s) were uploaded by McLoaf at en.wikipedia.
Image:Surface modelling.svg Source: http://en.wikipedia.org/w/index.php?title=File:Surface_modelling.svg License: GNU Free Documentation License Contributors: Surface1.jpg: Maksim
derivative work: Vladsinger (talk)
Image:nurbsbasisconstruct.png Source: http://en.wikipedia.org/w/index.php?title=File:Nurbsbasisconstruct.png License: GNU Free Documentation License Contributors:
Mauritsmaartendejong, McLoaf, 1 anonymous edits
Image:nurbsbasislin2.png Source: http://en.wikipedia.org/w/index.php?title=File:Nurbsbasislin2.png License: GNU Free Documentation License Contributors: Mauritsmaartendejong,
McLoaf, Quadell, 1 anonymous edits
Image:nurbsbasisquad2.png Source: http://en.wikipedia.org/w/index.php?title=File:Nurbsbasisquad2.png License: GNU Free Documentation License Contributors: Mauritsmaartendejong,
McLoaf, Quadell, 1 anonymous edits
Image:Normal vectors2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Normal_vectors2.svg License: Public Domain Contributors: Cdang, Oleg Alexandrov, 2 anonymous edits
Image:Surface normal illustration.png Source: http://en.wikipedia.org/w/index.php?title=File:Surface_normal_illustration.png License: Public Domain Contributors: Oleg Alexandrov
Image:Surface normal.png Source: http://en.wikipedia.org/w/index.php?title=File:Surface_normal.png License: Public Domain Contributors: Original uploader was Oleg Alexandrov at
en.wikipedia
Image:Reflection angles.svg Source: http://en.wikipedia.org/w/index.php?title=File:Reflection_angles.svg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors:
Arvelius, EDUCA33E, Ies
Image:Normal map example.png Source: http://en.wikipedia.org/w/index.php?title=File:Normal_map_example.png License: Creative Commons Attribution-ShareAlike 1.0 Generic
Contributors: Juiced lemon, Julian Herzog, Maksim, Metoc
File:Normal map example with scene and result.png Source: http://en.wikipedia.org/w/index.php?title=File:Normal_map_example_with_scene_and_result.png License: Creative Commons
Attribution 3.0 Contributors: Julian Herzog
Image:Oren-nayar-vase1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-vase1.jpg License: GNU General Public License Contributors: M. Oren and S. Nayar. Original
uploader was Jwgu at en.wikipedia
Image:Oren-nayar-surface.png Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-surface.png License: Public domain Contributors: -
Image:Oren-nayar-reflection.png Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-reflection.png License: Public domain Contributors: -
Image:Oren-nayar-vase2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-vase2.jpg License: GNU General Public License Contributors: M. Oren and S. Nayar. Original
uploader was Jwgu at en.wikipedia
Image:Oren-nayar-vase3.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-vase3.jpg License: GNU General Public License Contributors: M. Oren and S. Nayar. Original
uploader was Jwgu at en.wikipedia
Image:Oren-nayar-sphere.png Source: http://en.wikipedia.org/w/index.php?title=File:Oren-nayar-sphere.png License: Public domain Contributors: -
File:Painter's algorithm.svg Source: http://en.wikipedia.org/w/index.php?title=File:Painter's_algorithm.svg License: GNU Free Documentation License Contributors: Zapyon
File:Magnify-clip.png Source: http://en.wikipedia.org/w/index.php?title=File:Magnify-clip.png License: Public Domain Contributors: User:Erasoft24
File:Painters problem.svg Source: http://en.wikipedia.org/w/index.php?title=File:Painters_problem.svg License: Public Domain Contributors: Wojciech Muła
Image:particle sys fire.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Particle_sys_fire.jpg License: Public Domain Contributors: Jtsiomb
Image:particle sys galaxy.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Particle_sys_galaxy.jpg License: Public Domain Contributors: User Jtsiomb on en.wikipedia
Image:Pi-explosion.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pi-explosion.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Sameboat
Image:Particle Emitter.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Particle_Emitter.jpg License: GNU Free Documentation License Contributors: Halixi72
Image:Strand Emitter.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Strand_Emitter.jpg License: GNU Free Documentation License Contributors: Anthony62490, Halixi72,
MER-C
Image:Bidirectional scattering distribution function.svg Source: http://en.wikipedia.org/w/index.php?title=File:Bidirectional_scattering_distribution_function.svg License: Public Domain
Contributors: Twisp
Image:Phong components version 4.png Source: http://en.wikipedia.org/w/index.php?title=File:Phong_components_version_4.png License: Creative Commons Attribution-ShareAlike 3.0
Unported Contributors: User:Rainwarrior
Image:Phong-shading-sample.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Phong-shading-sample.jpg License: Public Domain Contributors: Jalo, Mikhail Ryazanov,
WikipediaMaster, 1 anonymous edits
File:Glas-1000-enery.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Glas-1000-enery.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Tobias R, Metoc
Image:Procedural Texture.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Procedural_Texture.jpg License: GNU Free Documentation License Contributors: Gabriel VanHelsing,
Lionel Allorge, Metoc, Wiksaidit
file:Axonometric projection.svg Source: http://en.wikipedia.org/w/index.php?title=File:Axonometric_projection.svg License: Public Domain Contributors: Yuri Raysper
File:Perspective Transform Diagram.png Source: http://en.wikipedia.org/w/index.php?title=File:Perspective_Transform_Diagram.png License: Public Domain Contributors: Skytiger2, 1
anonymous edits
File:Diagonal rotation.png Source: http://en.wikipedia.org/w/index.php?title=File:Diagonal_rotation.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: MathsPoetry
File:Versor action on Hurwitz quaternions.svg Source: http://en.wikipedia.org/w/index.php?title=File:Versor_action_on_Hurwitz_quaternions.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Incnis Mrsi
File:Space of rotations.png Source: http://en.wikipedia.org/w/index.php?title=File:Space_of_rotations.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Flappiefh,
MathsPoetry, Phy1729, SlavMFM

File:Hypersphere of rotations.png Source: http://en.wikipedia.org/w/index.php?title=File:Hypersphere_of_rotations.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Hawky.diddiz, MathsPoetry, Perhelion, Phy1729
Image:Radiosity - RRV, step 79.png Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_-_RRV,_step_79.png License: Creative Commons Attribution-Sharealike 3.0
Contributors: DaBler, Kri, McZusatz
Image:Radiosity Comparison.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_Comparison.jpg License: GNU Free Documentation License Contributors: Hugo Elias
(myself)
Image:Radiosity Progress.png Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_Progress.png License: GNU Free Documentation License Contributors: Hugo Elias (myself)
File:Nusselt analog.svg Source: http://en.wikipedia.org/w/index.php?title=File:Nusselt_analog.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Jheald
Image:Utah teapot simple 2.png Source: http://en.wikipedia.org/w/index.php?title=File:Utah_teapot_simple_2.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Dhatfield
File:Recursive raytrace of a sphere.png Source: http://en.wikipedia.org/w/index.php?title=File:Recursive_raytrace_of_a_sphere.png License: Creative Commons Attribution-Share Alike
Contributors: Tim Babb
File:Ray trace diagram.svg Source: http://en.wikipedia.org/w/index.php?title=File:Ray_trace_diagram.svg License: GNU Free Documentation License Contributors: Henrik
File:BallsRender.png Source: http://en.wikipedia.org/w/index.php?title=File:BallsRender.png License: Creative Commons Attribution 3.0 Contributors: Averater, Magog the Ogre, 1
anonymous edits
File:Ray-traced steel balls.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Ray-traced_steel_balls.jpg License: GNU Free Documentation License Contributors: Original uploader
was Greg L at en.wikipedia (Original text : Greg L)
File:Glass ochem.png Source: http://en.wikipedia.org/w/index.php?title=File:Glass_ochem.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Purpy Pupple
File:PathOfRays.svg Source: http://en.wikipedia.org/w/index.php?title=File:PathOfRays.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Traced by User:Stannered,
original by en:user:Kolibri
Image:Refl sample.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Refl_sample.jpg License: Public Domain Contributors: Lixihan
Image:Mirror2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Mirror2.jpg License: Public Domain Contributors: Al Hart
Image:Metallic balls.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Metallic_balls.jpg License: Public Domain Contributors: AlHart
Image:Blurry reflection.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Blurry_reflection.jpg License: Public Domain Contributors: AlHart
Image:Glossy-spheres.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Glossy-spheres.jpg License: Public Domain Contributors: AlHart
Image:Spoon fi.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Spoon_fi.jpg License: GNU Free Documentation License Contributors: User Freeformer on en.wikipedia
Image:cube mapped reflection example.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Cube_mapped_reflection_example.jpg License: GNU Free Documentation License
Contributors: User TopherTG on en.wikipedia
Image:Cube mapped reflection example 2.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Cube_mapped_reflection_example_2.JPG License: Public Domain Contributors: User
Gamer3D on en.wikipedia
File:Render Types.png Source: http://en.wikipedia.org/w/index.php?title=File:Render_Types.png License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0 Contributors: Maximilian
Schönherr
Image:Cg-jewelry-design.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Cg-jewelry-design.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
http://www.alldzine.com
File:Latest Rendering of the E-ELT.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Latest_Rendering_of_the_E-ELT.jpg License: unknown Contributors: Swinburne Astronomy
Productions/ESO
Image:SpiralSphereAndJuliaDetail1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:SpiralSphereAndJuliaDetail1.jpg License: Creative Commons Attribution 3.0 Contributors:
Robert W. McGregor Original uploader was Azunda at en.wikipedia
File:ESTCube orbiidil 2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:ESTCube_orbiidil_2.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Quibik,
Utvikipedist
File:Screen space ambient occlusion.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Screen_space_ambient_occlusion.jpg License: Public domain Contributors: Vlad3D at
en.wikipedia
Image:7fin.png Source: http://en.wikipedia.org/w/index.php?title=File:7fin.png License: GNU Free Documentation License Contributors: Original uploader was Praetor alpha at en.wikipedia
Image:3noshadow.png Source: http://en.wikipedia.org/w/index.php?title=File:3noshadow.png License: GNU Free Documentation License Contributors: Original uploader was Praetor alpha at
en.wikipedia
Image:1light.png Source: http://en.wikipedia.org/w/index.php?title=File:1light.png License: GNU Free Documentation License Contributors: Original uploader was Praetor alpha at
en.wikipedia. Later version(s) were uploaded by Solarcaine at en.wikipedia.
Image:2shadowmap.png Source: http://en.wikipedia.org/w/index.php?title=File:2shadowmap.png License: GNU Free Documentation License Contributors: User Praetor alpha on
en.wikipedia
Image:4overmap.png Source: http://en.wikipedia.org/w/index.php?title=File:4overmap.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Original uploader
was Praetor alpha at en.wikipedia
Image:5failed.png Source: http://en.wikipedia.org/w/index.php?title=File:5failed.png License: GNU Free Documentation License Contributors: Original uploader was Praetor alpha at
en.wikipedia
Image:Shadow volume illustration.png Source: http://en.wikipedia.org/w/index.php?title=File:Shadow_volume_illustration.png License: GNU Free Documentation License Contributors:
User:Rainwarrior
File:Specular highlight.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Specular_highlight.jpg License: GNU Free Documentation License Contributors: Original uploader was
Reedbeta at en.wikipedia
Image:Specular highlight.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Specular_highlight.jpg License: GNU Free Documentation License Contributors: Original uploader was
Reedbeta at en.wikipedia
Image:Stencilb&w.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Stencilb&w.JPG License: GNU Free Documentation License Contributors: Levj, 1 anonymous edits
File:3D von Neumann Stencil Model.svg Source: http://en.wikipedia.org/w/index.php?title=File:3D_von_Neumann_Stencil_Model.svg License: Creative Commons Attribution 3.0
Contributors: Gentryx
File:2D von Neumann Stencil.svg Source: http://en.wikipedia.org/w/index.php?title=File:2D_von_Neumann_Stencil.svg License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:2D_Jacobi_t_0000.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0000.png License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:2D_Jacobi_t_0200.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0200.png License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:2D_Jacobi_t_0400.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0400.png License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:2D_Jacobi_t_0600.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0600.png License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:2D_Jacobi_t_0800.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_0800.png License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:2D_Jacobi_t_1000.png Source: http://en.wikipedia.org/w/index.php?title=File:2D_Jacobi_t_1000.png License: Creative Commons Attribution 3.0 Contributors: Gentryx
file:Moore_d.gif Source: http://en.wikipedia.org/w/index.php?title=File:Moore_d.gif License: Public Domain Contributors: Bob
file:Vierer-Nachbarschaft.png Source: http://en.wikipedia.org/w/index.php?title=File:Vierer-Nachbarschaft.png License: Public Domain Contributors: Ma-Lik, Zefram
file:3D_von_Neumann_Stencil_Model.svg Source: http://en.wikipedia.org/w/index.php?title=File:3D_von_Neumann_Stencil_Model.svg License: Creative Commons Attribution 3.0
Contributors: Gentryx
file:3D_Earth_Sciences_Stencil_Model.svg Source: http://en.wikipedia.org/w/index.php?title=File:3D_Earth_Sciences_Stencil_Model.svg License: Creative Commons Attribution 3.0
Contributors: Gentryx
File:Catmull-Clark subdivision of a cube.svg Source: http://en.wikipedia.org/w/index.php?title=File:Catmull-Clark_subdivision_of_a_cube.svg License: GNU Free Documentation License
Contributors: Ico83, Kilom691, Mysid, Zundark

Image:ShellOpticalDescattering.png Source: http://en.wikipedia.org/w/index.php?title=File:ShellOpticalDescattering.png License: Creative Commons Attribution-Sharealike 3.0
Contributors: Meekohi
Image:Subsurface scattering.png Source: http://en.wikipedia.org/w/index.php?title=File:Subsurface_scattering.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Piotrek Chwała
Image:Sub-surface scattering depth map.svg Source: http://en.wikipedia.org/w/index.php?title=File:Sub-surface_scattering_depth_map.svg License: Public Domain Contributors: Tinctorius
Image:VoronoiPolygons.jpg Source: http://en.wikipedia.org/w/index.php?title=File:VoronoiPolygons.jpg License: Creative Commons Zero Contributors: Kmk35
Image:ProjectorFunc1.png Source: http://en.wikipedia.org/w/index.php?title=File:ProjectorFunc1.png License: Creative Commons Zero Contributors: Kmk35
Image:Texturedm1a2.png Source: http://en.wikipedia.org/w/index.php?title=File:Texturedm1a2.png License: GNU Free Documentation License Contributors: Anynobody
Image:Bumpandopacity.png Source: http://en.wikipedia.org/w/index.php?title=File:Bumpandopacity.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Anynobody
Image:Perspective correct texture mapping.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Perspective_correct_texture_mapping.jpg License: Public Domain Contributors:
Rainwarrior
Image:Texturemapping subdivision.svg Source: http://en.wikipedia.org/w/index.php?title=File:Texturemapping_subdivision.svg License: Public Domain Contributors: Arnero
Image:Ahorn-Maser Holz.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Ahorn-Maser_Holz.JPG License: GNU Free Documentation License Contributors: Das Ohr, Ies,
Skipjack, Wst
Image:Texture spectrum.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Texture_spectrum.jpg License: Public Domain Contributors: Jhhays
Image:UVMapping.png Source: http://en.wikipedia.org/w/index.php?title=File:UVMapping.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: Tschmits
Image:UV mapping checkered sphere.png Source: http://en.wikipedia.org/w/index.php?title=File:UV_mapping_checkered_sphere.png License: Creative Commons Attribution-ShareAlike 3.0
Unported Contributors: Jleedev
Image:Cube Representative UV Unwrapping.png Source: http://en.wikipedia.org/w/index.php?title=File:Cube_Representative_UV_Unwrapping.png License: Creative Commons
Attribution-Sharealike 3.0 Contributors: - Zephyris Talk. Original uploader was Zephyris at en.wikipedia
File:Two rays and one vertex.png Source: http://en.wikipedia.org/w/index.php?title=File:Two_rays_and_one_vertex.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
CMBJ
File:Polygon mouths and ears.png Source: http://en.wikipedia.org/w/index.php?title=File:Polygon_mouths_and_ears.png License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:Azylber
File:Vertex normals.png Source: http://en.wikipedia.org/w/index.php?title=File:Vertex_normals.png License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Anders
Sandberg
File:ViewFrustum.svg Source: http://en.wikipedia.org/w/index.php?title=File:ViewFrustum.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:MithrandirMage
Image:CTSkullImage.png Source: http://en.wikipedia.org/w/index.php?title=File:CTSkullImage.png License: Public Domain Contributors: Original uploader was Sjschen at en.wikipedia
Image:CTWristImage.png Source: http://en.wikipedia.org/w/index.php?title=File:CTWristImage.png License: Public Domain Contributors: http://en.wikipedia.org/wiki/User:Sjschen
Image:Croc.5.3.10.a gb1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Croc.5.3.10.a_gb1.jpg License: Copyrighted free use Contributors: stefanbanev
Image:volRenderShearWarp.gif Source: http://en.wikipedia.org/w/index.php?title=File:VolRenderShearWarp.gif License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0
Contributors: Original uploader was Lackas at en.wikipedia
Image:MIP-mouse.gif Source: http://en.wikipedia.org/w/index.php?title=File:MIP-mouse.gif License: Public Domain Contributors: Original uploader was Lackas at en.wikipedia
Image:Big Buck Bunny - forest.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Big_Buck_Bunny_-_forest.jpg License: unknown Contributors: Blender Foundation / Project Peach
Image:voxels.svg Source: http://en.wikipedia.org/w/index.php?title=File:Voxels.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Pieter Kuiper, Vossman
Image:Ribo-Voxels.png Source: http://en.wikipedia.org/w/index.php?title=File:Ribo-Voxels.png License: Creative Commons Attribution-Sharealike 2.5 Contributors: TimVickers, Vossman
Image:Z buffer.svg Source: http://en.wikipedia.org/w/index.php?title=File:Z_buffer.svg License: Creative Commons Attribution-Sharealike 3.0 Contributors: -Zeus-
Image:Z-fighting.png Source: http://en.wikipedia.org/w/index.php?title=File:Z-fighting.png License: Public domain Contributors: Mhoskins at en.wikipedia
Image:ZfightingCB.png Source: http://en.wikipedia.org/w/index.php?title=File:ZfightingCB.png License: Public domain Contributors: CompuHacker (talk)

License
Creative Commons Attribution-Share Alike 3.0
http://creativecommons.org/licenses/by-sa/3.0/
