
A software processor developed at RPI adds color-shaded imaging capability to 3-D geometric modeling systems, greatly enhancing their usefulness in industrial design applications.

Toward Fast Color-Shaded Images of CAD/CAM Geometry

Paolo Sabella and Michael J. Wozny
Rensselaer Polytechnic Institute

The growing demand for advanced 3-D geometric modeling capabilities in mechanical CAD/CAM systems stems from the need to attack complex design more directly and automatically. One aspect of this need is a more user-friendly interface.

It is becoming increasingly clear that an effective human-computer interface for handling complex 3-D geometry must include a color-shaded image capability in addition to a highly interactive line-drawing capability. Just about all of the CAD/CAM vendors have taken the first step and offer a color line-drawing capability. Although only a few vendors are addressing the problem of generating and manipulating realistic-looking, 3-D, color-shaded geometry today, this capability will hit the commercial market in the not too distant future.

This article describes a software processor for rendering high-quality, color-shaded "snapshots" directly and rapidly from commercial 3-D CAD/CAM systems. Fast rendering is obtained through speed enhancement in the vector-to-raster scan conversion algorithm and through the use of new IBM hardware (DACU), which allows, among other things, graphic devices to be connected directly to an IBM mainframe channel.

Importance of color-shaded images

This section describes several industrial applications where color-shaded images are useful. Although several geometric models were originally created by the industrial sponsors of the RPI research program, all images with the exception of Figure 7 were computed at RPI using the software processor discussed here. The hardware complement used for this work is described in Appendix A.

Assemblies. Complex mechanical assemblies, especially those which consist of densely packed subassemblies, represent a natural application area of solid geometry modeling, because the assembly-related issues such as fitting components together, checking interferences, or lining up holes are basically all volumetric in nature. Solid models of real assemblies generally consist of hundreds of individual primitives, making visualization extremely difficult on a line-drawing system, even with the hidden lines removed.

Consider, for example, the solid geometry model of the power supply assembly shown in Figure 1.1 Although this model consists of only 114 primitives, it is essentially incomprehensible when all edges of all the primitives are displayed (Figure 1a), discernible with the hidden lines removed (Figure 1b), and easily understood in the color-shaded form (Figure 1c). In this case the CPU time needed to calculate the hidden-line images was approximately 10 times longer than that required to compute the color-shaded, hidden-surface image, assuming in both cases that all Boolean operations had previously been completed and a boundary file computed. Of course, such a calculation is very dependent on the algorithms used and on the complexity of the model (4000 polygons in the above case). But the fact remains that fast color-shaded algorithms can provide useful results on real models in a reasonable time.

Color-shaded rendering of solid and surface geometry is also very useful in industrial design applications such as the evaluation of new product mock-ups for aesthetic appeal.

Surface features. In applications where the shape of a surface is important, it is often necessary to ascertain the presence and nature of surface features such as regions of

undesirable curvature (creases or flat areas) and silhou-
ette edges. In renderings on vector displays, the number
of vectors drawn is generally kept to a minimum to avoid
flicker and/or clutter on the display. Thus, unusual
features are not usually apparent to the designer.
Figure 2 shows a surface in which undesirable wrinkles
are present. The wireframe representation hides the true
shape of the surface, while the color-shaded representa-
tion clearly shows the surface structure inside the bound-
ary. True, one can always examine the internal surface
shape in wireframe by adding more vectors, but the im-
age becomes very cluttered, making the nature of the
shape hard to understand. Furthermore, it is not always
apparent from a first look at a wireframe image that
more detail will reveal possible problems.
Figure 3 depicts adjacent surface patches with a slope
discontinuity along the common boundary. Again, this
mistake could easily be missed in the complex wireframe
representation, while the color-shaded image actually ac-
centuates the feature.
Color-shaded representations also allow the user to
determine easily if all the surface patches necessary to
completely define the part have been created. The

Figure 1. Power supply assembly model displayed with all edges shown (a), with hidden lines removed (b), and in color-shaded raster form (c).

Figure 2. Wrinkled surface: vector rendering (a); raster rendering (b).

wireframe bulkhead in Figure 4a has one ruled surface not defined, which is very evident in Figure 4b.

To examine surface features, one needs proper highlighting and reflections. One can draw an analogy to looking down the fender of a used car to determine if the car has ever been involved in an accident. The Klein bottle in Figure 5 has sufficient highlights to impart a clear understanding of the surface shape. For engineering purposes, it is the location of the resulting highlight on the surfaces that is important, and not the location of the light sources themselves. Yet most systems do the opposite: they force the engineer to position the light sources first, and then determine if the resulting highlights show off the desired surface properties.

Software processor architecture

Figure 6 outlines the information flow necessary to render a color-shaded image of CAD geometry. The first step is to extract the geometry from a CAD system. Next, the geometry is approximated with a polygonal model, and then color and lighting attributes of the final image are added. For each pixel, the scan conversion module determines which of the overlapping surfaces is visible and its color.

Color and illumination attributes

In a color raster environment, a user must specify display parameters never used in a vector system. These are the background color, the color and position of the light sources, and the illumination attributes associated with surfaces.

The illumination attributes consist of a color for the diffused reflection, a color for the specular reflection, and a parameter describing the angular spread of the reflection. Defining a color depends on the reflectance model used. It can range in complexity from specifying simply the red, green, and blue components of the color, to inputting the entire reflectance spectrum as a function of incidence angle (a three-dimensional graph).2 The shading scheme used in the RPI processor requires 10 input parameters. The RGB values range from 0 to 1 and can be arrived at interactively by a mixing procedure.3
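As a rough illustration, one such set of illumination attributes could be held in a small record. The following C sketch assumes one plausible split of the 10 parameters (diffuse, specular, and ambient RGB triples plus a spread exponent); this split, like all the names used, is a guess for illustration rather than the actual RPI parameter list.

#include <stdio.h>

/* Hypothetical layout of a surface illumination-attribute record.  The RPI
 * shading scheme takes 10 input parameters; the split used here is a guess. */
typedef struct {
    char  name[32];       /* e.g., "blue plastic" or "dark copper"     */
    float diffuse[3];     /* diffuse reflection color, RGB in 0..1     */
    float specular[3];    /* specular reflection color, RGB in 0..1    */
    float ambient[3];     /* assumed ambient component, RGB in 0..1    */
    float spread;         /* angular spread of the specular reflection */
} IllumAttributes;

int main(void)
{
    IllumAttributes copper = { "dark copper",
                               { 0.55f, 0.25f, 0.10f },
                               { 0.90f, 0.60f, 0.35f },
                               { 0.10f, 0.05f, 0.02f },
                               20.0f };
    printf("%s: diffuse R = %.2f\n", copper.name, copper.diffuse[0]);
    return 0;
}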

Figure 3. Wheels: vector rendering (a); raster rendering (b).

Figure 4. An N/C part (bulkhead): vector rendering (a); raster rendering (b).

(For scan-line-rendering algorithms, the illumination model consists of reflection parameters. Ray-tracing algorithms, on the other hand, allow additional parameters to be defined, such as the amount of light reflected, i.e., mirror reflection, the amount of light transmitted, and the index of refraction.)

A specific combination of illumination attributes is called a display type. Once a combination has been determined, it can be archived and referred to by a name such as "blue plastic" or "dark copper," much like the use of a name such as "dashed" or "dotted" for a line type in a vector environment.

To simplify the rendering of a model, one can specify a default display type for all surfaces, a default position for the light sources, and a default background color. However, to take full advantage of the color "snapshot," the user needs the ability to selectively assign display types within the model and to add, remove, and position light sources.

Accessing geometry

The RPI software processor is used for rendering surfaces and solids that have boundary representations. For solids defined in terms of a CSG (constructive solid geometry) tree, a boundary representation would have to be computed. This capability generally exists in CSG modelers for NC machining and display.4 (Atherton5 and Edwards6 present rendering techniques that work directly on CSG trees.)

Many commercial CAD/CAM packages provide interface capabilities that allow external programs to access the system's geometric database. The information that must be extracted for the color rendering consists of the following.

Geometry-defining data. The data defining geometry have to be put into the form expected as input in the polygonal approximation module. Examples of data needed to approximate surfaces and solids are given in the next section, "Polygonal approximation."

Display type associativities. Each entity has to be associated with a display type. Since there are usually many more entities than display types, it is unnecessary (and inconvenient) to require the user to specify a display type for each entity. Instead, all entities are given a default display type, which the user overrides by using the set or group facility of the geometry database.

In addition to grouping entities for the convenience of specifying display types, the set or group facility can be used to store the display type associativities. For example, if the group name is an alphanumeric string, the string itself can spell the name of the display type, such as "copper" or "steel."

View definition. In order to render the model in the same position as it appears on the vector screen, it is necessary to retrieve the view transformation and perspective data being used by the display routines of the CAD system. If this information is not available, then the view must be specified by the user or a default view chosen.

The geometry access module can reside within the CAD system if one wishes to have the same scene on both the vector and raster displays. While not essential, it may be desirable to take advantage of existing menu capabilities by placing the color attribute module within the CAD system. The polygonal approximation and scan conversion modules can be separate tasks, preferably in specialized hardware. This makes it possible to continue designing on the CAD system while the image is being computed.

At RPI, a raster capability has been integrated independently into two CAD modeling systems. In the case of the solid modeler GDP,1 menus were added to guide the user through every desired scenario for creating and

Figure 5. Klein bottle.

Figure 6. System information flow.

assigning display types, as well as positioning the light source. The scan conversion and shading module (see "Raster4K rendering algorithm" below) was integrated directly into the display driver.

In the case of NCAD,7 a surface modeler, the display type was associated with the geometry via the existing group entity, by using the characters '*C' at the beginning of the group name. The rest of the group name contains the 10 parameters defining the diffused and specular colors. Similarly, light sources are lines in the database placed in a group with '*L' at the beginning of the group name. The rest of the name is the RGB color and intensity of the light.

With this naming convention, only one routine was added to NCAD; it accesses the database and writes to a file the surface geometry, the corresponding display types, the viewpoint, and the light sources. This routine performs the functions of a geometry access and a color attribute module. The polygonal approximation module and scan converter reside in a separate task, which reads in the file and produces the raster image, freeing the CAD system while the image is being computed. It also permits the user to input other display information, such as the screen format, background color, ambient reflectance component, and the resolution desired on output, if defaults for these are not desired.
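The exact field layout of the '*C' and '*L' group names above is not given in the article; the following C sketch assumes, purely for illustration, that the values simply follow the two-character prefix as space-separated numbers.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PARAMS 10

/* Hypothetical parser for the group-name convention described above.
 * Returns the number of values parsed, or -1 if the name carries none. */
int parse_group_name(const char *name, char *kind, double params[MAX_PARAMS])
{
    if (strncmp(name, "*C", 2) == 0)      *kind = 'C';   /* display type */
    else if (strncmp(name, "*L", 2) == 0) *kind = 'L';   /* light source */
    else return -1;

    int n = 0;
    const char *p = name + 2;
    char *end;
    while (n < MAX_PARAMS) {
        double v = strtod(p, &end);
        if (end == p) break;                 /* no more numbers */
        params[n++] = v;
        p = end;
    }
    return n;
}

int main(void)
{
    double params[MAX_PARAMS];
    char kind;
    int n = parse_group_name("*L 1.0 0.9 0.8 0.75", &kind, params);
    printf("kind=%c, %d values, first=%.2f\n", kind, n, params[0]);
    return 0;
}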
Polygonal approximation

This step could be avoided with a scan conversion routine that can simultaneously render entities from each class of geometry used in the CAD/CAM system. For the RPI processor, however, a scan conversion routine that displays polygonal surfaces was chosen on the basis of the following criteria:

* simplicity: to facilitate the development of the Raster4K technique (see below);
* robustness: numerical techniques such as those described by Lane et al.8 are not guaranteed to converge on complicated surfaces;
* efficiency: all operations can be converted to fixed-point arithmetic for performance gains (this was not done in the present implementation).

The curved geometry of existing modelers can readily be approximated by polygonal tiles. As is well known, one can generate a smooth shaded image that does not have the appearance of faceted edges simply by using, for example, normal vector interpolation in the shading algorithm and sufficiently small polygons properly placed on a surface.9

Other functions performed by the polygonal approximation module are the transformation of the geometry into the screen coordinate system and the triangulation of the polygons.

Curved surfaces. Parametrically defined surfaces can be naively approximated by samples spaced evenly within the parameter domain. This, however, leads to oversampling of flat regions and small objects, while areas of high curvature appear faceted. A recursive subdivision technique, which subdivides the surface based on curvature criteria, overcomes these difficulties. For tensor product surfaces, Lane et al.8-10 give a technique for subdividing the surface based on tolerances of planarity of the surface and curvature of the edges.

The subdivision algorithm performs operations on the coefficient matrix of surfaces defined in terms of basis functions that are the Bernstein polynomials.11 Other common basis functions can be easily converted to the Bernstein basis. For example, if the bicubic surfaces are expressed in terms of the power basis, i.e., by the 4 x 4 x 3 matrix P,

    X(u,v) = [u³ u² u 1] P [v³ v² v 1]ᵀ = u P vᵀ

where u = [u³ u² u 1] and v = [v³ v² v 1], then the geometry access module implicitly converts the basis functions into Bernstein polynomials via the 4 x 4 transformation M,

    ū = uM,   v̄ = vM

where

    M = | -1   3  -3   1 |
        |  3  -6   3   0 |
        | -3   3   0   0 |
        |  1   0   0   0 |

The resulting transformed representation is

    X(u,v) = ū M⁻¹ P (Mᵀ)⁻¹ v̄ᵀ = ū B v̄ᵀ,   where   B = M⁻¹ P (Mᵀ)⁻¹.

The resulting matrix B contains the coefficients needed by the polygonal approximation module, namely the vertices of the control polyhedron surrounding the surface.
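A minimal C sketch of this conversion for one coordinate of a patch follows. M⁻¹ is written out in closed form for the monomial ordering [u³ u² u 1] used above, and the routine forms B = M⁻¹ P (Mᵀ)⁻¹ with ordinary 4 x 4 multiplies; the 4 x 4 x 3 case would simply repeat this for the x, y, and z coordinates.

#include <stdio.h>

/* M^-1 for the monomial ordering u = [u^3 u^2 u 1]. */
static const double MINV[4][4] = {
    { 0.0, 0.0,     0.0,     1.0 },
    { 0.0, 0.0,     1.0/3.0, 1.0 },
    { 0.0, 1.0/3.0, 2.0/3.0, 1.0 },
    { 1.0, 1.0,     1.0,     1.0 }
};

static void mat4_mul(const double a[4][4], const double b[4][4], double r[4][4])
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            r[i][j] = 0.0;
            for (int k = 0; k < 4; k++)
                r[i][j] += a[i][k] * b[k][j];
        }
}

/* B = M^-1 * P * (M^-1)^T : Bezier control coefficients of one coordinate. */
void power_to_bernstein(const double P[4][4], double B[4][4])
{
    double T[4][4], MINVT[4][4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            MINVT[i][j] = MINV[j][i];
    mat4_mul(MINV, P, T);
    mat4_mul(T, MINVT, B);
}

int main(void)
{
    /* Example: the patch x(u,v) = u*v has a single 1 in the (u, v) slot of
     * its power matrix; the resulting Bezier coefficients are i*j/9. */
    double P[4][4] = {{0}}, B[4][4];
    P[2][2] = 1.0;
    power_to_bernstein(P, B);
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++)
            printf("%7.3f", B[i][j]);
        printf("\n");
    }
    return 0;
}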
The Lane-Carpenter subdivision scheme was used in the approximation algorithm implemented for bicubic surfaces in CATIA (a geometric modeler written by Dassault Aircraft in France and marketed in the US by IBM) and for the rational bicubics in NCAD. For rationals the coefficient matrix is 4 x 4 x 4, and entries corresponding to the control polyhedron vertices must first be projected into Euclidean 3-space before a planarity test such as the one described by Lane and Carpenter10 is performed.

Appendix B explains the subdivision process in more detail and gives a method for filling in the surface cracks that appear when adjacent portions of the underlying uv domain are subdivided to different levels.

Polyhedral solids. Polyhedral solid models are already bounded by planar facets. If some facets are originally derived from curved surfaces, it is necessary to have a normal vector associated with each vertex on those facets. The normals are used later to perform smooth shading across the facet, giving the appearance of a curved surface. If the normal to the actual curved surface is not available, then it can be approximated by averaging the normals of all facets sharing the vertex. Care must be taken not to average in facets actually belonging to
flat surfaces that happen to be adjacent to a curved surface approximation. Otherwise the result will be shading that gives the appearance of curved surfaces even where the solid is flat.

Thus, the data needed from the geometry access module is a list of faces, each containing one or more boundary curves, as well as one bit of information per face indicating whether it is a curved approximation. Each boundary curve is a list of vertices.
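A minimal C sketch of this face data and of the vertex-normal averaging just described follows; the structure layout, size limits, and names are illustrative only.

#include <math.h>

#define MAX_VERTS 1024

typedef struct { double x, y, z; } Vec3;

typedef struct {
    int  nverts;
    int  vert[8];         /* indices into the vertex table                */
    Vec3 plane_normal;    /* facet plane normal                           */
    int  curved;          /* 1 if the facet approximates a curved surface */
} Face;

/* Approximate each vertex normal by averaging the plane normals of the
 * curved-approximation facets that share the vertex; facets flagged as flat
 * are skipped so they do not smear shading onto genuinely flat regions. */
void average_vertex_normals(const Face *faces, int nfaces,
                            Vec3 vnormal[MAX_VERTS], int nverts)
{
    for (int v = 0; v < nverts; v++)
        vnormal[v] = (Vec3){ 0.0, 0.0, 0.0 };

    for (int f = 0; f < nfaces; f++) {
        if (!faces[f].curved)
            continue;                          /* skip flat facets */
        for (int k = 0; k < faces[f].nverts; k++) {
            int v = faces[f].vert[k];
            vnormal[v].x += faces[f].plane_normal.x;
            vnormal[v].y += faces[f].plane_normal.y;
            vnormal[v].z += faces[f].plane_normal.z;
        }
    }
    for (int v = 0; v < nverts; v++) {         /* normalize the sums */
        double len = sqrt(vnormal[v].x * vnormal[v].x +
                          vnormal[v].y * vnormal[v].y +
                          vnormal[v].z * vnormal[v].z);
        if (len > 0.0) {
            vnormal[v].x /= len;
            vnormal[v].y /= len;
            vnormal[v].z /= len;
        }
    }
}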
Scan conversion

The generation of a raster image consists of computing a color value for each pixel on the display. The quality of the image is directly proportional to the amount of computation performed for each pixel. Thus, on a scale of image quality, low-interactivity techniques lie at the high end, and high-interactivity techniques lie at the low end.

Ray tracing. The technique best known for high-quality rendering is called ray tracing,6,12 which generates an image by back-tracking the light rays emerging from each pixel. This produces a highly realistic image since shadows and reflections can be displayed. However, image generation can take a large amount of CPU time because the intersection of a line with a surface has to be computed many times for each pixel. Computation times on the order of several hours to tens of hours are common for ray-traced images. Figure 7 displays a ray-traced rendering of a sphere and a cylinder on a shiny surface. This image took 1½ hours to compute on an IBM 4341 II computer, under CMS with eight megabytes of memory.

Faster rendering. There are a number of well-established raster rendering techniques, such as the z-buffer, the list priority, the scan-line, and the Warnock algorithms.13 These algorithms all take advantage of some form of coherence in the scene. That is, they use the results of a previous calculation as an aid in the current calculation.

They are known as hidden-surface algorithms, since the most important problem to be solved for generating an image is to determine at each pixel which surfaces are visible. Once the visible surfaces are known, each surface is shaded independently. As a result, shadows and reflections are not handled. (By computing the scene from the viewpoint of each light source, shadows can be created as black surfaces,14 but this is a preprocessing step and not part of the image generation itself.)

Figure 8 is a rendering of the scene in Figure 7 by a scan-line algorithm. Note that a smooth reflective surface such as the cylinder in Figure 7 is perceived only by means of reflections from other objects in the scene. The shading schemes used in techniques other than ray tracing can model at best the scattered reflections found on rough surfaces, i.e., specular reflections.

Even though these algorithms are much faster than ray tracing, at present they still take several minutes for reasonable-quality images (see Table 1).

Special-purpose hardware. The image-generation time can be greatly reduced by the use of special-purpose hardware. It is reasonable to expect that raster image generation can be delegated to the display device just as vector image generation can. Several systems are already available; for example, the Evans & Sutherland CT-5 Simulator can render around 30 frames a second. Others more appropriate for mechanical design include the Lexidata Solidview, the Cubic Systems CS-3, and the Adage 3000. None of these do ray tracing, but they can generate a scene in seconds. They achieve this speed over a host computer by eliminating the wait on page faults and by using microcoded programs on special-purpose hardware.

Raster4K rendering algorithm

The Raster4K rendering algorithm was designed to generate reasonably high-quality images for use in mechanical design. In determining how much quality is necessary, one must consider the fact that in a design environment speed is of utmost importance. However, the
Figure 7. Ray-traced image of sphere and cylinder.

Figure 8. Objects in Figure 7 rendered by scan-line algorithm.

inherent limitation of relatively low resolution must also be taken into account. The guiding criterion for the development of Raster4K was to minimize low-resolution artifacts such as aliasing (staircasing) and Mach banding, which tend to hide details, while simultaneously keeping down the response time. Thus, a ray-tracing technique was not chosen for this application because shadows and reflections are too expensive in computation for the net gain in engineering information.

Raster4K uses a look-up table technique for performing antialiasing, typically up to a resolution of 4K x 4K on a 512 x 512 display. It is essentially a discrete version of Catmull's algorithm,15 which weights the color of a pixel by the area of each edge covering it. (Although the ideas were obtained independently, a recently published paper by Fiume et al.16 presents a similar approach.)

Description of algorithm. For the purposes of this article, a scan line is defined as a horizontal row of pixels, and the scan conversion process proceeds from top to bottom. A scan-line algorithm solves the hidden-surface problem one scan line at a time, taking advantage of coherence of edges from one scan line to the next. For example, in order to scan-convert a convex polygon, the algorithm encounters the peak vertex first. On each subsequent scan line, the polygon can be filled in between the left and right edges, which are incrementally computed. The algorithm ceases to consider the polygon when the lowest vertex is encountered. For details on particular implementations of scan-line algorithms, see Sutherland et al.13 The technique used to obtain high resolution is described next.

The input to the algorithm is in the form of polygons, which must be broken up into triangles. This incurs the additional costs of triangulating and storing the triangles and processing more edges. On the other hand, using triangles simplifies the method, eliminates sorting in X to insert a span, and gives shading independent of orientation of the object.17

Initialization. Starting with triangles transformed into the image coordinate system and lying within the window Npix by Npix, one performs a bucket sort18 on the peak vertex of each triangle by inserting each triangle into a linked list. There is one list for each scan line (see Figure 9). The array of lists is called the Y-bucket.
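A short C sketch of this Y-bucket initialization follows; the field names and the assumption that y grows downward from the top of the window are illustrative.

#define NPIX 512

typedef struct { double x, y, z; } Vertex;

typedef struct Triangle {
    Vertex v[3];
    struct Triangle *next;        /* link within one Y-bucket list */
} Triangle;

static Triangle *ybucket[NPIX];   /* one list head per scan line */

/* Scan line of the topmost (peak) vertex, clamped to the window. */
static int peak_scanline(const Triangle *t)
{
    double ymin = t->v[0].y;
    if (t->v[1].y < ymin) ymin = t->v[1].y;
    if (t->v[2].y < ymin) ymin = t->v[2].y;
    int line = (int)ymin;
    if (line < 0) line = 0;
    if (line >= NPIX) line = NPIX - 1;
    return line;
}

/* Bucket sort: push each triangle onto the list of its peak scan line. */
void ybucket_insert(Triangle *t)
{
    int line = peak_scanline(t);
    t->next = ybucket[line];
    ybucket[line] = t;
}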
In the next preprocessing step, one computes the look-up tables to be used for antialiasing. These are called the mask tables. A mask is an 8 x 8 array of subpixels used to represent edges within a pixel. There are two types: filled masks and line masks.
Figure 9. Y-bucket data structure.

Figure 10. Filled mask (a); line mask (b).

A filled mask is a 64-bit word containing all bits set when a left edge crosses the pixel (see Figure 10a). The corresponding filled mask for a right edge is obtained by taking the complement of the filled mask for a left edge.

A line mask is a 64-bit word with the bits set only along the edge (see Figure 10b). This is the same for left and right edges. Since the filled masks depend on the order in which the pixel is crossed, and there are 28 positions to enter a pixel and 28 positions to exit, the table size need only be 28 x 28 or 784 64-bit words long, i.e., 3136 bytes long. The following sections describe how these masks are used to perform clipping and area computation within a pixel. For a 512 x 512 image, the use of 8 x 8 masks gives an effective resolution of 4K x 4K for the computation of edges; the complexity of the algorithm remains on the order of 512 x 512, the resolution to which shading is computed. Hence the name Raster4K.
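The C fragment below only illustrates how these mask tables might be addressed: the 28 border subpixel positions at which an edge can enter and the 28 at which it can exit index a 28 x 28 table of 64-bit masks, and a right edge takes the complement of the left-edge fill mask. How the table contents are generated is not shown.

#include <stdint.h>

#define BORDER_POSITIONS 28   /* 8*4 border subpixels minus 4 shared corners */

static uint64_t fill_mask[BORDER_POSITIONS][BORDER_POSITIONS];  /* left edges  */
static uint64_t line_mask[BORDER_POSITIONS][BORDER_POSITIONS];  /* either edge */

/* Fill mask for an edge entering the pixel at border position `in` and
 * leaving at `out`; a right edge complements the left-edge entry. */
uint64_t lookup_fill_mask(int in, int out, int is_right_edge)
{
    uint64_t m = fill_mask[in][out];
    return is_right_edge ? ~m : m;
}

uint64_t lookup_line_mask(int in, int out)
{
    return line_mask[in][out];
}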
Processing. The following are the steps of processing:

(1) Initialize the active edge list (AEL), a linked list of data blocks containing information used in tracking an edge from one scan line to the next. A data block contains items such as a pointer to the triangle on which the edge lies, the destination vertex at which the edge terminates, the current x,y,z coordinates along the edge, and the current normal vector along the edge. In this algorithm these data blocks maintain the coherence information passed from one scan line to the next. We refer to them simply as active edges. The AEL contains all active edges crossing the current scan line.

(2) Process each scan line as follows:

(2.1) For each peak in the Y-bucket list, enter at the current scan line two edges (left and right) into the AEL. Left and right edges of the same triangle are always placed next to each other because of step 2.2.3 below.

(2.2) Process each edge in the AEL as follows:

(2.2.1) "Walk" the edge at the subpixel resolution until it is outside the pixel. A Bresenham algorithm19 is used for efficiency. The goal here is to obtain the subpixel coordinates at which the edge enters and exits the pixel. For edges starting inside the pixel, walk backwards to obtain the entry coordinates. This will be corrected for in 2.2.3 below.

(2.2.2) Look up the masks using the entry and exit subpixel coordinates as an address. Take the complement to obtain the fill mask of right edges.

(2.2.3) Enter the mask, z value, normal vector, and intensity into the X-bucket. The X-bucket is a data structure that serves as a z-buffer and contains additional information. It contains a linked list of edge data covering the pixel for each x coordinate on the current scan line.

If an edge belonging to the same triangle is the first entry in the X-bucket (this would occur because of adjacency in the AEL), take the union of the masks instead of creating a new entry. This corrects thin triangles and corners (see Figure 11). For triangles which are less than one subpixel thin, this operation would leave a zero mask. In this case, OR in the line mask of the edge so that thin edges do not completely disappear.

(2.2.4) If the edge has reached the pixel containing the end vertex of the edge, continue with a new edge beginning at this vertex. However, if this vertex is also the end vertex of the other edge in the same triangle, do not continue, but instead set the terminate flag.

(2.2.5) Repeat steps 2.2.1 through 2.2.4 until either the edge has entered a pixel off the scan line or the terminate flag is set.

(2.3) Fill in pixels between the right and left edges of the same triangle.

(2.4) Process each pixel in the X-bucket as follows:

(2.4.1) Process each edge in the pixel as follows (a sketch of steps 2.4.1.1 through 2.4.1.4 follows this list):

(2.4.1.1) Clip the edge against all edges that are in front of it in the same pixel; e.g., if A is behind B, set the mask belonging to A to Mask A = Mask A AND NOT (Mask B).

(2.4.1.2) Compute the exposed area of the edge by counting the number of bits set. This can be performed by a look-up operation on each byte by means of a bit-count table 256 entries long.

(2.4.1.3) If any bits remain set after clipping, then the edge is visible; compute the specular and diffused reflections for the edge.20

(2.4.1.4) The intensity given to the pixel is the sum of the intensity of each edge within the pixel weighted by the exposed area of the edge, including the background color weighted by the area not covered by any edge.
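The C sketch below illustrates steps 2.4.1.1 through 2.4.1.4: masks are clipped front to back, the exposed area is counted with a 256-entry per-byte bit-count table, and the pixel color is the area-weighted sum of the visible edge colors plus the uncovered background. For simplicity it assumes the pixel's edge entries are already ordered nearest first, whereas the X-bucket described above keeps z values and decides front and behind from them; all names are illustrative.

#include <stdint.h>

typedef struct { float r, g, b; } Color;

typedef struct {
    uint64_t mask;      /* 8x8 subpixel coverage of this edge in the pixel */
    Color    shade;     /* shaded color computed for this edge             */
} PixelEdge;

static unsigned char bitcount[256];     /* bits set in each possible byte */

void build_bitcount_table(void)         /* call once before shading       */
{
    for (int v = 0; v < 256; v++) {
        int n = 0;
        for (int b = v; b; b >>= 1)
            n += b & 1;
        bitcount[v] = (unsigned char)n;
    }
}

static int mask_area(uint64_t m)        /* number of subpixels covered    */
{
    int n = 0;
    for (int i = 0; i < 8; i++)
        n += bitcount[(m >> (8 * i)) & 0xFF];
    return n;
}

/* edges[0..nedges-1] ordered front (nearest) to back for this pixel. */
Color shade_pixel(const PixelEdge *edges, int nedges, Color background)
{
    uint64_t covered = 0;               /* union of all nearer masks      */
    float r = 0.0f, g = 0.0f, b = 0.0f;

    for (int i = 0; i < nedges; i++) {
        uint64_t visible = edges[i].mask & ~covered;   /* A AND NOT B     */
        int area = mask_area(visible);
        if (area > 0) {                                /* edge is visible */
            float w = area / 64.0f;
            r += w * edges[i].shade.r;
            g += w * edges[i].shade.g;
            b += w * edges[i].shade.b;
        }
        covered |= edges[i].mask;
    }
    float wbg = mask_area(~covered) / 64.0f;           /* background area */
    Color out = { r + wbg * background.r,
                  g + wbg * background.g,
                  b + wbg * background.b };
    return out;
}

Keeping each mask in a single 64-bit word is what makes both the clip (one AND NOT) and the area count cheap.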
Raster4K performance

By the use of an 8 x 8-bit mask within each pixel, the resolution at which edges in the scene are computed is always eight times higher than the resolution at which the shading is computed. If one is prepared to sacrifice quality for speed, then computing a scene to an edge resolution of 512 x 512 can be done by using the Raster4K algorithm on a 64 x 64 window and expanding the subpixels within each pixel to a 512 x 512 window. However, the 64 x 64 shading resolution is far from adequate.

Figure 12 is a series of renderings of the same model shown in Figure 2. The model consists of 80 rational bicubic patches subdivided to a planarity tolerance of one pixel into 20,780 triangles.

Table 1 lists the CPU times to compute these images. As a comparison, data is also shown for the model subdivided to different planarity tolerances. The setup time is the time taken to perform the polygonal approximation. For further comparison, computation times are given for other images shown in the figures in this article.
To obtain the various resolutions, the algorithm is
performed on a window the size of the shading resolu-
tion and then expanded into a window eight times the
size, i.e., the edge resolution. In order to be displayed
on a 512 x 512 display, the image is compressed by pixel
averaging.
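A small C sketch of this pixel-averaging compression, written for a single color channel and with illustrative buffer names, is shown below.

/* Reduce an image computed on a window `factor` times larger than the
 * display by averaging each factor-by-factor block into one output pixel. */
void compress_by_averaging(const float *src, int src_w, int src_h,
                           float *dst, int factor)
{
    int dst_w = src_w / factor;
    int dst_h = src_h / factor;
    for (int y = 0; y < dst_h; y++)
        for (int x = 0; x < dst_w; x++) {
            float sum = 0.0f;
            for (int j = 0; j < factor; j++)
                for (int i = 0; i < factor; i++)
                    sum += src[(y * factor + j) * src_w + (x * factor + i)];
            dst[y * dst_w + x] = sum / (float)(factor * factor);
        }
}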
For an edge resolution of 4K x 4K or higher (Figure 2b
is 16K x 16K displayed on a 2048 x 1536 film recorder),
the masks do not have to be expanded and then com-
pressed, because the color of the pixel can be computed
as described by counting the bits and weighting the areas.
The algorithm has been tailored to make the edge resolu-
tion of 4K most computationally cost effective. Thus, in-
creasing from 1K to 2K costs about 100 percent more
(one would expect 400 percent), while the jump from 2K
to 4K is relatively cheap. Notice, in the cases of Figures
4b and 8, that computing at 4K is actually cheaper than at
2K. These are cases where the higher resolution computa-
tion is less than expanding and averaging the subpixels.
Figure 11. Edge processing: mask for right edge (a); for left edge (b); final mask for pixel (c).

Table 1.
Timing data for images shown in this article.

                                                           COMPUTATION TIME AT VARIOUS EDGE RESOLUTIONS (CPU SECS)
        NUMBER OF   PLANARITY TOLERANCE   SETUP TIME      512       1024      2048      4096      8192
FIGURE  TRIANGLES   (PIXELS)              (CPU SECS)    x 512     x 1024    x 2048    x 4096    x 8192
3b      68,948      1                     116             135       230       471       590      1560
4b      11,692      1                       5              42       113       304       298       978
5       73,416      1                      34             132       233       518       584      1609
8       20,112      1                       9              43       100       246       197       599
2       38,176      0.5                    18              87       181       419       478      1368
2       20,780      1                       9              63       148       365       411      1245
2       11,744      1.5                     6              48       127       330       364      1153

Figure 12. Series of renderings of wrinkled surface in Figure 2 at various edge resolutions: 1024 x 1024 (a);
2048 x 2048 (b); 4096 x 4096 (c); 8192 x 8192 (d).

In practice it is better to simply display the image at lower resolution when a quick look at the image is needed. This avoids the need to expand and does not produce the effect present in low shading resolution akin to pixel replication. We have found that a 128 x 128 or a 256 x 256 window for an image on a 512 x 512 display conveys most of the essential feedback needed from the raster image; the edge resolution being eight times higher helps even more as the window size decreases. Besides speed, another advantage of this approach is the ability to see more than one view of a model, or different models, side by side for comparison on the same screen.

Conclusion

We have described the integration of color rendering into a CAD/CAM environment. Several areas are being explored further:

(1) The extension of the polyhedral approximation module to handle arbitrarily high-order surfaces as well as surfaces trimmed to generalized boundaries.
(2) The extension of the scan-conversion module to handle arbitrary polygons.
(3) The migration of the scan-conversion module into hardware.
(4) Finally, but by far the most important, the development of an environment in which an engineer without artistic experience or familiarity with the properties of light and color can obtain the most from a shaded image.

Hardware implementation will play a vital role in more user-friendly systems. As images are generated faster and closer to real time, dynamic changes in the image begin to be perceived as features. This is experienced by anyone who moves from a static vector display to a vector display with dynamic 3-D rotations. The motion adds a new dimension in the perception of shape.
In addition to dynamic motion, parameters such as the specular reflectivity of a surface and the position of the light source can be varied in real time to give better perception of a design.

Appendix A: hardware configuration

The software package of the RPI color-rendering processor is implemented on an IBM 4341 II. The frame buffer used is a Raster Technologies Model 1, which has a 512 x 512 resolution, 24 bits per pixel.

Communications to the frame buffer take place via a Device Attachment Control Unit (DACU),21 which supports parallel data transfers up to one megabyte per second. The output from the DACU is Unibus protocol. The time necessary to transmit an image over the channel in run-length encoded format is on the order of several seconds. Referring to Table 1, one can see that, compared to the CPU time for image computation, a few seconds of I/O to the display is not a bottleneck. In fact, in many cases, disk I/O is slower than transmission over the DACU/Unibus interface.
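As a simple illustration of the kind of run-length encoding mentioned, the following C sketch packs each run of equal pixel values into a (count, value) byte pair; the actual format used with the DACU is not described in the article.

#include <stddef.h>
#include <stdint.h>

/* Encode npix 8-bit pixels as (count, value) pairs; returns bytes written.
 * `out` must have room for 2*npix bytes in the worst case. */
size_t rle_encode(const uint8_t *pix, size_t npix, uint8_t *out)
{
    size_t o = 0, i = 0;
    while (i < npix) {
        uint8_t v = pix[i];
        size_t run = 1;
        while (i + run < npix && pix[i + run] == v && run < 255)
            run++;
        out[o++] = (uint8_t)run;
        out[o++] = v;
        i += run;
    }
    return o;
}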
Appendix B: fitting together patches after subdivision

A problem arises in subdividing a surface for approximation by four-sided "polygons" connecting the corner points. (Strictly speaking, these are not polygons because all four points do not lie in a plane. However, the 2-D projection of the four points can be treated as a polygon.) When the surface is subdivided to different levels, cracks appear. Figure 13 shows a configuration where a surface topologically represented as FDGH is subdivided twice in the upper right (ABCD) and once in the other three quadrants.

A crack will appear along AMB if FABE is entered as a four-sided polygon because the point at M does not necessarily lie on the straight line connecting A and B. It lies on the surface.

Figure 13. Subdivision of a surface.

These cracks can be avoided by using a data structure that allows all vertices created during subdivision to be interconnected in a network. Thus, each vertex is represented by a four-direction link (up, down, left, and right), which is inserted into the network upon every subdivision. An advantage of this is that vertices are used only once and do not have to be duplicated when they are shared.

When a subpatch is determined to be planar with straight edges and thus suitable for display as a polygon, its upper left corner is stacked. This choice is arbitrary; any corner can be stacked. After all patches have been subdivided, the corners are popped from the stack and polygons are strung together by marching around the network with a "first turn" philosophy. In cases where upper left corners are stacked, the following steps are performed:

(1) Take the upper left vertex from the stack and follow the down pointer.
(2) Take all vertices up to and including the first one with a non-null right pointer.
(3) Follow the right pointer.
(4) Take all vertices up to and including the first one with a non-null up pointer.
(5) Follow the up pointer.
(6) Take all vertices up to the original starting vertex.

This procedure also performs a check on the integrity of the network. The result is a list of vertices that can be displayed as a polygon.
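The C sketch below shows one way the four-direction vertex network might be represented, together with the splicing of a new subdivision vertex into an existing vertical edge so that both neighboring subpatches share it; the horizontal case is analogous, and the polygons themselves are later recovered by marching around this network with the "first turn" rule of steps 1 through 6. Names are illustrative.

#include <stdlib.h>

typedef struct Vertex {
    double x, y, z;
    struct Vertex *up, *down, *left, *right;   /* four-direction links */
} Vertex;

/* Splice a new vertex m between vertices a (above) and b (below) that are
 * currently linked vertically; both adjacent subpatches now see m, so no
 * crack can open along this edge. */
Vertex *split_vertical_edge(Vertex *a, Vertex *b, double x, double y, double z)
{
    Vertex *m = calloc(1, sizeof *m);   /* all four links start out null */
    if (m == NULL)
        return NULL;
    m->x = x;  m->y = y;  m->z = z;
    m->up = a;
    m->down = b;
    a->down = m;
    b->up = m;
    return m;
}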
Acknowledgments

We wish to acknowledge the sponsors and the staff of RPI's Center for Interactive Computer Graphics for support on this project. In particular, Northrop supplied the part used in Figure 4; IBM, the power supply in Figure 1; and Gray Lorig, the ray-traced image in Figure 7.

In addition, Mitchell Levinn operated the camera for Figure 2b, and Pat Search helped in color selection.

This work was supported under NSF grant ISP79-20240 and other industry grants, which we gratefully acknowledge. Any opinions, findings, conclusions, or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation or any of the industrial sponsors.

References
1. W. Fitzgerald, F. Gracer, and R. Wolfe, "GRIN: Interactive Graphics for Modeling Solids," IBM J. Research and Development, Vol. 25, No. 4, July 1981, pp. 281-294.
2. R. L. Cook and K. E. Torrance, "A Reflectance Model for Computer Graphics," Computer Graphics (Proc. Siggraph '81), Vol. 15, No. 3, Aug. 1981, pp. 307-316.
3. John Zawada, "Color Mixing Program," Project Report, Center for Interactive Computer Graphics, RPI, Troy, New York, May 1982.

4. A. A. G. Requicha and H. B. Voelcker, "Solid Modeling: A Historical Summary and Contemporary Assessment," IEEE Computer Graphics and Applications, Vol. 2, No. 2, Mar. 1982, pp. 9-24.
5. P. Atherton, "A Scan-Line Display Approach to Interactive Constructive Solid Modeling," doctoral thesis, Rensselaer Polytechnic Institute, Aug. 1983.
6. Bruce Edwards, "Implementation of a Ray-Tracing Algorithm for Rendering Superquadric Solids," master's thesis, Rensselaer Polytechnic Institute, Dec. 1982.
7. Paolo Sabella, "Project Review," Center for Interactive Computer Graphics, Rensselaer Polytechnic Institute, May, Aug., and Dec. 1982.
8. J. M. Lane, L. C. Carpenter, J. T. Whitted, and J. F. Blinn, "Scan Line Methods for Displaying Parametrically Defined Surfaces," Comm. ACM, Vol. 23, No. 1, Jan. 1980, pp. 23-34.
9. J. D. Foley and A. van Dam, Fundamentals of Interactive Computer Graphics, Addison-Wesley, Reading, Mass., 1982.
10. J. M. Lane and L. C. Carpenter, "A Generalized Scan Line Algorithm for the Computer Display of Curved Surfaces," Computer Graphics and Image Processing, Vol. 11, No. 3, Nov. 1979, pp. 290-297.
11. I. D. Faux and M. J. Pratt, Computational Geometry for Design and Manufacture, Ellis Horwood, 1979.
12. J. T. Whitted, "An Improved Illumination Model for Shaded Display," Comm. ACM, Vol. 23, No. 6, June 1980, pp. 343-349.
13. I. E. Sutherland, R. F. Sproull, and R. A. Schumacker, "A Characterization of Ten Hidden Surface Algorithms," Computing Surveys, Vol. 6, No. 1, Mar. 1974, pp. 1-55.
14. P. Atherton, K. Weiler, and D. Greenberg, "Polygon Shadow Generation," Computer Graphics (Proc. Siggraph '78), Vol. 12, No. 3, Aug. 1978, pp. 275-281.
15. E. E. Catmull, "A Hidden Surface Algorithm with Anti-Aliasing," Computer Graphics (Proc. Siggraph '78), Vol. 12, No. 3, Aug. 1978, pp. 6-9.
16. E. Fiume, A. Fournier, and L. Rudolph, "A Parallel Scan Conversion Algorithm with Anti-Aliasing for a General-Purpose Ultracomputer," Computer Graphics (Proc. Siggraph '83), Vol. 17, No. 3, July 1983, pp. 141-150.
17. T. Duff, "Smoothly Shaded Renderings of Polyhedral Objects on Raster Displays," Computer Graphics (Proc. Siggraph '79), Vol. 13, No. 2, Aug. 1979, pp. 270-275.
18. D. E. Knuth, The Art of Computer Programming, Vol. 3: Sorting and Searching, Addison-Wesley, Reading, Mass., 1973.
19. J. E. Bresenham, "Algorithm for Computer Control of a Digital Plotter," IBM Systems J., Vol. 4, No. 1, 1965, pp. 25-30.
20. J. F. Blinn, "Models of Light Reflections for Computer Synthesized Pictures," Computer Graphics (Proc. Siggraph '77), Vol. 11, No. 2, Summer 1977, pp. 192-198.
21. "Device Attachment Control Unit," reference and operation manual, DACU-ROM-0, IBM, Poughkeepsie, N.Y.

Additional readings

Barr, A. H., "Project Review," Center for Interactive Computer Graphics, Rensselaer Polytechnic Institute, May 1982.
Barr, A. H., "Superquadrics and Angle-Preserving Transformations," IEEE Computer Graphics and Applications, Vol. 1, No. 1, Jan. 1981, pp. 11-23.
Barr, A. H., and Bruce Edwards, "Project Review," Center for Interactive Computer Graphics, Aug. and Dec. 1981.
Breen, David, "Project Review," Center for Interactive Computer Graphics, Rensselaer Polytechnic Institute, Dec. 1982.
Crow, F. C., "The Aliasing Problem in Computer Generated Shaded Images," Comm. ACM, Vol. 20, No. 11, Nov. 1977, pp. 799-805.
Lane, J. M., and R. F. Riesenfeld, "A Theoretical Development for Computer Generation of Piecewise Polynomial Surfaces," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. PAMI-2, No. 1, Jan. 1980, pp. 35-46.
Pitteway, M. L. V., and D. K. Watkinson, "Bresenham's Algorithm with Grey Scale," Comm. ACM, Vol. 23, No. 11, Nov. 1980, pp. 625-626; corrigendum, Vol. 24, No. 2, Feb. 1981, p. 88.
Puccio, Phil, "Color Display Support for the Geometric Design Processor," user's manual and tech. report, Center for Interactive Computer Graphics, Rensselaer Polytechnic Institute, 1981.
Snyder, Derek, "A Color Raster Addition to the GDP System with Full Hidden Surface Removal," master's thesis and tech. report, Center for Interactive Computer Graphics, Rensselaer Polytechnic Institute, 1981.
Whitted, J. T., and D. M. Weimer, "A Software Test-Bed for the Development of 3D Raster Graphics Systems," Computer Graphics (Proc. Siggraph '81), Vol. 15, No. 3, Aug. 1981, pp. 271-277.

Paolo E. Sabella is a research assistant with the Center for Interactive Computer Graphics at Rensselaer Polytechnic Institute. His research interests include geometric modeling, computer-aided design, and display techniques.

Sabella received a BA in mathematics and physics from Ohio Wesleyan University and a BS in aeronautical engineering and MS in computer and systems engineering from RPI. He is a member of the IEEE and the ACM.

Michael J. Wozny is currently the director of the Center for Interactive Computer Graphics and a professor of electrical and systems engineering at Rensselaer Polytechnic Institute in Troy, New York. His current research interests are computer systems, computer graphics, CAD/CAM, and computer-aided engineering. He was on the faculty and managed computer facilities at Purdue University from 1965 to 1970 and at Oakland University in Michigan from 1970 to 1975; he conducted computer applications research in temporary appointments at the former NASA Electronics Research Center in Cambridge (1968) and at GM Research Laboratories (1973).

An active consultant to industry, government, and universities, Wozny is editor-in-chief of IEEE Computer Graphics and Applications. He is on the editorial advisory board of the newsletter CAD/CAM Alert, on the board of directors of the National Computer Graphics Association, and on the boards of two companies.

Wozny received his PhD(EE) degree from the University of Arizona in 1965. He is a member of Eta Kappa Nu, Tau Beta Pi, and Sigma Xi.
