



 
Alark Joshi†                                    Penny Rheingans†
University of Maryland Baltimore County         University of Maryland Baltimore County

Figure 1: Volume rendered images of the cloud water attribute of a hurricane. The left image is a volume rendered image which shows the structure of the eye of the hurricane and the surrounding eyewall. The central image shows a volume rendered image with lighting; a clear boundary of the eyewall and more structural details can be seen. The rightmost image shows a volume rendered image with exaggerated shading. It depicts an opening in the eyewall towards the left side which is not at all visible in either of the other images. Such information can be critical to predicting the dissipation of a hurricane.

ABSTRACT

Illustrations convey overall shape as well as surface detail using certain lighting and shading principles. We investigate the use of such an illustration-inspired lighting model to accentuate features automatically by dynamically modulating the light position.
We apply this shading technique to accentuate detail in volumetric data. As the technique is primarily a gradient-based technique, we discuss and compare gradient computation techniques and their effectiveness. The technique highlights details at various levels in the volume, making use of the multiple levels of blurred gradients computed. The results demonstrate that surface detail is accentuated regardless of the surface orientation and the size of features. We have applied our techniques to scientific and medical data to accentuate surface features in the application domains.

1 INTRODUCTION

Scientific illustrations have the ability to convey information based on the intent of the illustration. Illustrators have been using lighting and shading principles to convey overall shape information as well as finer details regarding the surface of the subject.
Automatically generating illustrative images from scientific data is a challenging task. Applying illustration-inspired principles to visualizing scientific data can produce expressive and insightful images. Generating effective visualizations of large datasets can be aided by the use of such techniques, which provide context as well as enhanced surface detail to the viewer.
Recent work in the field of illustrative computer graphics has dealt with generating feature-enhanced images by applying techniques from the field of mapmaking and scientific illustration [Rusinkiewicz et al., 2006]. They observed that artists and illustrators accentuate the lighting and shading to highlight regions of interest and draw the attention of a viewer to a certain region. Their technique does not require any explicit feature detection and automatically enhances features in the data.
Some of the illustration principles for shaded relief, as discussed by cartographers, that are applicable are [Rusinkiewicz et al., 2006]:
• There should not be any specular highlights or shadows in the illustration.
• The light position should seem to be towards the upper left corner of the subject being illustrated.
• The light direction is locally adjusted to best emphasize local features.
• The height of features should be exaggerated and the transitions between ridges and valleys should be accentuated.
• In order to accentuate local as well as global features, blending between various smoothed levels is used.
In the field of volume visualization, such techniques would prove invaluable as the datasets are large as well as highly detailed. For example, in a medical CT scanned dataset not only is the size of a dataset huge, but the detail of each anatomical part is crucial. Visualizing such data for exploratory as well as diagnostic purposes is quite challenging. The use of standard techniques seems to produce either a soft, diffused visualization that conveys overall shape information without detail, or a crisp detailed zoomed-in visualization where the context is lost.
The exaggerated shading technique with some modifications can be applied to generate expressive images of volumetric data. For volumes, we use the gradient of a voxel as the normal of the surface. As the effectiveness of the technique is based on the quality of the computed gradients, we investigate the use of the linear regression based gradient computation technique, which has been proven to produce better gradients than the central differences method.
The exaggerated shading technique is successful at accentuating regions due to its dynamically adjusted lighting direction. This is an excellent alternative to the geometry-dependent lighting approach [Lee et al., 2005], where a light source is placed in regions of interest to emphasize those regions and draw the viewer's attention. Exaggerated shading automatically accentuates the surface detail without requiring manual or automatic placement of light sources around each feature. It helps in generating expressive images in an illustrative manner without any user input and provides the user with context as well as surface detail in the visualization. Expressive images generated in this manner will facilitate a better exploration of the dataset and lead to increased insight.

† e-mail: alark1@cs.umbc.edu
† e-mail: rheingan@cs.umbc.edu

2 RELATED WORK

Illustrative visualization is a field of wide interest and there have been many different approaches to generating expressive and illustrative images. Rheingans and Ebert [2001] used gradient-based techniques to identify and accentuate features in a volume. Kindlmann et al. [2003] introduced the ability to produce insightful images with the use of curvature based techniques to enhance volumetric data automatically.
In volumetric datasets, the gradient serves as the normal for the surface and is used for lighting, shading as well as enhancement purposes. Due to its simplicity and ease of use, the central differences method is widely used in volume visualization to compute gradients of a volume. Linear regression based gradient computation techniques [Neumann et al., 2000], though computationally expensive, are better at computing the gradient for volumetric data.
Volume rendering has been performed primarily using raycasting [Drebin et al., 1988], shear-warp [Lacroute and Levoy, 1994], texture-mapping based techniques [Cabral et al., 1994] or splatting [Westover, 1990]. There have been discussions regarding the trade-off between image quality and interactivity, where researchers have preferred texture-mapping based volume rendering for its speed and raycasting for the increased image quality. Our aim is to generate expressive images, and so we chose the raycasting approach to generate higher quality expressive images.

3 APPROACH

The original exaggerated shading paper [Rusinkiewicz et al., 2006] enhances ridges and valleys by using an illustrative lighting model. It relies on preserving and accentuating the edges as they are essential to a complete and correct understanding of the underlying data.
The technique blurs the normals at various levels and in the final calculation considers the contribution of each level. Their technique uses Gaussian blurring to produce multiple levels of blurred normals. In this process, though, the blurring is performed without regard for boundaries between these ridges and valleys. Bilateral filtering has been known to preserve edges and boundaries in subsequent blurred representations. We used a three dimensional version of the bilateral filtering algorithm discussed by [Tomasi and Manduchi, 1998].
Another limitation of the original paper [Rusinkiewicz et al., 2006] is that it used a texture-mapping based approach for its volume rendering example. In the case of texture-mapping based volume rendering, the volume rendering integral is evaluated by inserting slices orthogonal to the view vector and subsequent blending. In the raycasting approach, the volume rendering integral is solved along the ray as the ray traverses the volume. The difference between the two is that during ray casting the sample spacing along the ray is constant, whereas in texture-mapping based volume rendering the slices lead to an increase in the sampling distance based on the view vector. Raycasting has always been known for better image quality, which led to our decision to use it for our adaptation of the exaggerated shading technique.
As the technique relies heavily on the quality of the computed normals, use of the central differences method for computing the gradients is limited due to the small neighborhood that it searches during its computation. The 4D linear regression technique was introduced by Neumann et al. [2000] and has been proven to more accurately compute gradients of volumetric data. We demonstrate and compare the results of these gradient computation techniques in our exaggerated shading framework.

3.1 The Shading Algorithm

As per the technique introduced in the original paper [Rusinkiewicz et al., 2006], the shading algorithm consists of the local lighting model, multiscale shading and modified local lighting to accentuate local features.
The local lighting model is a modified cosine shading model with an ambient term in order to prevent any self shadowing:

$$\frac{1}{2} + \frac{1}{2}(\hat{n} \cdot \hat{l}) \qquad (1)$$

The light source is placed in the upper left corner of the image and $\hat{n}$ denotes the gradient direction of the voxel under consideration.
Decaudin [1996] proposed a cartoon style lighting model where the results are weighted to be closer to pure white and black. The paper discusses the fact that soft toon shading is much more effective at enhancing the contrast between ridges and valleys. In the original exaggerated shading paper they used a “soft toon shader” that was based on these principles and is given by

$$\frac{1}{2} + \frac{1}{2}\,\mathrm{clamp}_{[-1,1]}\left(a\,(\hat{n} \cdot \hat{l})\right) \qquad (2)$$

where a is a user-selected “exaggeration” parameter. Figure 2 shows results of cosine lighting on the left image followed by three toon-shaded images of the sphere with various settings for parameter a.

Figure 2: The leftmost image depicts a volume rendered image of a cosine lit model of a sphere volume. The next three images depict toon-shaded images of the sphere with increasing values of the exaggeration parameter a.

Local lighting is the third component of exaggerated shading. Instead of relying on a single light source to highlight all the features or placing multiple light sources to highlight specific features, in this technique the light vector is modified based on the gradient of the voxel to enhance local features.
The soft toon shading is computed using a modified version

$$c_i = \frac{1}{2} + \frac{1}{2}\,\mathrm{clamp}_{[-1,1]}\left(a\,(\hat{n}_i \cdot \hat{l}_{i+1})\right) \qquad (3)$$

where

$$\hat{l}_{i+1} = \hat{l}_{global} - \hat{n}_{i+1}\,(\hat{n}_{i+1} \cdot \hat{l}_{global}) \qquad (4)$$

In the above equations, $c_i$ is the color of the voxel after lighting and shading, a is the exaggerated shading parameter, $\hat{n}_i$ is the gradient of the voxel at the current scale, and $\hat{n}_{i+1}$ is the gradient obtained by smoothing the gradient at level i. $\hat{l}_{global}$ is the light vector and $\hat{l}_{i+1}$ is the modified light vector taking the smoothed normals into account. We discuss the process of smoothing normals and generating multiple levels of smoothed normals in the next section.
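Equations (1)–(4) can be sketched in a few lines of Python. This is our illustrative reading rather than the authors' code; in particular, the tangent-plane projection (with an added renormalization) in `adjusted_light` is our interpretation of Eq. (4), and all function names are ours.

```python
import numpy as np

def cosine_shading(n, l):
    # Eq. 1: half-cosine lighting with an ambient term, 1/2 + 1/2 (n . l)
    return 0.5 + 0.5 * np.dot(n, l)

def soft_toon_shading(n, l, a):
    # Eq. 2: 1/2 + 1/2 clamp_[-1,1](a * (n . l)); a is the exaggeration parameter
    return 0.5 + 0.5 * np.clip(a * np.dot(n, l), -1.0, 1.0)

def adjusted_light(l_global, n_smooth):
    # Our reading of Eq. 4: remove the component of the global light along the
    # smoothed normal, then renormalize (an assumption), so shading at the finer
    # scale responds to detail relative to the smoothed surface.
    l = l_global - n_smooth * np.dot(n_smooth, l_global)
    norm = np.linalg.norm(l)
    return l / norm if norm > 1e-8 else l

# Example: the smoothed normal points up, so the adjusted light lies in the
# plane perpendicular to it; fine-scale detail facing it is pushed to white.
l_global = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
n_smooth = np.array([0.0, 0.0, 1.0])
n_fine = np.array([1.0, 0.0, 0.0])
c = soft_toon_shading(n_fine, adjusted_light(l_global, n_smooth), a=4.0)
print(c)  # 1.0
```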

3.2 Multiscale shading


Large datasets have information at various levels of detail. For ex-
ample, a cartographic dataset will have structural information about
an entire region, but at the same time the detail in a valley may be of
interest to a user and, within that valley, there may be more relevant
information regarding the surface of the valley. Such multiscale
detail cannot be easily captured in a single visualization using stan-
dard visualization techniques.
The paper proposes multiscale shading by using the multiple lev-
els of blurred normals. For multiscale shading, the above lighting
calculations are performed with different levels of smoothed gra-
dients for lighting computation. The idea is to actually compute
the light based on different smoothed levels of the surface, but in
practice smoothed versions of the gradient are used.
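Generating the multiple smoothed levels of the gradient field can be sketched as follows. The separable binomial blur (with wrap-around boundaries via `np.roll`, a simplification) is a cheap stand-in for the Gaussian smoothing described above, and all function names are ours.

```python
import numpy as np

def blur_once(vol):
    """One pass of a separable [1, 2, 1]/4 binomial blur along each axis,
    standing in for a Gaussian; np.roll wraps at the boundary (a simplification)."""
    out = vol.astype(float)
    for axis in range(3):
        out = 0.25 * np.roll(out, 1, axis) + 0.5 * out + 0.25 * np.roll(out, -1, axis)
    return out

def gradient_levels(grad, n_levels):
    """Build multiple smoothed levels of a per-voxel gradient field
    (grad has shape (X, Y, Z, 3)); level 0 is the original."""
    levels = [grad]
    for _ in range(n_levels - 1):
        g = np.stack([blur_once(levels[-1][..., c]) for c in range(3)], axis=-1)
        # renormalize so each level can be used as a unit normal for shading
        norm = np.linalg.norm(g, axis=-1, keepdims=True)
        levels.append(g / np.maximum(norm, 1e-8))
    return levels

# Example: four levels of an 8x8x8 field of unit gradients.
grad = np.random.default_rng(0).normal(size=(8, 8, 8, 3))
grad /= np.linalg.norm(grad, axis=-1, keepdims=True)
levels = gradient_levels(grad, n_levels=4)
print(len(levels), levels[-1].shape)  # 4 (8, 8, 8, 3)
```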
The smoothing of normals can be performed using a simple
gaussian smoothing algorithm around each voxel. The bilateral fil-
tering technique [Tomasi and Manduchi, 1998] introduced for im-
ages has been known to preserve the edges while filtering out detail.
It can be formulated as [Fleishman et al., 2003]:
$$I'(u) = \frac{\sum_{p \in N(u)} e^{-\frac{\|u-p\|^2}{2\sigma_c^2}}\, e^{-\frac{|I(u)-I(p)|^2}{2\sigma_s^2}}\, I(p)}{\sum_{p \in N(u)} e^{-\frac{\|u-p\|^2}{2\sigma_c^2}}\, e^{-\frac{|I(u)-I(p)|^2}{2\sigma_s^2}}}$$

Figure 3: The top row contains volume rendered images of gaussian blurred second and third level gradients. The bottom row contains filtered gradients for the second and third level obtained by bilateral filtering. The noise in the bottom row is better filtered than in the top row. Features such as the eye sockets and the nostrils are better preserved in the bottom row as compared to the top row.

In the equation for bilateral filtering, the first exponential term in the numerator (the spatial weight with scale $\sigma_c$) filters out noise from the data, while the second exponential term (the range weight with scale $\sigma_s$) ensures the preservation of features. The denominator normalizes the weights.
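A direct, unoptimized three-dimensional version of the filter can be sketched as below; it is a sketch of the formulation above, not the implementation used in the paper. With a tight range scale σ_s, a sharp boundary in the volume survives filtering, which is the edge-preserving property exploited here.

```python
import numpy as np

def bilateral_filter_3d(vol, sigma_c=1.0, sigma_s=0.1, radius=1):
    """Brute-force 3D bilateral filter: the spatial term exp(-||u-p||^2 / 2 sigma_c^2)
    smooths noise, the range term exp(-|I(u)-I(p)|^2 / 2 sigma_s^2) preserves edges."""
    out = np.zeros_like(vol, dtype=float)
    X, Y, Z = vol.shape
    offs = range(-radius, radius + 1)
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                num = den = 0.0
                for dx in offs:
                    for dy in offs:
                        for dz in offs:
                            px, py, pz = x + dx, y + dy, z + dz
                            if 0 <= px < X and 0 <= py < Y and 0 <= pz < Z:
                                ws = np.exp(-(dx*dx + dy*dy + dz*dz) / (2 * sigma_c**2))
                                wr = np.exp(-(vol[x, y, z] - vol[px, py, pz])**2 / (2 * sigma_s**2))
                                num += ws * wr * vol[px, py, pz]
                                den += ws * wr
                out[x, y, z] = num / den
    return out

# Example: a step volume; the step is preserved rather than blurred away.
vol = np.zeros((4, 4, 4))
vol[2:] = 1.0
out = bilateral_filter_3d(vol, sigma_c=1.0, sigma_s=0.1)
print(np.abs(out - vol).max())  # essentially zero: the edge survives
```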
We have extended this technique to three-dimensions for blur-
ring the gradients to generate multiple levels of gradients for the
exaggerated shading technique. Figure 3 shows different levels
of gradients obtained using gaussian blurring and bilateral filter-
ing. As can be seen from the images, the bilateral filtering clearly
smooths out more of the noise in the higher levels but is able to
preserve the features surrounding the eyes and nose even at higher
filtered levels.
Figure 4 shows volume rendered images of the engine dataset obtained by the use of multi-scale shad-
ing. For the left image, the multiple levels of gradients were gen-
erated using gaussian smoothing whereas for the right image, the
multiple levels were generated using bilateral filtering. A marked
difference can be seen in the way the lower left tube of the engine
is rendered. More detail is seen in the left image that was generated using gaussian blurring. The right image shows a more smoothed version in that particular region. Features such as the indentation going across the engine are more clearly seen in the right image as compared to the left image.

Figure 4: These images depict a volume rendered image of the engine dataset with exaggerated shading. The left image was generated by using gaussian blurring of the gradient at different levels and the right image was generated by using bilateral filtering for blurring the gradients. The right image depicts surface features better, such as the curving indentation on the engine's surface, and accentuates the boundaries much better than the left image.

3.3 Gradient computation techniques
As the exaggerated shading technique depends on the accuracy of the computed gradients, we investigate the use of linear regression based gradient computation in comparison with the standard central differences method. The central differences method computes gradients by comparing voxel values along each dimension. The equation for computing the gradient using central differences is as follows:

$$\nabla f(x_i, y_i, z_i) = \frac{1}{2}\begin{pmatrix} f(x_{i+1}, y_i, z_i) - f(x_{i-1}, y_i, z_i) \\ f(x_i, y_{i+1}, z_i) - f(x_i, y_{i-1}, z_i) \\ f(x_i, y_i, z_{i+1}) - f(x_i, y_i, z_{i-1}) \end{pmatrix}$$
where $\nabla f(x_i, y_i, z_i)$ is the gradient at a location $P_i$ and $f(x_i, y_i, z_i)$ is the value at that location.
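On a regular voxel grid this amounts to one subtraction per axis; a minimal sketch (our own helper, not code from the paper):

```python
import numpy as np

def central_differences(vol, i, j, k):
    """Gradient at an interior voxel (i, j, k) via central differences."""
    return 0.5 * np.array([
        vol[i + 1, j, k] - vol[i - 1, j, k],
        vol[i, j + 1, k] - vol[i, j - 1, k],
        vol[i, j, k + 1] - vol[i, j, k - 1],
    ])

# On a linear ramp f(x, y, z) = 2x + 3y - z the gradient is recovered exactly.
x, y, z = np.meshgrid(np.arange(4), np.arange(4), np.arange(4), indexing="ij")
vol = 2 * x + 3 * y - z
print(central_differences(vol, 1, 1, 1))  # [ 2.  3. -1.]
```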
Neumann et al. [Neumann et al., 2000] introduced a 4D linear regression based method for gradient estimation which approximates the density function in its local neighborhood with a 3D regression hyperplane. Their technique produces smoother, high-quality
gradients that can be used for lighting, shading and other gradient-
based techniques for enhancing volume visualization.
A spherical function is used to weight the contributions of neighboring voxels in the computation of the plane equation coefficients. The plane equations are:

$$A = w_A \sum_{k=0}^{26} w_k v_k x_k \qquad B = w_B \sum_{k=0}^{26} w_k v_k y_k$$

$$C = w_C \sum_{k=0}^{26} w_k v_k z_k \qquad D = w_D \sum_{k=0}^{26} w_k v_k$$

where

$$w_A = \frac{1}{\sum_{k=0}^{26} w_k x_k^2} \qquad w_B = \frac{1}{\sum_{k=0}^{26} w_k y_k^2} \qquad w_C = \frac{1}{\sum_{k=0}^{26} w_k z_k^2} \qquad w_D = \frac{1}{\sum_{k=0}^{26} w_k}$$

and the weighting function $w_k$ is inversely proportional to the distance $d_k$ of the neighboring voxel from the current voxel. The voxel gradient is then given by

$$\nabla f = \begin{pmatrix} A \\ B \\ C \end{pmatrix}$$

Figure 5: These images depict volume rendered images of the engine dataset with exaggerated shading. The left image was generated using the central differences method for gradient computation. The right image was generated using the 4D linear regression method for gradient computation. The gradients generated by the linear regression method are much smoother and highlight important features such as the hole in the engine towards the bottom.

Name        Dimensions    Raycasting  Lit  Exaggerated
Sphere      64x64x64      14          12   8
Engine      128x128x64    13          12   7
UNC Brain   128x128x72    13          12   7
Tooth       128x128x256   13          12   8
CT Head     256x256x112   12          10   7
Abdomen     128x256x256   12          11   7
Human Head  256x256x256   12          11   5

Table 1: Impact of exaggerated shading on hardware accelerated raycasting.
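The regression fit can be sketched as follows for a single interior voxel. The specific fall-off w = 1/d (with w = 1 at the center voxel) is our assumption, since the text only states that w_k is inversely proportional to d_k; the offset coefficient D is not needed for the gradient and is omitted.

```python
import numpy as np

def regression_gradient(vol, i, j, k):
    """Gradient at voxel (i, j, k) by fitting a hyperplane A*x + B*y + C*z + D
    over the 3x3x3 neighborhood, with distance-based weights (1/d assumed)."""
    num = np.zeros(3)
    den = np.zeros(3)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                d = np.sqrt(dx*dx + dy*dy + dz*dz)
                w = 1.0 / d if d > 0 else 1.0   # assumed weighting
                v = vol[i + dx, j + dy, k + dz]
                off = np.array([dx, dy, dz], dtype=float)
                num += w * v * off              # accumulates w_k v_k x_k, etc.
                den += w * off * off            # accumulates w_k x_k^2, etc.
    return num / den                            # (A, B, C)

# On a linear ramp the fitted plane matches the true gradient.
x, y, z = np.meshgrid(np.arange(3), np.arange(3), np.arange(3), indexing="ij")
vol = 2 * x + 3 * y - z
print(regression_gradient(vol, 1, 1, 1))  # [ 2.  3. -1.]
```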

We have previously experimented with these gradient computation techniques and found the 4D linear regression technique to produce smoother gradients of higher quality [Rheingans and Ebert, 2001].
Figure 5 shows volume rendering with exaggerated shading applied to an engine dataset. The left image shows the result obtained by using central differences, whereas the right image was obtained with gradients computed using 4D linear regression. The left image shows more of the surface detail of the flat surface of the engine compared to the right image. The right image gives more information regarding the depth of features, especially the big hole situated in the upper right region of the engine.

4 HARDWARE ACCELERATED EXAGGERATED SHADING

With the ever-increasing capabilities of newer graphics cards, performing hardware accelerated raycasting has become possible. We implemented the exaggerated shading technique in the fragment shader to analyze its impact on the overall interactivity of the application. We performed some experiments with datasets of various sizes and found a consistent 3 to 4 frames per second impact on the overall interactivity regardless of the size of the dataset. The results are reported in Table 1. The performance of the hardware accelerated raycasting decreases by 25-40% in terms of frames per second, and the shader's length increases by about 15% in the number of instructions the graphics card has to execute.
We believe that such results make the technique usable for real-world medical and scientific applications with large datasets.

5 RESULTS

We applied the exaggerated shading technique to various volumes from different domains. In the medical domain, the UNC brain dataset was used as a representative dataset, since it has complex detail of the brain that needs to be visualized. Figure 6 shows a volume rendered image without and with exaggerated shading. Our discussions with doctors have confirmed that such visualizations of the gyri (folds) and the sulci (valleys) of the brain can help in diagnosing the damage caused due to a brain stroke. They said that it could also help in determining the extent of a brain tumor that needs to be surgically removed.
On applying our techniques to CT scanned abdomen data, we found the exaggerated shading technique to effectively accentuate the surface features in the data. Figure 7 shows an image generated using exaggerated shading where the bowl-shaped indentation in the hip bone (iliac fossa) as well as detail of the stent on the abdominal aorta can be clearly seen. Figure 8 shows a comparison of the use of bilateral filtered and Gaussian smoothed gradients. The left image shows bilateral filtered gradients that seem to highlight the left iliac fossa better than the right image, which is obtained using Gaussian smoothed gradients.
We applied our technique to the domain of hurricane visualization and visualized the cloud water attribute from hurricane Katrina. Figure 1 shows our results on applying exaggerated shading. Only the rightmost image with the exaggerated shading shows a break in the eyewall. Such a discontinuity can be helpful to scientists to predict the dissipation of a hurricane.
Figure 6: Volume rendered images of the UNC Brain dataset. The top row shows volume rendered images from the side and the bottom row shows images rendered from the back. The leftmost images are raycasted without lighting, the second column contains volume rendered images with lighting, and the third and fourth columns show images with exaggerated shading for two settings of the parameter a. The exaggerated shaded images accentuate the ridges and valleys in the brain. Clearly visualizing the gyri (folds) and the sulci (valleys) of the brain can help in diagnosing the damage caused due to a brain stroke.

6 CONTRIBUTIONS

Our paper discusses the use of illustration-inspired lighting and shading for volumetric datasets. The main contributions of this paper are the discussion regarding the use of raycasting instead of the texture-mapping approach, the comparison between different gradient computation techniques, and the benefits of using bilateral filtering instead of Gaussian filtering for the generation of multiple levels of the gradients.

7 CONCLUSION

We have applied the exaggerated shading technique to enhance volumetric features. We have demonstrated the applicability of the techniques for the medical visualization domain as well as the hurricane domain. The exaggerated shading technique highlights features of interest without the need for placing multiple local light sources and shows more information than standard volume rendering techniques for visual investigation of a dataset. The gradient computation can be performed using 4D linear regression instead of central differences to generate better gradients. The three-dimensional version of bilateral filtering generates smoother representations of the gradients at multiple scales while preserving the boundaries and edges in the smoothing process.

Figure 7: These images show volume rendered images of a CT scan of the abdomen and pelvis. The patient has a stent on the abdominal aorta which can be seen in the images. The left two images contain volume rendered images of the dataset without and with lighting; the stent can be clearly seen and is highlighted. The rightmost image shows the exaggerated shading technique with parameter a. The bowl-shaped indentation in the hip (iliac fossa) is better seen in the exaggerated shading image. The dataset was obtained courtesy of volvis.org.

Figure 8: These images show a comparison of the bilateral filtering and the Gaussian smoothing techniques being used in the exaggerated shading technique with parameter a. The left image, with bilateral filtering, better depicts the iliac fossa than the image with gaussian smoothing in the bottom right.

REFERENCES

[Cabral et al., 1994] Brian Cabral, Nancy Cam, and Jim Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In Proceedings of the 1994 symposium on Volume visualization, pages 91–98. ACM Press, 1994.
[Decaudin, 1996] Philippe Decaudin. Cartoon looking rendering of 3D scenes. Research Report 2919, INRIA, June 1996.
[Drebin et al., 1988] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. In Proceedings of the 15th annual conference on Computer graphics and interactive techniques, pages 65–74. ACM Press, 1988.
[Fleishman et al., 2003] Shachar Fleishman, Iddo Drori, and Daniel Cohen-Or. Bilateral mesh denoising. In SIGGRAPH '03: ACM SIGGRAPH 2003 Papers, pages 950–953, New York, NY, USA, 2003. ACM Press.
[Kindlmann et al., 2003] Gordon Kindlmann, Ross Whitaker, Tolga Tasdizen, and Torsten Moller. Curvature-based transfer functions for direct volume rendering: Methods and applications. In VIS '03: Proceedings of the 14th IEEE Visualization 2003 (VIS'03), page 67, Washington, DC, USA, 2003. IEEE Computer Society.
[Lacroute and Levoy, 1994] Philippe Lacroute and Marc Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 451–458. ACM Press, 1994.
[Lee et al., 2005] C. H. Lee, X. Hao, and A. Varshney. Geometry-dependent lighting. IEEE Transactions on Visualization and Computer Graphics, 12(2):197–207, 2005.


[Neumann et al., 2000] Laszlo Neumann, Balázs Csebfalvi, Andreas König, and Meister Eduard Gröller. Gradient estimation in volume data using 4D linear regression. In Proceedings of Eurographics, 2000.
[Rheingans and Ebert, 2001] Penny Rheingans and David Ebert. Volume
illustration: Non-photorealistic rendering of volume models. In IEEE
Transactions on Visualization and Computer Graphics, volume 7(3),
pages 253–264, 2001.
[Rusinkiewicz et al., 2006] Szymon Rusinkiewicz, Michael Burns, and
Doug DeCarlo. Exaggerated shading for depicting shape and detail.
ACM Transactions on Graphics (Proc. SIGGRAPH), 25(3), July 2006.
[Tomasi and Manduchi, 1998] C. Tomasi and R. Manduchi. Bilateral fil-
tering for gray and color images. In ICCV ’98: Proceedings of the Sixth
International Conference on Computer Vision, page 839, Washington,
DC, USA, 1998. IEEE Computer Society.
[Westover, 1990] Lee Westover. Footprint evaluation for volume render-
ing. In Proceedings of the 17th annual conference on Computer graphics
and interactive techniques, pages 367–376. ACM Press, 1990.
