
A Report On

HIDDEN SURFACE REMOVAL TECHNIQUES

By,
Praveen Kumar K. (110922007)
Srivathsa D. (110922008)
1st semester M.Tech (CAMDA),
Manipal Institute Of Technology, Manipal.


HIDDEN SURFACE REMOVAL TECHNIQUES

Hidden surface removal, also known as hidden surface determination, is the process used to determine which surfaces and parts of surfaces are not visible from a certain viewpoint. A hidden surface determination algorithm is a solution to the visibility problem, which was one of the first major problems in the field of 3D computer graphics. The process of hidden surface determination is sometimes called hiding, and such an algorithm is sometimes called a hider. The analogue for line rendering is hidden line removal. Hidden surface determination is necessary to render an image correctly, so that one cannot look through walls in virtual reality. It involves drawing the objects that are closer to the viewing position and eliminating objects which are obscured by other, nearer objects. There are many techniques for hidden surface determination. They are fundamentally an exercise in sorting, and usually vary in the order in which the sort is performed and in how the problem is subdivided. Sorting large quantities of graphics primitives is usually done by divide and conquer. There are basically two kinds of removal algorithms, viz. object space and image space.

In object space, the comparison takes place within the real 3D scene. It works best for scenes that contain a small number of polygons. In image space, visibility is decided at each pixel position. The different hidden surface removal techniques are:
1. Z-buffering.
2. Painter's algorithm.
3. Back face culling.
4. Binary space partitioning.
5. Ray tracing.
6. Floating horizon algorithm.

Z buffering :-
During rasterization, the depth/Z value of each pixel (or sample in the case of anti-aliasing, but without loss of generality the term pixel is used) is checked against an existing depth value. If the current pixel is behind the pixel in the Z-buffer, the pixel is rejected; otherwise it is shaded and its depth value replaces the one in the Z-buffer. Z-buffering supports dynamic scenes easily, and is currently implemented efficiently in graphics hardware; it is the current standard. The cost of using Z-buffering is that it uses up to 4 bytes per pixel, and that the rasterization algorithm needs to check each rasterized sample against the Z-buffer. The Z-buffer can also suffer from artifacts due to precision errors (also known as z-fighting), although this is far less common now that commodity hardware supports 24-bit and higher precision buffers.

Fig 1 Z buffering

Within z-buffering we have the scan line algorithm. The steps involved in it are:
1. Store the background colour in the buffer.
2. Scan-convert each polygon.
3. At each pixel position, determine if the z value is nearer than the currently stored z value.
4. If it is nearer, replace the stored colour (and stored z value) with those of the polygon.

Z buffer algorithm :-

    for all positions (x, y) on the view screen
        frame(x, y) = I_background
        depth(x, y) = max_distance
    end

    for each polygon in the mesh
        for each point (x, y) in the polygon-fill algorithm
            compute z, the distance of the corresponding 3D point from the COP
            if depth(x, y) > z        // a closer point
                depth(x, y) = z
                frame(x, y) = I(p)    // shading
            endif
        endfor
    endfor

Determining Z-Depth :-

If we have the plane equation

    Ax + By + Cz + D = 0,    normal vector N = (A, B, C),

we can insert a known (x, y) into the plane equation and solve for z:

    z = (-Ax - By - D) / C

Then at (x1 + Dx, y1),

    z = z1 - (A/C) Dx .

Since A/C is constant for a given plane and Dx = 1 between adjacent pixels, the depth can be updated incrementally:

    z(i+1) = z(i) - A/C    // stepping across a scan line
    z(j+1) = z(j) - B/C    // stepping between scan lines
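As an illustration, this inner loop can be sketched in a few lines of C. It is a minimal sketch only: the buffer dimensions, the colour representation, and the fill_span() interface are assumptions made for this example, not part of the original algorithm.

    #include <float.h>

    #define W 640
    #define H 480

    static float    depth[H][W];   /* z-buffer */
    static unsigned frame[H][W];   /* colour buffer */

    /* Reset both buffers: every pixel starts at the background colour
       and at the farthest representable depth (max_distance). */
    void clear_buffers(unsigned background)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                depth[y][x] = FLT_MAX;
                frame[y][x] = background;
            }
    }

    /* Walk one scan line of a polygon lying in the plane
       Ax + By + Cz + D = 0, from x0 to x1 at row y, starting at depth z0.
       The depth is stepped incrementally by -A/C per pixel (Dx = 1). */
    void fill_span(int y, int x0, int x1, float z0, float A, float C,
                   unsigned colour)
    {
        float z = z0;
        for (int x = x0; x <= x1; x++) {
            if (z < depth[y][x]) {     /* a closer point */
                depth[y][x] = z;
                frame[y][x] = colour;  /* shading would happen here */
            }
            z -= A / C;                /* incremental depth update */
        }
    }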

Advantages of z-buffering :-
1. Can be implemented in hardware.
2. Can handle any number of primitives.
3. Handles cyclic and penetrating polygons.
4. Handles polygon streams in any order.

Disadvantages :-
1. Needs a large memory to keep the z values.
2. Transparency is difficult to handle.


Painter's algorithm :-
It sorts polygons by their barycenter and draws them back to front. This produces few artifacts when applied to scenes with polygons of similar size forming smooth meshes, with backface culling turned on. The cost here is the sorting step and the fact that visual artifacts can occur. It is the simplest hidden surface removal algorithm.
1. Start by drawing the objects which are farthest from the viewpoint.
2. Continue drawing the objects from far to near.
3. Draw the closest objects last.
4. Objects must be drawn in a particular order based on their distance from the viewpoint.
5. If the viewing position is changed, the drawing order changes.

The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene before parts which are nearer thereby covering some areas of distant parts. The painter's algorithm sorts all the polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts that are normally not visible thus solving the visibility problem at the cost of having painted invisible areas of distant objects. The algorithm can fail in some cases, including cyclic overlap or piercing polygons. In the case of cyclic overlap, as shown in the figure to the right, Polygons A, B, and C overlap each other in such a way that it is impossible to determine which polygon is above the others. In this case, the offending polygons must be cut to allow sorting. Newell's algorithm, proposed in 1972, provides a method for cutting such polygons. Numerous methods have also been proposed in the field of computational geometry. The case of piercing polygons arises when one polygon intersects another. As with cyclic overlap, this problem may be resolved by cutting the offending polygons. In basic implementations, the painter's algorithm can be inefficient. It forces the system to render each point on every polygon in the visible set, even if that polygon is occluded in the finished scene. This means that, for detailed scenes, the painter's algorithm can overly tax the computer hardware. A reverse painter's algorithm is sometimes used, in which objects nearest to the viewer are painted first with the rule that paint must never be applied to parts of the image that are already painted. In a computer graphic system, this can be very efficient, since it is not necessary to calculate the colours (using lighting, texturing and such) for parts of the more distant scene that are hidden by nearby

5 MIT,Manipal

objects. However, the reverse algorithm suffers from many of the same problems as the standard version.

Draw polygons as an oil painter might: the farthest one first.
1. Sort polygons on farthest z.
2. Resolve ambiguities where the z extents of polygons overlap.
3. Scan convert from largest z to smallest z.

Fig 2

Since the closest polygon is drawn last, it will be on top (and therefore it will be seen). All the polygons are needed at once in order to sort.
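A minimal sketch of this back-to-front loop in C, assuming a polygon record that stores a representative depth and a draw_polygon() rasterization routine supplied elsewhere (both hypothetical; ambiguity resolution for overlapping z extents is omitted):

    #include <stdlib.h>

    typedef struct {
        float farthest_z;      /* largest depth among the polygon's vertices */
        /* ... vertex and colour data ... */
    } polygon;

    void draw_polygon(const polygon *p);   /* assumed rasterizer */

    /* qsort comparator: larger (farther) z sorts first */
    static int by_depth_desc(const void *a, const void *b)
    {
        float za = ((const polygon *)a)->farthest_z;
        float zb = ((const polygon *)b)->farthest_z;
        return (za < zb) - (za > zb);
    }

    void painters_algorithm(polygon *polys, size_t n)
    {
        /* 1. sort on farthest z */
        qsort(polys, n, sizeof(polygon), by_depth_desc);

        /* 2. scan convert from largest z to smallest z; the closest
              polygon is drawn last and therefore ends up on top */
        for (size_t i = 0; i < n; i++)
            draw_polygon(&polys[i]);
    }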

BACK FACE CULLING :-
A simple way to remove hidden surfaces for convex bodies is to discard all "backfacing" polygons, as they are not seen by the viewer (Fig 3). A polygon is "backfacing" if its normal N faces away from the viewing direction V, i.e. if (for unit N and V) N·V = cos(θ) > 0.

Fig 3


For the non-perspective projection we can direct V along, e.g., the Z axis, therefore

    (N·V) = (N·e_z) = cos(θ) = N_z .
The value cos(θ) = N_z is also used to determine the brightness of the face in the "headlight" mode. The backface culling algorithm has a huge speed advantage if we can use it, since the test is cheap and we expect at least half the polygons to be discarded. If objects are not convex, one needs to do more work, so it is usually performed in conjunction with a more complete hidden surface algorithm. A short sketch of the test follows.
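As a sketch, the test reduces to one dot product per polygon. The vec3 type is an assumption for this example, and the normal and view direction are taken as already computed:

    typedef struct { float x, y, z; } vec3;

    static float dot(vec3 a, vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* Returns 1 if the polygon with outward normal n is backfacing
       with respect to the viewing direction v.  For a non-perspective
       view down the Z axis, v = (0, 0, 1) and the test degenerates
       to n.z > 0. */
    int is_backfacing(vec3 n, vec3 v)
    {
        return dot(n, v) > 0.0f;
    }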

BINARY SPACE PARTITIONING :-
BSP divides a scene along planes corresponding to polygon boundaries. The subdivision is constructed in such a way as to provide an unambiguous depth ordering from any point in the scene when the BSP tree is traversed. The disadvantage here is that the BSP tree is created with an expensive pre-process, which makes it less suitable for scenes consisting of dynamic geometry. The advantage is that the data is pre-sorted and error free, ready for the previously mentioned algorithms. Note that BSP is not a solution to hidden surface removal by itself, only a help.

In computer science, binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree. Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency by precomputing the BSP tree prior to low-level rendering operations. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.

GENERATING THE TREE :-
Binary space partitioning is a generic process of recursively dividing a scene into two until the partitioning satisfies one or more requirements. The specific method of division varies depending on its final purpose. For instance, in a BSP tree used for collision detection, the original object would be partitioned until each part becomes simple enough to be individually tested, and in rendering it is desirable that each part be convex so that the painter's algorithm can be used. The final number of objects will inevitably increase, since lines or faces that cross the partitioning plane must be split into two, and it is also desirable that the final tree remains reasonably balanced.

Therefore, the algorithm for correctly and efficiently creating a good BSP tree is the most difficult part of an implementation. In 3D space, planes are used to partition and split an object's faces; in 2D space, lines split an object's segments. A sketch of the recursive construction is given below.
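The following C sketch shows one common way the recursion can be organized. It is an illustration under stated assumptions, not the report's own method: the polygon type is opaque, and choose_partition(), classify(), split_polygon() and the list helpers are hypothetical routines declared but not shown.

    #include <stdlib.h>

    typedef struct polygon polygon;              /* geometry details assumed */
    typedef struct { float a, b, c, d; } plane;  /* ax + by + cz + d = 0 */
    typedef enum { IN_FRONT, BEHIND, ON_PLANE, SPANNING } side;

    /* Assumed helpers, not shown here: */
    plane    choose_partition(polygon *list);
    side     classify(const polygon *p, plane q);
    void     split_polygon(polygon *p, plane q, polygon **front, polygon **back);
    polygon *pop(polygon **list);
    void     push(polygon **list, polygon *p);

    typedef struct bsp_node {
        plane partition;
        polygon *coplanar;             /* polygons lying in the plane */
        struct bsp_node *front, *back;
    } bsp_node;

    /* Recursively build a BSP tree from a list of polygons.  The choice
       of partition plane largely determines how balanced the tree is
       and how many polygons get split. */
    bsp_node *build_bsp(polygon *list)
    {
        if (!list)
            return NULL;

        bsp_node *node = calloc(1, sizeof *node);
        node->partition = choose_partition(list);

        polygon *front = NULL, *back = NULL, *p;
        while ((p = pop(&list)) != NULL) {
            switch (classify(p, node->partition)) {
            case IN_FRONT: push(&front, p);          break;
            case BEHIND:   push(&back, p);           break;
            case ON_PLANE: push(&node->coplanar, p); break;
            case SPANNING: /* split into two pieces and distribute them */
                split_polygon(p, node->partition, &front, &back);
                break;
            }
        }
        node->front = build_bsp(front);
        node->back  = build_bsp(back);
        return node;
    }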

Fig 4 Binary space partitioning

ALGORITHM FOR BSP :-
BSP trees are used to improve rendering performance in calculating visible triangles, for the painter's algorithm for instance. The tree can be traversed in linear time from an arbitrary viewpoint. Since the painter's algorithm works by drawing polygons farthest from the eye first, the following code recurses to the bottom of the tree and draws the polygons. As the recursion unwinds, polygons closer to the eye are drawn over far polygons. Because the BSP tree already splits polygons into trivial pieces, the hardest part of the painter's algorithm is already solved: code for back-to-front tree traversal.

    traverse_tree(bsp_tree* tree, point eye)
    {
        if (tree->empty())
            return;

        location = tree->find_location(eye);

        if (location > 0)              // eye in front of partition plane
        {
            traverse_tree(tree->back, eye);
            display(tree->polygon_list);
            traverse_tree(tree->front, eye);
        }
        else if (location < 0)         // eye behind partition plane
        {
            traverse_tree(tree->front, eye);
            display(tree->polygon_list);
            traverse_tree(tree->back, eye);
        }
        else                           // eye coincident with partition hyperplane
        {
            traverse_tree(tree->front, eye);
            traverse_tree(tree->back, eye);
        }
    }

RAY TRACING :-
It attempts to model the path of light rays to a viewpoint by tracing rays from the viewpoint into the scene. Although not a hidden surface removal algorithm as such, it implicitly solves the hidden surface removal problem by finding the nearest surface along each view ray. Effectively this is equivalent to sorting all the geometry on a per-pixel basis.

In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as still images and film and television special effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).


ALGORITHM :-
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the colour of the object visible through it. Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final colour of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.

It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. A computer simulation that starts by casting rays from the light source is called photon mapping, and it takes much longer than a comparable ray trace. Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray travelling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated. The light intensity of this pixel is computed using a number of algorithms, which may include the classic rendering algorithm and may also incorporate techniques such as radiosity. A sketch of this per-pixel loop follows.
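A minimal C sketch of the outer loop, one primary ray per pixel. The types and the generate_ray(), nearest_hit(), shade(), background() and put_pixel() helpers are hypothetical stand-ins for the steps described above:

    typedef struct { float ox, oy, oz, dx, dy, dz; } ray;  /* origin + unit dir */
    typedef struct { int found; float t; } hit;            /* intersection data */
    typedef unsigned colour;

    /* Assumed helpers, standing in for the steps described above: */
    ray    generate_ray(int x, int y);     /* eye through pixel (x, y)    */
    hit    nearest_hit(ray r);             /* smallest positive t wins    */
    colour shade(hit h);                   /* lighting, reflection, ...   */
    colour background(void);
    void   put_pixel(int x, int y, colour c);

    void render(int width, int height)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                ray r = generate_ray(x, y);
                hit h = nearest_hit(r);    /* nearest surface along the ray */
                put_pixel(x, y, h.found ? shade(h) : background());
            }
    }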


Fig 5 Ray tracing

As a demonstration of the principles involved in ray tracing, let us consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center c and radius r is

    |x - c|^2 = r^2 .

Any point on a ray starting from point s with direction d (here d is a unit vector) can be written as

    x = s + t d ,

where t is the distance between x and s. In our problem, we know c, r, s (e.g. the position of a light source) and d, and we need to find t. Therefore, we substitute for x:

    |s + t d - c|^2 = r^2 .

Let v = s - c for simplicity; then

    (v·v) + 2 t (v·d) + t^2 (d·d) = r^2 .

Knowing that d is a unit vector allows us this minor simplification:

    t^2 + 2 t (v·d) + (v·v) - r^2 = 0 .

This quadratic equation has solutions

    t = -(v·d) ± sqrt( (v·d)^2 - (v·v) + r^2 ) .

The two values of t found by solving this equation are the two such that s + t d are the points where the ray intersects the sphere.

Any value of t which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from s with opposite direction). If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere. Let us suppose now that there is at least a positive solution, and let t be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere. The normal to the sphere is simply

    n = (y - c) / |y - c| ,

where y = s + t d is the intersection point found before. The reflection direction can be found by a reflection of d with respect to n, that is

    u = d - 2 (n·d) n .

Thus the reflected ray has equation

    x = y + t u .

Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate colour, taking into account how the colour of the original light source and the one of the sphere are combined by the reflection. This is merely the mathematics behind the line-sphere intersection and the subsequent determination of the colour of the pixel being calculated; there is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
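The same derivation translated into C, as a minimal sketch (the vec3 type and helpers are assumptions for this example; the function returns the smallest positive t, or a negative value when the ray misses):

    #include <math.h>

    typedef struct { double x, y, z; } vec3;

    static double dot3(vec3 a, vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    static vec3 sub3(vec3 a, vec3 b)
    {
        vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
        return r;
    }

    /* Smallest positive t with |s + t d - c|^2 = r^2, for unit d.
       Returns -1.0 if the ray misses the sphere. */
    double ray_sphere(vec3 s, vec3 d, vec3 c, double radius)
    {
        vec3   v    = sub3(s, c);                    /* v = s - c          */
        double b    = dot3(v, d);                    /* (v . d)            */
        double disc = b * b - dot3(v, v) + radius * radius;
        if (disc < 0.0)
            return -1.0;                             /* no intersection    */

        double root = sqrt(disc);
        double t0   = -b - root;                     /* nearer solution    */
        double t1   = -b + root;                     /* farther solution   */
        if (t0 > 0.0) return t0;
        if (t1 > 0.0) return t1;                     /* origin inside      */
        return -1.0;                                 /* sphere behind ray  */
    }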


Floating Horizon Algorithm :-
The equation of a surface in 3D is f(x, y, z) = 0. In the present case, using the floating horizon algorithm, a set of vertical planes parallel to the xy plane is created. The intersection of the surface f(x, y, z) = 0 with each vertical plane is a curve given by g(x, y) = 0. For the surface shown in Fig 6, four such vertical planes are taken, and Fig 7 shows the four intersection curves. Here the y value of a curve at a given x value is compared with that of the other curves: if the y value of one curve is less than the y value of a successive curve at that x, that portion of the lower curve will be hidden. Thus the resultant surface obtained from this set of intersection curves shows only the visible surface, the hidden portions having been suppressed. A short sketch of the horizon bookkeeping is given after the figures.

Fig 6


Fig 7
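A minimal C sketch of the horizon bookkeeping. It assumes the planes are processed from the nearest to the farthest and that a curve_y() helper evaluates the intersection curve on plane p; only the upper horizon is maintained here, whereas a full implementation also keeps a lower horizon:

    #define NX 640                       /* screen columns */

    float curve_y(int plane, int x);     /* assumed: y of curve on given plane */
    void  plot(int x, float y);          /* assumed: draw one curve point */

    void floating_horizon(int num_planes)
    {
        float horizon[NX];               /* highest y drawn so far per column */
        for (int x = 0; x < NX; x++)
            horizon[x] = -1.0e30f;       /* nothing drawn yet */

        for (int p = 0; p < num_planes; p++)      /* nearest plane first */
            for (int x = 0; x < NX; x++) {
                float y = curve_y(p, x);
                if (y > horizon[x]) {    /* rises above the floating horizon */
                    plot(x, y);
                    horizon[x] = y;      /* raise the horizon */
                }                        /* otherwise this portion is hidden */
            }
    }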



