
3D projection on cubes

Thesis

By

[Name]

Dated

Thesis statement
University

Name

ACKNOWLEDGEMENTS
Everyone helped me loads!

Name

Contents

ACKNOWLEDGEMENTS............................................................1
ABSTRACT............................................................................3
PROBLEM STATEMENT.............................................................5
LITERATURE REVIEW..............................................................6
RESEARCH METHODOLOGIES..................................................15
FINDINGS............................................................................20
REQUIREMENT SPECIFICATION.........................................24
DESIGN SPECIFICATION....................................................28
EVALUATION.....................................................................33
CONCLUSION.......................................................................39
BIBLIOGRAPHY....................................................................40
APPENDIX A......................................................................43
APPENDIX B......................................................................54


ABSTRACT
Computer vision is an extensive field in which various approaches have been developed, and the projection of objects has received substantial attention in recent times. Object projection has relied heavily on the acquisition of geometry, which has largely limited development to the reconstruction of static images. The 3D projection of objects in motion has recently attracted attention with advances in technology, yet constructing 3D projections on irregular or dynamic surfaces remains a challenging problem. Novel techniques have been presented to speed up the projection of static surfaces or objects, but the case of dynamic objects is still relatively new. In the present work, we present the design and implementation of a 3D projection system that transforms any irregular shape into a dynamic video display. The presented approach is based on the idea of mapping a dynamic video onto any object, irrespective of its shape and size. The details of masking and warping, as well as the respective software and hardware such as the 3-chip DLP projector, are also discussed.


Chapter 1
PROBLEM STATEMENT
Video projections are used extensively in today's professional markets. In an ideal case, the projection of objects in video processing allows a scene to be broken into various components depending on their colour specification. This is mainly achieved with the help of various computer software and graphics tools. The projection lenses can be arranged in such a way that the 2-dimensional projected image is distributed across the 3 dimensions of the projector assembly, and the cubical screen allows various rotation angles of the images to add another dimension in the plane. The 3-dimensional projections used in modern technology are based on graphics processors and platforms that map the projection onto a particular 3D engine and later project it onto the screen. However, the addition of a rotational dimension can display another dimension in the 2-dimensional planar images that rotate among a set of screens, giving a complete 3-dimensional view of the images.
The overall objective is the compilation of impressive and useful examples of video mapping techniques, with proof-of-concept projection mapping systems applied and pooled together. The project will use video projection mapping as an exciting initiative that transforms any surface into a dynamic video display. We shall also use specialized software for warping and masking of the projected image to create a perfect fit on an irregularly shaped screen.
The concept of video projection mapping covers the projection of virtual textures onto physical objects in real time. The projection relies on high-intensity video projectors to generate an immersive illusion. An example of this process is a surface of tetrahedrons constructed with all sides visible and registered to the projector; the augmentation then depicts fading colours in the form of textures. This example considers a very simple surface, but in real life the surfaces can be complex or dynamic; buildings are among the best examples of complex surfaces. The main focus of this work is to implement video projection on surfaces in the form of cubes. Moreover, most previous studies have focused on projection mapping onto static surfaces or scenes. In this work, we attempt to implement the 3D projection mechanism on dynamic surfaces.
Such projections can only be accomplished by creating virtual worlds in which textures are created and aligned with the physical world. The textures used are normally limited in size and shape. Currently, textures are created mainly with the help of computer software, while the alignment of the binary masked images is performed manually. The process begins by mapping a binary mask image onto the physical shape or scene with the help of a projector. Later, the mask or textures are applied to the video content. Various advanced techniques are capable of reconstructing whole scenes in the form of a 3D model; these techniques choose a viewpoint of the projector into the physical scene through 3D mapping.
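As a minimal sketch of the masking step described above, the fragment below applies a binary mask to a video frame stored as an RGB buffer. The frame size, the Pixel struct and the in-memory layout are illustrative assumptions, not details of the system described in this thesis.

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical 24-bit pixel; the real pipeline may use a different layout.
struct Pixel { std::uint8_t r, g, b; };

// Keep only the pixels of `frame` where `mask` is non-zero, black out the rest.
// Both buffers are assumed to be width*height values in row-major order.
void applyBinaryMask(std::vector<Pixel>& frame,
                     const std::vector<std::uint8_t>& mask,
                     int width, int height) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * width + x;
            if (mask[i] == 0) {            // outside the projection surface
                frame[i] = Pixel{0, 0, 0}; // project black, i.e. nothing
            }
        }
    }
}

int main() {
    std::vector<Pixel> frame(4, Pixel{255, 255, 255});   // 2x2 all-white frame
    std::vector<std::uint8_t> mask = {1, 0, 0, 1};       // keep two diagonal pixels
    applyBinaryMask(frame, mask, 2, 2);
    for (const Pixel& p : frame) std::printf("(%d,%d,%d) ", p.r, p.g, p.b);
    std::printf("\n");
    return 0;
}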


Chapter 2
LITERATURE REVIEW
Review of existing material to identify research methods and strategies that may be applied.

Projection Mapping, Rear Projection and 3D Immersive Video: A Review of Literature
In the past two decades, the world has witnessed revolutionary changes in
virtually all areas of human activity. This development has been possible due to the
technological revolution that has swept across the world during the noted period. Film
developers, TV producers, ad developers, video game developers as well as animated
media developers currently have a wide range of options in terms of technological
concepts that can assist them to produce movies, shows or ads that could not be
imagined a few decades ago. The advent of three-dimensional media (3D) has made
things even better. This advancement has been possible courtesy of concepts such as
projection mapping, rear projection and 3D immersive video technology among many
others. Unfortunately, information about these technological concepts is largely limited
to the technical circles. As a result, this literature review explores the three concepts in
an attempt to cast light on their underlying principles. It hopes to bring a better understanding to the lay public of what these concepts really are and the role they play in the media industry.
Projection Mapping
Rowe (2014, p.) defines projection mapping as a group of techniques for
projecting imagery onto physical three-dimensional objects in order to augment the
object or space with digital content. In other words, projection mapping is a modern
technique that allows people to use real-life objects, which in most cases are irregular, as

surfaces upon which they can project digital content of their choice. The technique was initially referred to as video mapping. It employs special software that is specifically designed to facilitate its use. It is important to note at this point that, contrary to the rather confining definition of Rowe, projection mapping can also be done on two-dimensional objects. The idea here is that as long as there is a surface upon which the projection can be made, the stage is set for projection mapping. This implies that it does not differ greatly from conventional projection techniques. The key aspect that sets it apart is that while conventional projection requires a regularly shaped surface such as a rectangle or square to serve as the display, projection mapping works with just about any surface, no matter how complex. The technology helps the projector to fit the image perfectly on any irregular surface.
Although projection mapping is considered new, the journey towards its development started in the 16th century with Giovanni Battista Della Porta's successful projection of a live but inverted image onto a flat wall in a dark chamber. Since then the technology has improved gradually, growing in both sophistication and applicability to its current state. Video mapping, from which projection mapping has evolved, became a reality in the 1960s and has since been widely used in a variety of industries including art, advertisement, and entertainment. Projection mapping, the most modern state of this technological concept, started taking root in 1969 when Disneyland stunned its visitors with scary animations projected onto skulls to make them appear as if they were alive. However, it was not until the 1990s that this technology elicited academic interest. Since then it has been a subject of research and improvement, which has made it more sophisticated and versatile. It suffices to say that today the technology has a wide range of applications across many industries; however, it makes more of a spectacle in the advertising industry than anywhere else.


This technological concept employs the basics of pre-existing concepts, but adds a new dimension and additional aspects to achieve its stunning results. This has been possible because, according to Bimber (2006), most modern augmented reality applications have turned their focus to mobility. All sorts of animations are now displayable on any type of surface. This is a complete departure from the traditional modes of projection, which required a specific shape for a display. Projection mapping, so to speak, wraps video footage around an irregular or 3D surface so that the edges of the footage do not spill onto the surroundings of the display surface.
To make this concept work, the first step involves taking high-resolution pictures of the object upon which the projection is to be done. The idea behind the images is that a detail-filled façade of the object is needed, especially if it is a building. The images, which should be taken from different angles to give different perspectives of the same object, are necessary because for the projection to be effectively wrapped around the object, the object must first be mimicked in a computer. The pictures therefore help the mapping software to build a model of the object. This can be achieved using software such as Adobe Photoshop and After Effects, which assist in correcting the perspective of the image of the object. This step is necessary to minimize distortion of the projected footage, which is a common problem associated with this technology. The fine details vary depending on the software that is used to do the mapping, but one commonly employed package is MadMapper. It helps in the creation of several masks upon which the desired animations are drawn. The idea here is that once the masks are developed using the model built from the images, the content can be projected onto the object it was developed for. The results are often stunning, especially if the object upon which the projection is made is large.
The fine details of the technical aspects of projection mapping are complex in nature.
Nonetheless, they can be categorized into motion graphics applications, sound design,


real-time video applications, and projections (Ekim 2011, p.11). All these aspects have
to be seamlessly blended with each other to give good results. The basic concept behind
its success is to consider technology both as a tool and a medium (Ekim 2011). As a tool,
it facilitates the creation and modification of graphics as well as images while as a
medium, technology aids in the presentation of the created content to audiences.
Projection mapping has gained popularity and is currently used in numerous
areas of activity such as film, media and the visualization industry in general. The film
industry can use the technology to create an immersive effect. The media can use it in a
variety of applications as befits their needs. The military can use the technology for planning purposes: with pictures of an area targeted for a particular operation, the military can create a 3D virtual replica of the landscape and use it as if they were looking at the actual location. The most common use of this technological concept is in the advertising industry. Companies across the world are increasingly turning to projection mapping to display all sorts of information on the most unlikely of surfaces, creating spectacles that have never been witnessed before. Finally, the concept is also used by experts in the recreation of accident scenes to help understand the circumstances that surrounded them. This technological concept is revolutionary and, as
studies that seek to improve it continue, the future can only be brighter.
Rear Projection
Rear projection, also known as process photography, is the process of combining a pre-filmed background with a live or foreground performance. It is an in-camera cinematic effects technique that is largely used in the film industry. Its being referred to as an in-camera effect means that it is a technique carried out on a recording or negative before it is taken to the lab for any further modification. Any modification that occurs once a video recording or a negative has already left the camera does not qualify as an in-camera effect. This technique calls for a movie screen upon


which the pre-filmed video or background is projected. The screen should be positioned
between the projector and the foreground objects that are to be incorporated in the
resultant video. To a keen eye, process shots are easily recognizable because they have a
typical washed-out look that set them apart. The washed-out look results from the fact
that the pre-filmed background is projected on to a screen and then recaptured together
with the desired foreground objects.
Rear projection became a viable technological concept in the 1930s after
appropriate cameras and projectors were developed. It had been conceived earlier, but
could not be actualized due to technical difficulties. After its first use in 1930, the concept was refined over time, making it more effective. For example, in 1933, the use of three synchronized projectors was introduced to make the projections displayed on the screen brighter and more distinctive, so as to give them a closer resemblance to the foreground objects. As the years went by, the concept was further refined, leading to the emergence of concepts such as the travelling matte, which were incorporated to make the rear projection technique an effective tool for use in the film industry. Developments such as the Mansard Process also came about as a result of the efforts that were aimed at making rear projection as sophisticated as possible. However, it is
important to note that rear projection has significantly lost popularity due to the
emergence of better and cheaper ways of achieving the same effects that it was used to
produce. As a result, it is generally considered an old fashioned technique and is almost
obsolete. Very few individuals, if any, still have the capability of successfully using it in
film production.
The technique required actors or other desired foreground objects to be
positioned in front of a movie screen, while a projector that was positioned behind the
screen projected a reversed image of the background onto the screen. The idea behind
the reversal of the image from the projector was to make it appear normal from the


foreground. The background could be stationary or moving, but in both cases, it was
referred to as the plate because it was meant to serve as the surface or setting on which
the events in the foreground were taking place. As noted already, no matter how hard the
film producers try to make the background bright, there is always an element of
disjointedness in the resultant video. The background always shows some element of
faintness, commonly referred to as a washed-out appearance.
It was most frequently used to give the illusion that actors were in a moving vehicle when, in actual sense, the moving vehicle was filmed separately. It also found widespread application in scenes that required film producers to give the impression that someone was looking out of the window of a moving vehicle, aircraft or spaceship. Examples of movies that used this technology in its heyday include 2001: A Space Odyssey, Pulp Fiction, Aliens, Terminator 2: Judgment Day, the Austin Powers film series and Natural Born Killers. Each of these movies employed the technique to achieve a different goal, showing that the technique was used for a wide array of reasons. The first and most dominant reason for its use was to avoid costly on-location video capture. Other reasons behind the development and use of this technology include the creation of special effects that could not be effectively created through normal filming. For example, in Natural Born Killers it was used to emphasize the subconscious motivation of the characters, while in the Austin Powers film series it served the purpose of giving the movies the impression of old spy films. Thus it is apparent that the technique was used for a variety of reasons besides the creation of the illusion of motion. Nevertheless, there is little chance of future development because current film producers pay little attention to the technique.
3D Immersive Video
Immersive video technology is a technological concept that is gaining popularity
as 3D video continues to become ubiquitous. The technology allows a user to have total


control over the viewing direction of a 3D video. They can rotate it around and view it from above or even sideways. This playback capability is made possible by the fact that during the development of immersive video, several cameras are used to capture an event at the same time. The various perspectives are then brought together through advanced reconstruction technology to form a 3D video. The user is then given the freedom to rotate the video as they please through remote control. In addition to having control over the direction of viewing, the user also has control over the playback speed. In simple terms, immersive video technology places total control of the playback experience at the discretion of the user.
Research on immersive video technology has largely focused on the
visualization of virtual environments on special, large-scale projection systems, such as
workbenches or CAVEs (Fehn et al. 2002, p.705). A lot of technical work goes into this
process and as a result, researchers from the field of immersive technology have had to
work closely with their counterparts from 3D video field. The technology was conceived
long ago, but has been difficult to actualize due to the numerous hurdles that researchers
have faced over the years. Even after the invention of the 2D color TV and similar
display devices such as computer screens, it has still taken quite a bit of work to come up
with the 3D multi-view playback.
It takes special software to reconstruct and render the multi-camera perspectives, and the fact that the development of this kind of video requires multiple cameras to capture the actual event is the main reason why the technology is not ubiquitous just yet. Huang et al. (2008) explain that a 3D space is carved by projecting silhouettes from the multiple camera perspectives into a space through the use of projection matrices, which are secured by camera calibration. This process involves a
series of calculations that help to estimate the shape of the 3D object even before it is
created. The calculations help to eliminate unnecessary and easily rectifiable errors.


They add that even though this approach is easier to use in virtual spaces, it is highly limited when it comes to the real world. Therefore, the process of developing 3D objects or
video and rendering them to display surfaces for audiences is a complex and constantly
evolving endeavor. Experts are constantly at work to make the technology better.
The complex nature of this technology is easily notable in the limitations that we see all around us. While 2D technology requires minimal additional effort to view, it is still common for people to be required to put on special glasses in order to view 3D content that has already been developed. This shows that although the world shows a desire to move in this direction, technical as well as equipment-related challenges are still all over the place. It will be quite some time before 3D immersive video technology is usable all over the world. It is, however, important to point out that high-capability smartphones such as the iPhone 4 and 4S as well as the Galaxy Nexus have applications that give them the ability to support 3D immersive video playback.
To emphasize how complex and demanding 3D immersive video technology is, Huang et al. (2008) explain that they employed several PCs and up to eight IEEE 1394 cameras in a 5.5 m x 5.5 m x 2.5 m space. The implication here is that if the process were to be repeated in a bigger space such as a stadium, numerous cameras would be needed. Incidentally, the infrastructure within state-of-the-art stadiums makes it possible to use this technology. The case is, however, different if an attempt were made to use this technology to process newscasts for television audiences. In its current state, not only would the technology require numerous cameras recording the same events for 3D reconstruction, but it would also be extremely expensive, making it unviable.
As already noted, research is ongoing and the technology is likely to become
cheaper to work with and also easier to operate in terms of the equipment required for it
to work. A breakthrough step in this respect would be to develop a camera with the
capability of capturing angles of up to 180 degrees or beyond so that only two cameras


would be enough to capture an angle of 360 degrees. In addition, such a technology


ought to be accompanied by improved reconstruction and rendering techniques. Finally,
cheap display equipment with 3D capability will also be necessary. If and when these steps are taken, 3D immersive video will be ready to take the world to the next level of audio-visual engagement.


Chapter 3
RESEARCH METHODOLOGIES

The 3-dimensional projection on cubes is a concept used to display holographic or 3-dimensional imaging on 3-dimensional objects using 2-dimensional projections. The concept used in this particular research methodology is known as graphical projection. Graphical projection is a method used to represent 3-dimensional objects in 2 dimensions and map them into three different perspective views that represent the various elements of these 2-dimensional projections. Graphical projections are used in various engineering designs and engineering drawings. It is also known as axonometric projection, in which the three coordinate axes are drawn 120 degrees apart (Sharpe, 2002).

Figure-1: axonometric projection of the entire cube at an angle


The 3-dimensional projection is achieved by collecting the data of various perspective views and joining the common elements (the union) of these views into the entire view. The projections are developed with the use of various projectors that are linked together to form a 3-dimensional projection of the image. Computer-aided design (CAD) software is used to view the three perspective views of the object and then project the

2-dimensional planar object into a 3-dimensional form. There are two forms of projection view:

Perspective projection
Parallel projection

Figure-2: the multi-dimensional (4D) viewing of the cube


Figure-2 represents the 4-dimensional perspective view of the mapping using a cube. The inner cube is the base element of the projector, which maps the entire image by joining the various annotated elements. In simpler words, the smallest elements of the image are broken into cubes (pixels), and the perspective view used by the computer-aided design software can convert the 2-dimensional image into a 3-dimensional form using cubes as building blocks. The regrowth of the cubes around a standard scaling point determines the quality of the projected 4-dimensional mapping. The quality of the projected image depends on the detail of the pixel (cube) and the resolution (scaling factor) that determine the quality of the regeneration of the projected image (Conturo, 1999).


Figure-3: the perspective viewing of a cube


The perspective projection is the 3-dimensional view from a particular set of perspective coordinates. In simpler words, a perspective view is the lens view, whether it is the lens of a camera, a projector or even an eye. The image shown in Figure-3 is a perspective view of the 3-dimensional image of a cube. The point where the lines extended from the edges of the cube converge is the perspective point, or perspective coordinate. The question remains, however, how this perspective view actually works: how to understand the 3-dimensional view, process the 3-dimensional image, and then map it onto the various projectors on the cubes so that each displays a different image and together they map a complete 3-dimensional object. A 3-dimensional map consists of 3 translational axes and 3 rotational angles on a 3-dimensional plane that map the projection onto a particular object. There are, therefore, several methods of mapping a 3-dimensional projection using the perspective projection on the particular cube assembly that exhibits the 3-dimensional viewing of the image.
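As a minimal sketch of the perspective projection just described, the fragment below projects the eight corners of a unit cube onto a 2-dimensional image plane using a simple pinhole model; the focal length and cube coordinates are illustrative assumptions, not values taken from the projector hardware discussed in this thesis.

#include <cstdio>

struct Vec3 { double x, y, z; };

// Pinhole perspective projection: divide by depth and scale by a focal length.
// Assumes the camera looks down the +z axis and all points have z > 0.
void projectPerspective(const Vec3& p, double focalLength, double& u, double& v) {
    u = focalLength * p.x / p.z;
    v = focalLength * p.y / p.z;
}

int main() {
    const double f = 1.0;                    // assumed focal length
    const Vec3 cube[8] = {                   // unit cube placed a few units away
        {-0.5, -0.5, 2.5}, {0.5, -0.5, 2.5}, {0.5, 0.5, 2.5}, {-0.5, 0.5, 2.5},
        {-0.5, -0.5, 3.5}, {0.5, -0.5, 3.5}, {0.5, 0.5, 3.5}, {-0.5, 0.5, 3.5}
    };
    for (int i = 0; i < 8; ++i) {
        double u, v;
        projectPerspective(cube[i], f, u, v);
        std::printf("corner %d -> (%.3f, %.3f)\n", i, u, v);
    }
    return 0;
}

A parallel (orthographic) projection would simply drop the division by z, which is why near and far faces of the cube appear the same size in axonometric drawings.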
The 3-dimensional cubic projection mapping consists of six elements: three translational projections and three rotational projections about the respective x, y and z axes.

These elements map the entire 6x6 matrix in the form below. Each entry is a six-tuple (x, y, z, θ1, θ2, θ3) of the three translations and the three rotation angles, and each row steps one degree of freedom through six values while the other five are held at their reference values:

Row 1 (x): (x1,y1,z1,θ1,θ2,θ3) (x2,y1,z1,θ1,θ2,θ3) ... (x6,y1,z1,θ1,θ2,θ3)
Row 2 (y): (x1,y1,z1,θ1,θ2,θ3) (x1,y2,z1,θ1,θ2,θ3) ... (x1,y6,z1,θ1,θ2,θ3)
Row 3 (z): (x1,y1,z1,θ1,θ2,θ3) (x1,y1,z2,θ1,θ2,θ3) ... (x1,y1,z6,θ1,θ2,θ3)
Row 4 (θ1): (x1,y1,z1) held fixed while θ1 steps through its six values
Row 5 (θ2): (x1,y1,z1) held fixed while θ2 steps through its six values
Row 6 (θ3): (x1,y1,z1) held fixed while θ3 steps through its six values

Figure-4: The entire mapping of the 6x6 matrix of a single building block

The entire mapping is completed in a 6x6 matrix, and the data are stored in the form of matrices. The technique is to distribute the various elements of the matrices and form relationships within the projectors, so that the data are projected through the various panels and projectors onto these 3-dimensional cubes, which in turn map the 3-dimensional objects onto the cubic screen. The projection drawing of the 3-dimensional objects is completed by applying the isometric view to the elements of the cube, together with the elements and coordinates of the cube with respect to that isometric view. The data are mapped into the matrix per element in the various coordinates by translating in a particular direction while keeping the rotational projections steady. This gives a complete set of data that maps the 3-dimensional data of the cube with respect to a particular rotational orientation. Each rotational angle then rotates through 360 degrees, and at each degree the same mapping is completed for that particular rotational orientation. The technique is shown in Figure-5 (Landau, 2005).
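As a minimal sketch of this mapping step, the fragment below fills a 6x6 grid of six-tuples in which each row steps one of the six degrees of freedom, mirroring the matrix of Figure-4. The step sizes and reference pose are illustrative assumptions rather than values prescribed by the methodology.

#include <cstdio>

// One entry of the 6x6 mapping matrix: three translations and three rotations.
struct Pose { double x, y, z, t1, t2, t3; };

int main() {
    const Pose ref = {0, 0, 0, 0, 0, 0};            // assumed reference pose
    const double step[6] = {1, 1, 1, 60, 60, 60};   // assumed step per DOF (units / degrees)

    Pose grid[6][6];
    for (int dof = 0; dof < 6; ++dof) {             // row: which degree of freedom varies
        for (int k = 0; k < 6; ++k) {               // column: k-th value of that DOF
            Pose p = ref;
            double* field[6] = {&p.x, &p.y, &p.z, &p.t1, &p.t2, &p.t3};
            *field[dof] = (k + 1) * step[dof];      // vary only this degree of freedom
            grid[dof][k] = p;
        }
    }
    // Print the first row: x varies, everything else stays at the reference.
    for (int k = 0; k < 6; ++k)
        std::printf("(%g, %g, %g, %g, %g, %g)\n",
                    grid[0][k].x, grid[0][k].y, grid[0][k].z,
                    grid[0][k].t1, grid[0][k].t2, grid[0][k].t3);
    return 0;
}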


Figure-6: The 6-degree-of-freedom mapping of the cube in the respective translation and rotation axes

The resolution of the particular images will be determined by the resolution of the camera, which captures the maximum detail per pixel. The amount of data stored will also affect the processing speed, so in order to increase the quality of the projected image, a powerful processor with a fast processing speed is required. The projectors are installed at these three perspective views to project each view of the projected image from its own perspective. The audience observing the 3D projector screen then sees the 3-dimensional holographic images in the form of cubes.

Chapter 4
FINDINGS
The 3-dimensional cubic projections are obtained using the 3-point perspective view shown in Figure-7.


Figure-7: 3-point perspective graphical projection of a 3-dimensional cube


Notice the three bending angles of the lines around the plane in Figure-7. The three rotational angles form the perspective projection of the cube, which links the 4 faces of the cube. The technique is to map the projection onto the 6x6 matrix per pixel and to annotate the values of each pixel with the three dimensional coordinates and three rotational coordinates.
A 180-degree rotation of the perspective view yields the opposite side of the x, y and z coordinates of the cube, with the same translation values. This technique allows the various faces of the object to be mapped by predicting the hidden ones. The three-point view is the method of perspective viewing of symmetrical objects, and the following methodology can help in implementing the 3-dimensional projections on the cubes.
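As a small worked check of this claim, rotating by 180 degrees about the z-axis with the standard rotation matrix negates the x and y coordinates of every point, so a point on one face is carried to the corresponding point on the opposite face:

\[
R_z(180^\circ) =
\begin{pmatrix}
\cos 180^\circ & -\sin 180^\circ & 0\\
\sin 180^\circ & \cos 180^\circ & 0\\
0 & 0 & 1
\end{pmatrix}
=
\begin{pmatrix}
-1 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & 1
\end{pmatrix},
\qquad
R_z(180^\circ)\begin{pmatrix}x\\ y\\ z\end{pmatrix}
=
\begin{pmatrix}-x\\ -y\\ z\end{pmatrix}.
\]

Rotations of 180 degrees about the x- or y-axis act analogously on the remaining coordinate pairs.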


Figure-8 (a): The 4-projector, two motion-Kinect projection layout for 4-dimensional screening on cubes
The 4-dimensional screening is a common technique used in motion rides and other recreational projections, in which the audience is introduced into a cube-shaped arena and the four projections are made through projectors connected at right angles to each other (a rhombic formation), while the Kinect controls the motion of the 4-dimensional imaging onto the cubes. The screening or projection of the 3-dimensional image is rectified with an additional dimension that depicts its degree of freedom, which makes it more impressive. Few people know that the technique is similar to the old camera projection used in various movies to keep the film rolling in order to cut down the editing cost of connecting the films: an entire scene of 15-20 minutes was shot in one cut.


Figure-8 (b): The 3-dimensional colour mapping on the cube using three projectors at 120-degree separation
Another methodology for mapping the 3-dimensional projections onto the cube is the use of three primary-colour projectors, shown in Figure-8 (b), which break the image into three colour textures: magenta, cyan and yellow. These three colours project the image along with the respective colour saturation; the 2-dimensional projection, together with the RGB and intensity specification of each projector on its different perspective projection, allows the three-dimensional projection of the image. A simple Microsoft Kinect device can help in setting up the geometry of the cube for the projections.
In this case the projectors remain stationary while the cube rotates in order to mix the coloured projections, enabling the cube-shaped holographic illustration of the projected image. The colour projectors can cut down the need for axonometric projection of the different faces of the cube.
The Kinect will automatically determine the orientation of the image from the 3-dimensional perspective projections it stores in its data matrices. This will allow the

user to add an additional 6x2 matrix holding the RGB and saturation values for the respective image, so that it can be broken down into its yellow, magenta and cyan components and projected by the yellow, magenta and cyan projectors to show the colourful projection of the respective image.
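As a minimal sketch of this colour decomposition, the fragment below converts an 8-bit RGB pixel into the cyan, magenta and yellow intensities that the three colour projectors would reproduce. The simple complement formula is an illustrative assumption and ignores the saturation handling described above.

#include <cstdint>
#include <cstdio>

struct RGB { std::uint8_t r, g, b; };
struct CMY { std::uint8_t c, m, y; };

// Naive RGB -> CMY complement: each subtractive channel is 255 minus the
// corresponding additive channel.
CMY toCMY(const RGB& in) {
    return CMY{ static_cast<std::uint8_t>(255 - in.r),
                static_cast<std::uint8_t>(255 - in.g),
                static_cast<std::uint8_t>(255 - in.b) };
}

int main() {
    const RGB pixel = {200, 50, 120};    // assumed sample pixel
    const CMY out = toCMY(pixel);
    std::printf("cyan=%d magenta=%d yellow=%d\n", out.c, out.m, out.y);
    return 0;
}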
The concept is just like carving a sculpture, in which the mould is rotated and the image is carved by applying different pressure upon the material. The 6-degrees-of-freedom cube is the mould, which rotates into different orientations, while the remaining 6x2 matrix supplies the various colour projections of the image (Jackowski, 2006).


Chapter 5
REQUIREMENT SPECIFICATION

The requirement specification includes the cubic mapping platform along with panoramic viewing using the different patches of images stored. The methodology is broken into two major steps.
1. The first step uses a high-quality camera to capture the 3-dimensional image from a particular perspective view.
2. The second step is the perspective projection onto the cube to show the 3-dimensional view of the entire image.
The first step has been discussed briefly in the previous chapters: the entire structure is broken into a 6x6 and then a 6x2 matrix to store the 3-dimensional data in storage (Tsai, 2011).

Figure-8: Tesseract projection of the nodes of the three-dimensional cubes


A tesseract, in geometry, is the 4-dimensional analogue of a cube, shown in Figure-8. It is bounded by 8 cubical cells joined at their square faces, forming a convex 4-polytope, and is also known as a hypercube. The structure can be seen as an additional cube grown in the same formation, as if regrowing from every node.


Figure-9: the stereographic projection of the tesseract


The stereographic projection of the tesseract allows the edges to curve and form a ball-like formation, as shown in Figure-9. This method greatly enhances the rounded projection of objects mapped in cubical form. By continuing the entire structure, using the same polytope methodology of regrowth from the nodes, an entire 3-dimensional structure can be developed. The fourth dimension can be used as a perspective view that is regulated by the computer graphics to show a regular projection of the cubic formation and change the shape of the projection mapped onto the screen (Santoro, 1985).
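As a minimal sketch of working with the tesseract, the fragment below generates the 16 vertices of a unit hypercube and projects them from 4D to 3D with a simple perspective divide on the w-coordinate. The viewer distance is an illustrative assumption, and a true stereographic projection would use a different mapping.

#include <cstdio>

struct Vec4 { double x, y, z, w; };
struct Vec3 { double x, y, z; };

// Perspective-style projection from 4D to 3D: scale by 1 / (d - w),
// where d is an assumed distance of the 4D "camera" from the origin.
Vec3 projectTo3D(const Vec4& p, double d) {
    const double s = 1.0 / (d - p.w);
    return Vec3{ p.x * s, p.y * s, p.z * s };
}

int main() {
    const double d = 3.0;              // assumed viewer distance along w
    for (int i = 0; i < 16; ++i) {     // 16 vertices: all sign combinations
        Vec4 v = { (i & 1) ? 1.0 : -1.0,
                   (i & 2) ? 1.0 : -1.0,
                   (i & 4) ? 1.0 : -1.0,
                   (i & 8) ? 1.0 : -1.0 };
        Vec3 q = projectTo3D(v, d);
        std::printf("vertex %2d -> (%.3f, %.3f, %.3f)\n", i, q.x, q.y, q.z);
    }
    return 0;
}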

Figure-10: The 3-dimensional spherical projection using a 4-polytope stereographic projection of the tesseract

Architectural or chamfered projection mapping can be achieved by a net technique implemented by joining the various tesseracts in the form of Tetris-like blocks and building the

entire system on the basis of joining the facing surfaces of the hypercubes, forming a net-like formation that repeats the same pattern of similar building blocks made out of the tesseract. However, the projection technique used in the net projections is parallel graphical projection, which maps the edges of the entire structure during its growth using a standard orthogonal projection.

Figure-11: The net formation of the 4-dimensional tesseract using standard cubes with the 3-by-1 regrowth technique
The 3-by-1 block growth of the tesseract net shown in Figure-11 is a standard technique used by many graphic designers to regrow an entire structure of a symmetrical or architectural view with definite lines or geometry. The technique is known as the {3, 1} orthogonal tesseract graphical projection. In architectural study the technique is known as the paired-trees formation, in which a 3-dimensional parallel-view projection is achieved by using the {3, 1} orthogonal tesseract graphical projection. The wireframe structure of the net formation follows the {3, 1} paired-trees graphical projection.


Figure-12: The wireframe of the cross-like formation of the paired tree of the parallel graphical projections using the {3, 1} orthogonal projection
The mapping technique is similar to the orthogonal graphical projections. The fourth dimension used in the hypercube parallel graphical projections is the same as in the stereographic projection of rounded shapes: the mapping involves rotating the parallel orthogonal view according to the graphical software installed, which projects the image of the structure or architecture as per the data stored in the map (Fernando, 2003).

Figure-13: The truncated tesseract of the {3, 1} orthogonal projection using the truncated octagonal cubes

The truncated tesseract is a hybrid of the orthogonal graphical net formation and the 3{4}x{} and 4{3}x{} net formations as building blocks from both the {3, 4} and {4, 3} tesseract cells.


Chapter 6
DESIGN SPECIFICATION

The design specification covers the various steps of obtaining the image from the lens of the camera. The processing technique involves storing the bit operations in the form of matrices holding the various credentials such as the header, the 24-bit colour format, and an entire 2048x1800 wide-frame image of approximately 3.7 million pixels. This data is held in fast, volatile memory: approximately 10 GB of random access memory (two 4 GB DDR3 1600 MHz modules and one 2 GB DDR3 1600 MHz module) is sufficient for processing the image. The orthogonal projections of the figure will be broken into two views according to the images: the stereographic panoramic view for rounded images, determined by the perspective projections using the tesseract building blocks, while the architectural projections will be achieved using the truncated tesseract of the orthogonal projection with the 3{4}x{} and 4{3}x{} net formations discussed earlier. The design specification is broken into five steps, from image storage up to projection with colour and anti-aliasing support (Greene, 1986).
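As a quick back-of-the-envelope check of the storage requirement (an estimate, not a figure from the specification), one 24-bit frame at this resolution occupies roughly

\[
2048 \times 1800 \times 3\ \text{bytes} \approx 1.1 \times 10^{7}\ \text{bytes} \approx 10.5\ \text{MB},
\]

so 10 GB of RAM can hold on the order of a thousand uncompressed frames plus working buffers.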


Figure-14: The storage of data elements in a 32 bit palette of 256-color format.


The video-mode data are stored using the standard 16x8 VGA processing of a 256-colour palette format; however, there are many methods of storing video data. The small-screen data type with a 4-bit word is one of the methods described in Figure-14. Each pixel in the word is represented by 8 bits (one byte). The index of the pixel refers to the header of the colour palette that represents the 256 colours. The RGB value is represented in the 0x00FF0000 (hexadecimal) format, in which the leading byte (the first two hexadecimal digits) represents the transparency. The 32-bit array is processed as 4 sets of bytes, with bit positions counted from 0 to 31, as shown in Figure-15.
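As a minimal sketch of this pixel layout, the fragment below packs and unpacks a 32-bit value in the 0xAARRGGBB arrangement implied by the 0x00FF0000 example above, using plain bit shifts; the helper names are illustrative, not part of any particular graphics API.

#include <cstdint>
#include <cstdio>

// Pack alpha, red, green, blue bytes into one 32-bit word: 0xAARRGGBB.
std::uint32_t packARGB(std::uint8_t a, std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return (static_cast<std::uint32_t>(a) << 24) |
           (static_cast<std::uint32_t>(r) << 16) |
           (static_cast<std::uint32_t>(g) << 8)  |
            static_cast<std::uint32_t>(b);
}

// Extract the red channel again by shifting and masking.
std::uint8_t redOf(std::uint32_t argb) { return (argb >> 16) & 0xFF; }

int main() {
    const std::uint32_t p = packARGB(0x00, 0xFF, 0x00, 0x00); // matches the 0x00FF0000 example
    std::printf("packed = 0x%08X, red = %d\n", (unsigned)p, (int)redOf(p));
    return 0;
}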

Figure-15: The data array of 32-bit processor.


The technique is to apply various bitwise operators such as OR, XOR and NAND to these word arrays and to set each value according to the colour of each pixel.

The iterated function system (IFS) is a mapping format with some built-in standard functions used to formulate a map of a particular structure such as a sphere, tree, Tetris-like block or rhombic structure. The IFS fractals are sets of 2-dimensional points on a planar space that provide the entire IFS map. In simpler words, the IFS fractals are the building blocks for mapping the entire structure. In our particular design, the IFS fractals are already divided into small tesseract functions with a fixed coordinate system according to the graphical viewing orientation. The concept of the IFS is to develop linear or non-linear functions and apply various operators, such as the union or intersection of the fractal elements, to develop the relations between them and so map the entire IFS system for the required perspective view. For example, the 6x6 and 6x2 matrices used to store the data in the data array can be expanded by applying the union and ADD operators to form the net formation described in Figure-12. Here f(p) = (u, v) is the iterated function of a square map with two elements in the x and y coordinates and a linear function of the elements (a, b, c and d).
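As a minimal sketch of such an iterated function system, the fragment below repeatedly applies one 2-dimensional linear map f(x, y) = (ax + by, cx + dy) of the kind referred to above. The coefficients are arbitrary illustrative values, and a full IFS would choose among several such maps at each step.

#include <cstdio>

struct Point { double x, y; };

// One linear IFS map with coefficients (a, b, c, d): f(p) = (ax + by, cx + dy).
Point apply(const Point& p, double a, double b, double c, double d) {
    return Point{ a * p.x + b * p.y, c * p.x + d * p.y };
}

int main() {
    // Assumed contractive coefficients; they keep the iterated orbit bounded.
    const double a = 0.5, b = 0.2, c = -0.2, d = 0.5;
    Point p = {1.0, 1.0};
    for (int i = 0; i < 10; ++i) {
        p = apply(p, a, b, c, d);
        std::printf("iteration %d: (%.4f, %.4f)\n", i + 1, p.x, p.y);
    }
    return 0;
}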

The word count can be arranged in the square formation of the entire IFS fractals. In this case the functional element would be F(x, y, z, θ1, θ2, θ3), and its union with the colour function F(RGB, saturation) determines the entire function of a single pixel, which contains the information of the smallest 3-dimensional coloured element of the picture (Berkenkotter, 1993).


Texture mapping is the technique used to extrude a 2-dimensional texture or pattern onto a 3-dimensional form or platform (object). This method is used in many 3D graphics engines to map an entire texture onto 3-dimensional objects. Texture mapping involves mapping textures such as stone, wood or marble onto the environment. The same technique is used in 3D model projection, in which a 3-dimensional rather than a 2-dimensional image is mapped onto the 3D object.
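As a minimal sketch of a texture lookup, the fragment below samples a small texture with nearest-neighbour filtering at normalised (u, v) coordinates. The texture size and contents are illustrative assumptions, and real engines such as the Quake engine mentioned below add perspective correction and filtering on top of this.

#include <cstdint>
#include <cstdio>

// Nearest-neighbour sample of a width x height texture at normalised (u, v) in [0, 1].
std::uint8_t sampleTexture(const std::uint8_t* texels, int width, int height,
                           double u, double v) {
    int x = static_cast<int>(u * (width  - 1) + 0.5);
    int y = static_cast<int>(v * (height - 1) + 0.5);
    if (x < 0) x = 0; if (x >= width)  x = width  - 1;   // clamp addressing
    if (y < 0) y = 0; if (y >= height) y = height - 1;
    return texels[y * width + x];
}

int main() {
    // Assumed 4x4 single-channel checker texture.
    const std::uint8_t tex[16] = { 0, 255, 0, 255,
                                   255, 0, 255, 0,
                                   0, 255, 0, 255,
                                   255, 0, 255, 0 };
    std::printf("sample at (0.1, 0.9) = %d\n", sampleTexture(tex, 4, 4, 0.1, 0.9));
    return 0;
}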

Figure-16: A single block of the texture map used for 4-sided polygons in the Quake 3D engine
The Quake 3D engine used trapezoidal decomposition of the polygons, and applying the texture to the 2D polygon maps the texture onto the single block. With the same technique, we develop the texture mapping on the truncated and cantellated 16-cell formation for the projection of the architectural mapping using the orthogonal graphical projections.
16-Cell        Coxeter symbols            Coxeter-Dynkin / Schlegel diagram
Truncated      t0,1{3,3,4} = t{3,3,4}     B4 diagram (see figure)
Cantellated    t0,2{3,3,4} = rr{3,3,4}    B4 diagram (see figure)

Table-1: The specifications of the truncated and cantellated 16-cell hypercube polytopes for texture mapping

For the texture mapping of curved or round-surfaced objects, the S3 and H3 non-compact stereographic projections with symbol {p, 3, 3} are viewed from the perspective view, as shown in Table-2.
Spatial projection    Coxeter symbols    Cells / Schlegel diagram
Finite S3             {4,3,3}            (see figure)
Infinite H3           {7,3,3}            (see figure)

Table-2: The specifications of the finite and infinite polytopes for texture stereographic projection mapping

Chapter 7


EVALUATION

The different texture projection mappings of the cube require some specific details and anti-aliasing rendering software to evaluate the output result produced on the projector screen. The perspective viewing technique described in the previous chapters is used to map the textural structure. The evaluation of the various camera angles and perspective projections upon the screen makes it possible to formulate the additional dimension, or degree of freedom, in the 3-dimensional viewing of the 3-dimensionally mapped images (Holm, 2010).

Figure-17: The perspective view of the 2 dimensional cubes with a single viewpoint.
It is necessary to remember that the perspective view is the eye of the lens of the particular VGA projector projecting the image. Given the particular example used in the figure, the idea is to take the texture mapping developed in the previous chapter and project it onto the screen with a 3-node viewpoint.
The 3 projectors and one Kinect project onto the screen from a rooftop perspective view (a film-making view) and allow the various transitions of the screen in order to develop the perfect projection of the 3-dimensional images on the cube. There are two important factors in the projection method. One is the focal length, or lens zoom, which relates to the distance of the projected image from the projector; the focal length determines the clarity of the image projected on the cubes. The other factor is the

coordinate system of the focal length in the 3-dimensional planes, which varies with respect to the source code and the graphical projections of the image controlled by the processor and the Kinect (Mueller, 2000).
As per the example shown in the figure, the focal length connects the different nodes of the cube and projects them onto the screen. Keeping in view the various projections and texture mapping used in the graphical projection software of the previous chapters, the orthogonal net formation and the finite stereographic projection enable the panoramic distribution in the {3, 1} tree-growth formation. The other important aspect of the 3-dimensional projections is the rotation of the lens, which can imitate the symmetry of the entire 3-dimensionally mapped objects and project them onto the screen. The rotational matrices for the above example are shown in the equations below.
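For reference, the standard 3x3 rotation matrices about the x, y and z axes take the form:

\[
R_x(\theta)=\begin{pmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{pmatrix},\quad
R_y(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta\\0&1&0\\-\sin\theta&0&\cos\theta\end{pmatrix},\quad
R_z(\theta)=\begin{pmatrix}\cos\theta&-\sin\theta&0\\\sin\theta&\cos\theta&0\\0&0&1\end{pmatrix}.
\]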

These are the 2-dimensional angled projections combined to form 3x3 matrices; however, the actual matrices would be the 6x6 matrices discussed in the previous chapters. The easiest and fastest processing method is the formation of tetrahedra inside the tesseract, following the same graphical projection technique used for mapping. The rotation angle for the camera lens can use the same rotational-angle technique used in mapping the various nodes of the 3-node tesseract. This helps in extracting the building blocks of the fractals of the images and projecting them onto the image. The source code for this is given in Appendix A. The diagonals of the

base of the nodes and the top surface are joined together to form the tetrahedral formation.

Figure-18: Tetrahedral formation of the 3-nodal viewpoint of the tesseract cube
Z-buffering is a fast and commonly used buffering technique for improving the lighting, shadow and other camera effects upon the object, giving a distance representation of each image. The z-buffering function of the focal length with respect to distance is shown in the equation below.

Here d is the linear distance from the centre of the image to the focal length, and z is the average distance from the different nodes of the projected image. The z-buffer representation is the easiest, most effective and most practical methodology for depicting the distance of various objects from a particular perspective or parallel projection view.
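As a minimal sketch of the z-buffer idea, the fragment below keeps, for each pixel, only the fragment closest to the viewer by comparing depths against a per-pixel depth buffer. The resolution and the sample fragments are illustrative assumptions, and the depth-versus-focal-length relation of the equation above is not reproduced here.

#include <cfloat>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int width = 4, height = 4;                      // assumed tiny framebuffer
    std::vector<float>        depth(width * height, FLT_MAX); // z-buffer, initialised to "far"
    std::vector<std::uint8_t> color(width * height, 0);       // one grey value per pixel

    struct Fragment { int x, y; float z; std::uint8_t shade; };
    const Fragment frags[] = {                            // two fragments landing on the same pixel
        {1, 1, 5.0f, 100},
        {1, 1, 2.0f, 200},                                // closer, so it must win
    };

    for (const Fragment& f : frags) {
        const int i = f.y * width + f.x;
        if (f.z < depth[i]) {        // depth test: keep only the nearest fragment
            depth[i] = f.z;
            color[i] = f.shade;
        }
    }
    std::printf("pixel (1,1): depth=%.1f shade=%d\n",
                depth[1 * width + 1], color[1 * width + 1]);
    return 0;
}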


Figure-19: The z-buffer representation of 3-dimensional imaging
The z-buffer can easily represent the facing and orientation of a particular tesseract by determining the stored tetrahedral orientation and comparing the symmetry with the particular tetrahedron of the particular fractal cube of the tesseract. The entire system is shown in Figure-20.

Figure-20: Z-buffering for the tetrahedral polygon comparison with the graphics memory


The geometric post-process is the graphical anti-aliasing of the geometric surfaces of the textural environment. As per the graphics engine used for texture mapping (Quake technology), the MLAA and SRAA anti-aliasing methods are used in the post-processing step. The source code for the GPAA is given in Appendix B. The super-resolution buffer of SRAA buffers the geometry by enhancing and rendering the edges of the textures. A particular example of the anti-aliasing of the textural geometry is shown in Figure-21.

Figure-21: The comparison of the Geometry Buffer Anti-Aliasing (GBAA) with the
Geometry Post-processing Anti-Aliasing (GPAA) in the two pictures
The algorithm used in the post-processing involves rendering the edges to smooth the glitches of the projected image and the errors generated in the projected texture mapping by data loss in the low-resolution textural mapping. The functional algorithm is shown in Figure-22.


Figure-22: Geometry Post-processing Anti-Aliasing (GPAA) of the textural data by rendering the function from the diagonal mid-point
The buffering involves the texture mapping performing a scaling down from a particular pixel value stored in the matrices of the texture graphical mapping. The projected image is enlarged because of the z-buffering of the focal length by a certain factor. The textural glitches can be scaled down using a sampling factor for each texture, joining the entire arrays and then rendering with the post-processing to increase the detail by regrowth of the perspective and parallel projection, drawing a dotted line as a tolerance from the mid-point of the tesseract (Zwicker, 2006).
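As a minimal sketch of the scaling-down step, the fragment below downsamples an image by a factor of 2 with a simple box filter (averaging each 2x2 block), which is one way of attenuating the sampling glitches mentioned above. The image size and the choice of filter are illustrative assumptions rather than the GPAA or SRAA algorithms themselves.

#include <cstdint>
#include <cstdio>
#include <vector>

// Average each 2x2 block of `src` (w x h, single channel) into one output pixel.
std::vector<std::uint8_t> downsample2x(const std::vector<std::uint8_t>& src, int w, int h) {
    std::vector<std::uint8_t> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            const int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1]
                          + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = static_cast<std::uint8_t>(sum / 4);
        }
    return dst;
}

int main() {
    const std::vector<std::uint8_t> img = { 10,  20,  30,  40,
                                            50,  60,  70,  80,
                                            90, 100, 110, 120,
                                           130, 140, 150, 160 };  // assumed 4x4 input
    const std::vector<std::uint8_t> out = downsample2x(img, 4, 4);
    for (std::size_t i = 0; i < out.size(); ++i) std::printf("%d ", out[i]);
    std::printf("\n");
    return 0;
}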


Chapter 8
CONCLUSION
The 3-dimensional projection onto different objects is a modern technology in the field of communication and media, used to exhibit a particular scene in a 3-dimensional environment, which makes it more lifelike. The complex nature of this technology is easily notable in the limitations that we see all around us. While 2D technology requires minimal additional effort to view, it is still common for people to be required to put on special glasses in order to view 3D content that has already been developed. However, 3-dimensional glasses do not provide a complete 3-dimensional projection around us.
The 3-dimensional projection on a cube is a technique used by various computer software packages and graphic designers to enable 3-dimensional projections by mapping planar images. The projection lenses can be arranged in such a way that the 2-dimensional projected image is distributed across the 3 dimensions of the projector assembly, and the cubical screen allows various rotation angles of the images to add another dimension in the plane. The 3-dimensional projections used in modern technology are based on graphics processors and platforms that map the projection onto a particular 3D engine and later project it onto the screen. However, the addition of a rotational dimension can display another dimension in the 2-dimensional planar images that rotate among a set of screens, giving a complete 3-dimensional view of the images.
A similar concept is used in video processing and video display, which allows the various components of the video to be separated and processed individually based on their colour specification; the video is then projected by combining the colours from the projector. It is like making an artistic illustration by mixing the primary colours on the palette and finalizing the colour of the particular artwork; the saturation and hue can be set by


the stroke of the brushes.


The quality of the video can be improved by the various texture mappings using the Z-buffer. The z-buffer representation is the easiest, most effective and most practical methodology for depicting the distance of various objects from a particular perspective or parallel projection view, together with the post-processing anti-aliasing factor. The buffering involves the texture mapping performing a scaling down from a particular pixel value stored in the matrices of the texture graphical mapping, and the projected image is enlarged because of the z-buffering of the focal length by a certain factor.


BIBLIOGRAPHY
Berkenkotter, C., & Huckin, T. N. (1993). Rethinking genre from a sociocognitive
perspective. Written communication, 10(4), 475-509.
Bimber, O 2006, Projector-based augmentation, Emerging Technologies of Augmented
Reality: Interfaces and Design, 64-89.
Conturo, T. E., Lori, N. F., Cull, T. S., Akbudak, E., Snyder, A. Z., Shimony, J. S., &
Raichle, M. E. (1999). Tracking neuronal fiber pathways in the living human
brain. Proceedings of the National Academy of Sciences, 96(18), 10422-10427
Ekim, B 2011, A Video Projection Mapping Conceptual Design and Application:
Yekpare, Prof. Dr. Rengin Küçükerdoğan, 10.
Fehn, C et al. 2002, 3D analysis and image-based rendering for immersive TV
applications, Signal Processing: Image Communication, 17(9), 705-715.
Feldmann, I., Schreer, O., Kauff, P., Schäfer, R., Fei, Z., Belt, H. J. W., & Escoda, D
2009, Immersive multi-user 3d video communication, In Proceedings of
international broadcast conference (IBC 2009), Amsterdam, NL.
Fernando, R. & Kilgard M. J. (2003). The CG Tutorial: The Definitive Guide to
Programmable Real-Time Graphics. (1st ed.). Addison-Wesley Longman
Publishing Co., Inc. Boston, MA, USA. Chapter 7: Environment Mapping
Techniques
Greene, N. (1986). Environment mapping and other applications of world projections.
IEEE Computer. Graph. Appl. 6, 11, 21-29.
Holm, L., & Rosenström, P. (2010). Dali server: conservation mapping in 3D. Nucleic
acids research, 38(suppl 2), W545-W549.
Huang, Y M R 2008, Advances in Multimedia Information Processing-PCM 2008: 9th
Pacific Rim Conference on Multimedia, Tainan, Taiwan, Springer.


Jackowski, C., Lussi, A., Classens, M., Kilchoer, T., Bolliger, S., Aghayev, E., & Thali,
M. J. (2006). Extended CT scale overcomes restoration caused streak artifacts for
dental identification in CT-3D color encoded automatic discrimination of dental
restorations. Journal of computer assisted tomography, 30(3), 510-513.
Landau, M., Mayrose, I., Rosenberg, Y., Glaser, F., Martz, E., Pupko, T., & Ben-Tal, N.
(2005). ConSurf 2005: the projection of evolutionary conservation scores of
residues on protein structures. Nucleic acids research, 33 (suppl 2), W299-W302.
Mueller, K., & Yagel, R. (2000). Rapid 3-D cone-beam reconstruction with the
simultaneous algebraic reconstruction technique (SART) using 2-D texture
mapping hardware. Medical Imaging, IEEE Transactions on, 19(12), 1227-1237.
Rowe, A 2014, 'Designing for engagement in mixed reality experiences that combine
projection mapping and camera-based interaction', Digital Creativity, 25, 2, pp.
155-168, Academic Search Complete, EBSCOhost, viewed 8 December 2014.
Santoro, N., & Khatib, R. (1985). Labelling and implicit routing in networks. The
computer journal, 28(1), 5-8.
Sharpe, J., Ahlgren, U., Perry, P., Hill, B., Ross, A., Hecksher-Sørensen, J., & Davidson,
D. (2002). Optical projection tomography as a tool for 3D microscopy and gene
expression studies. Science, 296(5567), 541-545
Tsai, S. S., Chen, H., Chen, D., Schroth, G., Grzeszczuk, R., & Girod, B. (2011,
September). Mobile visual search on printed documents using text and low bitrate features. In Image Processing (ICIP), 2011 18th IEEE International
Conference on (pp. 2601-2604). IEEE.
Zwicker, M., Matusik, W., Durand, F., Pfister, H., & Forlines, C. (2006, July).
Antialiasing for automultiscopic 3D displays. In ACM SIGGRAPH 2006
Sketches (p. 107). ACM.


APPENDIX A
#include "App.h"
#include "Util/Tokenizer.h"
#ifndef min
#define min(x, y) (((x) < (y)) ? (x) : (y))
#endif
App::App(){
memset(keys, 0, sizeof(keys));
lButton = false;
rButton = false;
cursorVisible = true;
speed = 1024;
textTexture = TEXTURE_NONE;
menuSystem = new MenuSystem();
cursorPos = 0;
#ifndef NO_BENCH
demoRecMode = false;
demoPlayMode = false;
demoArray = NULL;
#endif
}
App::~App(){
#ifndef NO_BENCH
if (demoRecMode) stopDemo();
delete demoArray;
if (demoPlayMode) fclose(demoFile);
#endif
delete menuSystem;
delete modes;
}
void App::initDisplayModes(){
modes = new DisplayModeHandler(
#ifdef LINUX
display, screen
#endif
);
modes->filterModes(640, 480, 32);
modes->filterRefreshes(85);
}
bool App::setDisplayMode(int width, int height, int refreshRate){
return modes->setDisplayMode(width, height, 32, refreshRate);
}
bool App::resetDisplayMode(){
return modes->resetDisplayMode();
}
void App::initMenu(){
Menu *menu = menuSystem->getMainMenu();
menu->addMenuItem("Toggle Fullscreen");
configMenu = new Menu();
modesMenu = new Menu();


for (unsigned int i = 0; i < modes->getNumberOfDisplayModes(); i++){
int w = modes->getDisplayMode(i).width;
int h = modes->getDisplayMode(i).height;
char str[64];
sprintf(str, "%dX%d", w, h);
MenuID item = modesMenu->addMenuItem(str);
if (w == fullscreenWidth && h == fullscreenHeight){
modesMenu->setItemChecked(item, true);
}

}
configMenu->addSubMenu(configMenu->addMenuItem("Set Fullscreen
mode"), modesMenu);
controlsMenu = new Menu();
controlsMenu->addMenuItem("Invert Mouse: ", &invertMouse,
INPUT_BOOL);
controlsMenu->addMenuItem("Forward: ",
&forwardKey,
INPUT_KEY);
controlsMenu->addMenuItem("Backward: ",
&backwardKey,
INPUT_KEY);
controlsMenu->addMenuItem("Left: ",
&leftKey,
INPUT_KEY);
controlsMenu->addMenuItem("Right: ",
&rightKey,
INPUT_KEY);
controlsMenu->addMenuItem("Up: ",
&upKey,
INPUT_KEY);
controlsMenu->addMenuItem("Down: ",
&downKey,
INPUT_KEY);
controlsMenu->addMenuItem("Reset camera: ", &resetKey,
INPUT_KEY);
controlsMenu->addMenuItem("Toggle Fps: ",
&showFpsKey,
INPUT_KEY);
controlsMenu->addMenuItem("Menu: ",
&menuKey,
INPUT_KEY);
controlsMenu->addMenuItem("Console: ",
&consoleKey,
INPUT_KEY);
controlsMenu->addMenuItem("Screenshot: ",
&screenshotKey,
INPUT_KEY);
configMenu->addSubMenu(configMenu->addMenuItem("Controls"),
controlsMenu);
//configMenu->addMenuItem("Options");
menu->addSubMenu(menu->addMenuItem("Configure"), configMenu);
}

menu->addMenuItem("Exit");

bool App::processMenu(){
if (menuSystem->getCurrentMenu() == modesMenu){
MenuID item = modesMenu->getCurrentItem();
fullscreenWidth = modes->getDisplayMode(item).width;
fullscreenHeight = modes->getDisplayMode(item).height;
refreshRate = modes->getDisplayMode(item).refreshRate;
modesMenu->setExclusiveItemChecked(item);
} else {
return false;
}
return true;
}

void App::showCursor(bool val){


if (val != cursorVisible){
#if defined(_WIN32)
ShowCursor(val);
#elif defined(LINUX)
if (val){
XUngrabPointer(display, CurrentTime);
} else {
XGrabPointer(display, window, True, ButtonPressMask,
GrabModeAsync, GrabModeAsync, window, blankCursor, CurrentTime);
}
#endif
cursorVisible = val;
}
}
void App::closeWindow(){
#if defined(_WIN32)
PostMessage(hwnd, WM_CLOSE, 0, 0);
#elif defined(LINUX)
done = true;
#endif
}
void App::toggleScreenMode(){
toggleFullscreen = true;
if (fullscreen){
resetDisplayMode();
showCursor(true);
}
fullscreen = !fullscreen;
}
void App::initPixelFormat(PixelFormat &pf){


pf.redBits = 8;
pf.greenBits = 8;
pf.blueBits = 8;
pf.alphaBits = 8;
pf.depthBits = 24;
pf.stencilBits = 0;
pf.accumBits = 0;
pf.fsaaLevel = fsaaLevel;
}
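// Per-frame input handling: in demo playback the camera state is read back from
// demoArray (and frame times are logged), otherwise the configured movement keys
// build a direction vector in view space that is scaled by speed and frameTime.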
void App::controls(){
#ifndef NO_BENCH
if (demoPlayMode){
char str[256];
unsigned int len = 0;
if (demoFrameCounter == 0){
len = sprintf(str, "[Beginning of demo]\r\n");
}
len += sprintf(str + len, "%f\r\n", frameTime);
fwrite(str, len, 1, demoFile);
position = demoArray[demoFrameCounter].pos;
wx = demoArray[demoFrameCounter].wx;
wy = demoArray[demoFrameCounter].wy;
wz = demoArray[demoFrameCounter].wz;
demoFrameCounter++;
if (demoFrameCounter >= demoSize) demoFrameCounter = 0;
} else {
#endif

float sqrLen;


vec3 dir(0,0,0);
vec3 dx(modelView.elem[0][0], modelView.elem[0][1],
modelView.elem[0][2]);
vec3 dy(modelView.elem[1][0], modelView.elem[1][1],
modelView.elem[1][2]);
vec3 dz(modelView.elem[2][0], modelView.elem[2][1],
modelView.elem[2][2]);
if (keys[leftKey ]) dir -= dx;
if (keys[rightKey]) dir += dx;
if (keys[backwardKey]) dir -= dz;
if (keys[forwardKey ]) dir += dz;
if (keys[downKey]) dir -= dy;
if (keys[upKey ]) dir += dy;
if (keys[resetKey]) resetCamera();
if ((sqrLen = dot(dir, dir)) != 0){
dir *= 1.0f / sqrtf(sqrLen);
}
processMovement(position + frameTime * speed * dir);
#ifndef NO_BENCH
}
if (demoRecMode) recordDemo();
#endif
}
void App::processMovement(const vec3 &newPosition){
position = newPosition;
}
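// Keyboard dispatch: keys are routed to the menu while it is open, to the console
// while it is open, and otherwise treated as camera/action keys.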
void App::processKey(unsigned int key){
static bool waitKey = false;
if (waitKey){
getMenuSystem()->getCurrentMenu()->setInputKey(key);
waitKey = false;
} else if (key == menuKey){
if (!showConsole) showMenu = !showMenu;
} else if (key == consoleKey){
if (!showMenu) showConsole = !showConsole;
} else if (showMenu){
MenuSystem *menuSystem = getMenuSystem();
if (key == KEY_DOWN){
menuSystem->getCurrentMenu()->nextItem();
} else if (key == KEY_UP){
menuSystem->getCurrentMenu()->prevItem();
} else if (key == KEY_ESCAPE){
showMenu = menuSystem->stepUp();
} else if (key == KEY_ENTER){
if (!menuSystem->goSubMenu()){
if (menuSystem->getCurrentMenu()->isCurrentItemInput()){
if (menuSystem->getCurrentMenu()->getCurrentInputType() == INPUT_KEY){
waitKey = true;
menuSystem->getCurrentMenu()->setInputWait();
} else {

menuSystem->getCurrentMenu()->nextValue();
}
} else if (menuSystem->getCurrentMenu() == menuSystem->getMainMenu()){
if (strcmp(menuSystem->getCurrentItemString(), "Exit") == 0){
//PostMessage(hwnd, WM_CLOSE, 0, 0);
closeWindow();
} else if (strcmp(menuSystem->getCurrentItemString(), "Toggle Fullscreen") == 0){
toggleScreenMode();
//PostMessage(hwnd, WM_CLOSE, 0, 0);
closeWindow();
} else {
processMenu();
}
} else {
processMenu();
}
}
} else if (key == KEY_LEFT){
if (menuSystem->getCurrentMenu()->getCurrentInputType() != INPUT_KEY){
menuSystem->getCurrentMenu()->prevValue();
}
} else if (key == KEY_RIGHT){
if (menuSystem->getCurrentMenu()->getCurrentInputType() != INPUT_KEY){
menuSystem->getCurrentMenu()->nextValue();
}
}
} else if (showConsole){
if (key == KEY_ESCAPE){
showConsole = false;
} else if (key == KEY_LEFT){
if (cursorPos > 0) cursorPos--;
} else if (key == KEY_RIGHT){
if (cursorPos < console.getLength()) cursorPos++;
} else if (key == KEY_BACKSPACE){
if (cursorPos > 0){
cursorPos--;
console.remove(cursorPos, 1);
}
} else if (key == KEY_DELETE){
console.remove(cursorPos, 1);
} else if (key == KEY_ENTER && console.getLength() > 0){
String results;

command");

consoleHistory.addObjectLast(">" + console);
bool res = processConsole(results);
if (results.getLength() == 0){
results = (res? "Command OK" : "Unknown
}
unsigned int index, i = 0;
while (true){
if (results.find('\n', i, &index)){
String str(((const char *) results) + i,

index - i);


consoleHistory.addObjectLast(str);
i = index + 1;
} else {
consoleHistory.addObjectLast(results);
break;
}
}
/*while (consoleHistory.getCount() > 100){
consoleHistory.removeNode(consoleHistory.getFirst());
}*/
cursorPos = 0;
console = "";
}
#ifdef _WIN32
else if ((key == 'C' || key == KEY_INSERT) &&
(GetAsyncKeyState(KEY_CTRL) & 0x8000)){
String str;
ListNode <String> *node = consoleHistory.getFirst();
while (node != NULL){
str += node->object;
str += "\r\n";
node = node->next;
}
if (str.getLength() > 0 && OpenClipboard(hwnd)){
EmptyClipboard();
HGLOBAL handle = GlobalAlloc(GMEM_MOVEABLE |
GMEM_DDESHARE, str.getLength() + 1);
char *mem = (char *) GlobalLock(handle);
if (mem != NULL){
strcpy(mem, str);
GlobalUnlock(handle);
HANDLE hand = SetClipboardData(CF_TEXT,
handle);
}
BOOL b = CloseClipboard();
}
}
#endif

} else if (key == KEY_ESCAPE){


if (!fullscreen && captureMouse){
showCursor(true);
captureMouse = false;
} else {
//PostMessage(hwnd, WM_CLOSE, 0, 0);
closeWindow();
}
} else if (key == showFpsKey){
showFPS = !showFPS;
} else if (key == screenshotKey){
snapScreenshot();
} else setKey(key, true);
}
void App::processChar(char ch){


if (showConsole){
if (defaultFont.isCharDefined(ch)){

console.insert(cursorPos, (const char *) &ch, 1);
cursorPos++;
}
}
}
#define CHECK_ARG(x) if ((x) == NULL){ results = tooFew; return false; }
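// processConsole: parse and execute a console command (pos/setpos, angles/setangles,
// setspeed, width/height and the demo commands). CHECK_ARG aborts with a
// "Too few arguments" result when the next token is missing.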
bool App::processConsole(String &results){
static char *tooFew = "Too few arguments";
Tokenizer tok;
tok.setString(console);
char *str = tok.next();
if (str == NULL){
results.sprintf("No command given");
} else if (stricmp(str, "pos") == 0){
results.sprintf("%g, %g, %g", position.x, position.y,
position.z);
} else if (stricmp(str, "setpos") == 0){
CHECK_ARG(str = tok.next());
if (str[0] == '-'){
CHECK_ARG(str = tok.next());
position.x = -(float) atof(str);
} else position.x = (float) atof(str);
CHECK_ARG(str = tok.next());
if (*str == ',') CHECK_ARG(str = tok.next());
if (str[0] == '-'){
CHECK_ARG(str = tok.next());
position.y = -(float) atof(str);
} else position.y = (float) atof(str);
CHECK_ARG(str = tok.next());
if (*str == ',') CHECK_ARG(str = tok.next());
if (str[0] == '-'){
CHECK_ARG(str = tok.next());
position.z = -(float) atof(str);
} else position.z = (float) atof(str);
} else if (stricmp(str, "angles") == 0){
results.sprintf("%g, %g, %g", wx, wy, wz);
} else if (stricmp(str, "setangles") == 0){
CHECK_ARG(str = tok.next());
if (str[0] == '-'){
CHECK_ARG(str = tok.next());
wx = -(float) atof(str);
} else wx = (float) atof(str);
CHECK_ARG(str = tok.next());
if (*str == ',') CHECK_ARG(str = tok.next());
if (str[0] == '-'){
CHECK_ARG(str = tok.next());
wy = -(float) atof(str);
} else wy = (float) atof(str);
CHECK_ARG(str = tok.next());
if (*str == ',') CHECK_ARG(str = tok.next());
if (str[0] == '-'){
CHECK_ARG(str = tok.next());
wz = -(float) atof(str);
} else wz = (float) atof(str);
} else if (stricmp(str, "setspeed") == 0){
CHECK_ARG(str = tok.next());
speed = (float) atof(str);
} else if (stricmp(str, "width") == 0){
results.sprintf("%d", width);

} else if (stricmp(str, "height") == 0){
results.sprintf("%d", height);
}
#ifndef NO_BENCH
else if (stricmp(str, "demorec") == 0){
if (demoRecMode){
results = "Demo already being recorded";
} else {
if ((str = tok.nextLine()) != NULL){
if (demoRecMode = beginDemo(str + 1)){
results = "Demo recording initialized";
} else {
results = "Error recording demo";
}
} else {
results = "No filename specified";
}
}
} else if (stricmp(str, "demostop") == 0){
if (demoRecMode){
stopDemo();
results = "Demo recording stopped";
} else if (demoPlayMode){
fclose(demoFile);
demoPlayMode = false;
results = "Demo play stopped";
} else {
results = "No demo active";
}
} else if (stricmp(str, "demoplay") == 0){
if (demoRecMode){
results = "Stop demo recording first";
} else {
if ((str = tok.nextLine()) != NULL){
if (demoPlayMode = loadDemo(str + 1)){
results = "Demo play initialized";
demoFrameCounter = 0;
demoFile = fopen("demo.log", "wb");
} else {
results = "Error playing demo";
}
} else {
results = "No filename specified";
}
}
}
#endif
else {
return false;
}
return true;
}
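// Frame-rate estimate: the last 15 frame rates are kept and a median-style filter
// (the 8th smallest value) is returned to suppress single-frame spikes.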
float App::getFps(){
static float fps[15];
static int currPos = 0;
fps[currPos] = 1.0f / frameTime;
currPos++;
if (currPos > 14) currPos = 0;
// Apply a median filter to get rid of temporal peaks
float min = 0, cmin;


for (int i = 0; i < 8; i++){


cmin = 1e30f;
for (int j = 0; j < 15; j++){
if (fps[j] > min && fps[j] < cmin){
cmin = fps[j];
}
}
min = cmin;
}
return min;
}
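// 2D overlay: the FPS counter, menu and console are drawn as alpha-blended
// textured text on top of the frame.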
void App::drawGUI(){
if (textTexture != TEXTURE_NONE){
if (showFPS || showMenu || showConsole){
renderer->setDepthFunc(DEPTH_NONE);
renderer->setMask(COLOR);
renderer->setTextures(textTexture);
renderer->setBlending(SRC_ALPHA,
ONE_MINUS_SRC_ALPHA);
renderer->apply();
}
if (showFPS){
char str[32];
sprintf(str, "%d", (int) (getFps() + 0.5f));
drawText(str, 0.02f, 0.02f, 0.045f, 0.07f);
}
if (showMenu){
Menu *menu = menuSystem->getCurrentMenu();
unsigned int i, n = menu->getItemCount();
float charHeight = min(0.12f, 0.98f / n);
float y = 0.5f * (1 - charHeight * n);
for (i = 0; i < n; i++){
float len = defaultFont.getStringLength(menu->getItemString(i));
float charWidth = min(0.08f, 0.98f / len);
drawText(menu->getItemString(i), 0.5f * (1 - len * charWidth), y, charWidth, charHeight, 1, !menu->isItemChecked(i), i != menu->getCurrentItem());
y += charHeight;
}
}
if (showConsole){
ListNode <String> *node = consoleHistory.getLast();
float y = 0.85f;
while (node != NULL && y > -0.05f){
drawText((char *) (const char *) node->object,
0, y, 0.035f, 0.05f);
node = node->prev;
y -= 0.05f;
}


String str = ">" + console;


char *st = (char *) (const char *) str;
drawText(st, 0, 0.9f, 0.07f, 0.10f);
float r = 1;
if (cursorPos < console.getLength()){
r = defaultFont.getStringLength(st + cursorPos
+ 1, 1) / defaultFont.getStringLength("_", 1);
}
drawText("_", 0.07f * defaultFont.getStringLength(st,
cursorPos + 1), 0.9f, 0.07f * r, 0.10f);
}
}
}
bool App::setDefaultFont(const char *fontFile, const char *textureFile)
{
if (!defaultFont.loadFromFile(fontFile)) return false;
textTexture = renderer->addTexture(textureFile);
return (textTexture != TEXTURE_NONE);
}
#ifdef _WIN32
#include <shlobj.h>
#endif
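// Save a screenshot to the user's desktop as Screenshot00..Screenshot99 in PNG,
// TGA or DDS format (depending on build options), picking the first unused index.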
void App::snapScreenshot(){
char path[256];
#if defined(_WIN32)
SHGetSpecialFolderPath(NULL, path, CSIDL_DESKTOPDIRECTORY,
FALSE);
#elif defined(LINUX)
strcpy(path, getenv("HOME"));
strcat(path, "/Desktop");
#endif
FILE *file;
int pos = strlen(path);
strcpy(path + pos, "/Screenshot00."
#if !defined(NO_PNG)
"png"
#elif !defined(NO_TGA)
"tga"
#else
"dds"
#endif
);
pos += 11;
int i = 0;
do {
path[pos] = '0' + (i / 10);
path[pos + 1] = '0' + (i % 10);
if ((file = fopen(path, "r")) != NULL){
fclose(file);
} else {
Image img;

if (getScreenshot(img)) img.saveImage(path, true);
break;
}
i++;
} while (i < 100);
}

#ifndef NO_BENCH
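// Demo recording/playback: whenever the camera state changes, recordDemo appends
// the position and view angles to the demo file; loadDemo reads the whole file back
// into demoArray for playback in controls().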
bool App::beginDemo(char *fileName){
return ((demoFile = fopen(fileName, "wb")) != NULL);
}
void App::recordDemo(){
static DemoNode node;
if (node.pos != position || node.wx != wx || node.wy != wy ||
node.wz != wz){
node.pos = position;
node.wx = wx;
node.wy = wy;
node.wz = wz;
fwrite(&node, sizeof(node), 1, demoFile);
}
}
void App::stopDemo(){
fclose(demoFile);
demoRecMode = false;
}
bool App::loadDemo(char *fileName){
if (demoArray != NULL) delete [] demoArray;
if ((demoFile = fopen(fileName, "rb")) == NULL) return false;
fseek(demoFile, 0, SEEK_END);
demoSize = ftell(demoFile) / sizeof(DemoNode);
fseek(demoFile, 0, SEEK_SET);
demoArray = new DemoNode[demoSize];
fread(demoArray, demoSize * sizeof(DemoNode), 1, demoFile);
fclose(demoFile);
return true;
}
#endif


APPENDIX B
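Appendix B lists the GPAA demo source (App.cpp): a deferred-shaded corridor scene with geometric post-process antialiasing (GPAA) on Direct3D 10. Types such as D3D10App, Direct3D10Renderer, Model, float3/float4x4 and the referenced shader files are assumed to be provided by the accompanying Framework3 code.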
#include "App.h"
#include "../Framework3/Util/Hash.h"
BaseApp *app = new App();
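// Move the camera along dir and resolve collisions with the level geometry: if the
// path hits the BSP, the new position is nudged out along the hit triangle's plane
// normal, and pushSphere keeps a sphere of radius 35 around the camera out of walls.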
void App::moveCamera(const float3 &dir)
{
float3 newPos = camPos + dir * (speed * frameTime);
float3 point;
const BTri *tri;
if (m_BSP.intersects(camPos, newPos, &point, &tri))
{
newPos = point + tri->plane.xyz();
}
m_BSP.pushSphere(newPos, 35);
camPos = newPos;
}

void App::resetCamera()
{
camPos = vec3(1992, 30, -432);
wx = 0.078f;
wy = 0.62f;
}
bool App::onKey(const uint key, const bool pressed)
{
if (D3D10App::onKey(key, pressed)) return true;
if (pressed)
{
if (key == KEY_F5)
{
m_UseGPAA->setChecked(!m_UseGPAA->isChecked());
}
}
return false;
}

void App::onSize(const int w, const int h)


{
D3D10App::onSize(w, h);
if (renderer)
{
// Make sure render targets are the size of the window
renderer->resizeRenderTarget(m_BaseRT, w, h, 1, 1, 1);
renderer->resizeRenderTarget(m_NormalRT, w, h, 1, 1, 1);
renderer->resizeRenderTarget(m_DepthRT, w, h, 1, 1, 1);
}
}
bool App::init()
{
// No framework created depth buffer
depthBits = 0;

// Load map
m_Map = new Model();
if (!m_Map->loadObj("../Models/Corridor2/MapFixed.obj")) return false;
m_Map->scale(0, float3(1, 1, -1));


// Create BSP for collision detection
uint nIndices = m_Map->getIndexCount();
float3 *src = (float3 *) m_Map->getStream(0).vertices;
uint *inds = m_Map->getStream(0).indices;
for (uint i = 0; i < nIndices; i += 3)
{
const float3 &v0 = src[inds[i]];
const float3 &v1 = src[inds[i + 1]];
const float3 &v2 = src[inds[i + 2]];
m_BSP.addTriangle(v0, v1, v2);
}
m_BSP.build();
m_Map->computeTangentSpace(true);
// Create light sphere for deferred shading
m_Sphere = new Model();
m_Sphere->createSphere(3);
// Initialize all lights
m_Lights[ 0].position = float3( 576, 96, 0);
m_Lights[ 0].radius = 640.0f;
m_Lights[ 1].position = float3( 0, 96, 576);
m_Lights[ 1].radius = 640.0f;
m_Lights[ 2].position = float3(-576, 96, 0);
m_Lights[ 2].radius = 640.0f;
m_Lights[ 3].position = float3( 0, 96, -576);
m_Lights[ 3].radius = 640.0f;
m_Lights[ 4].position = float3(1792, 96, 320);
m_Lights[ 4].radius = 550.0f;
m_Lights[ 5].position = float3(1792, 96, -320);
m_Lights[ 5].radius = 550.0f;
m_Lights[ 6].position = float3(-192, 96, 1792);
m_Lights[ 6].radius = 550.0f;
m_Lights[ 7].position = float3(-832, 96, 1792);
m_Lights[ 7].radius = 550.0f;
m_Lights[ 8].position = float3(1280, 32, 192);
m_Lights[ 8].radius = 450.0f;
m_Lights[ 9].position = float3(1280, 32, -192);
m_Lights[ 9].radius = 450.0f;
m_Lights[10].position = float3(-320, 32, 1280);
m_Lights[10].radius = 450.0f;
m_Lights[11].position = float3(-704, 32, 1280);
m_Lights[11].radius = 450.0f;
m_Lights[12].position = float3( 960, 32, 640);
m_Lights[12].radius = 450.0f;
m_Lights[13].position = float3( 960, 32, -640);
m_Lights[13].radius = 450.0f;
m_Lights[14].position = float3( 640, 32, -960);
m_Lights[14].radius = 450.0f;
m_Lights[15].position = float3(-640, 32, -960);
m_Lights[15].radius = 450.0f;
m_Lights[16].position = float3(-960, 32, 640);
m_Lights[16].radius = 450.0f;
m_Lights[17].position = float3(-960, 32, -640);
m_Lights[17].radius = 450.0f;
m_Lights[18].position = float3( 640, 32, 960);
m_Lights[18].radius = 450.0f;
// Init GUI components
int tab = configDialog->addTab("GPAA");
configDialog->addWidget(tab, m_UseGPAA = new CheckBox(0, 0, 150, 36, "Use GPAA", true));
return true;
}

void App::exit()
{
delete m_Sphere;
delete m_Map;
}
bool App::initAPI()
{
// Override the user's MSAA settings
return D3D10App::initAPI(DXGI_FORMAT_R8G8B8A8_UNORM_SRGB,
DXGI_FORMAT_UNKNOWN, 1, NO_SETTING_CHANGE | SAMPLE_BACKBUFFER);
}
void App::exitAPI()
{
D3D10App::exitAPI();
}
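// load: create the shaders, sampler/blend/depth/rasterizer states, render targets
// and textures, upload the map and light-sphere geometry, and build the edge
// vertex buffer used by the GPAA pass.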
bool App::load()
{
// Shaders
if ((m_FillBuffers = renderer->addShader("FillBuffers.shd")) == SHADER_NONE) return false;
if ((m_Ambient = renderer->addShader("Ambient.shd")) == SHADER_NONE) return false;
if ((m_Lighting = renderer->addShader("Lighting.shd")) == SHADER_NONE) return false;
if ((m_AntiAlias = renderer->addShader("GPAA.shd")) == SHADER_NONE) return false;

// Samplerstates
if ((m_BaseFilter = renderer->addSamplerState(TRILINEAR_ANISO,
WRAP, WRAP, WRAP)) == SS_NONE) return false;
if ((m_PointClamp = renderer->addSamplerState(NEAREST, CLAMP,
CLAMP, CLAMP)) == SS_NONE) return false;
// Main render targets
if ((m_BaseRT = renderer->addRenderTarget(width, height, 1, 1, 1, FORMAT_RGBA8, 1, SS_NONE, SRGB)) == TEXTURE_NONE) return false;
if ((m_NormalRT = renderer->addRenderTarget(width, height, 1, 1, 1, FORMAT_RGBA8S, 1, SS_NONE)) == TEXTURE_NONE) return false;
if ((m_DepthRT = renderer->addRenderDepth(width, height, 1, FORMAT_D16, 1, SS_NONE, SAMPLE_DEPTH)) == TEXTURE_NONE) return false;
// Textures
if ((m_BaseTex[0] = renderer->addTexture("../Textures/wood.dds", true, m_BaseFilter, SRGB)) == TEXTURE_NONE) return false;
if ((m_BumpTex[0] = renderer->addNormalMap("../Textures/woodBump.dds", FORMAT_RGBA8S, true, m_BaseFilter)) == TEXTURE_NONE) return false;
if ((m_BaseTex[1] = renderer->addTexture("../Textures/Tx_imp_wall_01_small.dds", true, m_BaseFilter, SRGB)) == TEXTURE_NONE) return false;
if ((m_BumpTex[1] = renderer->addNormalMap("../Textures/Tx_imp_wall_01Bump.dds", FORMAT_RGBA8S, true, m_BaseFilter)) == TEXTURE_NONE) return false;
if ((m_BaseTex[2] = renderer->addTexture("../Textures/floor_wood_3.dds", true, m_BaseFilter, SRGB)) == TEXTURE_NONE) return false;
if ((m_BumpTex[2] = renderer->addNormalMap("../Textures/floor_wood_3Bump.dds", FORMAT_RGBA8S, true, m_BaseFilter)) == TEXTURE_NONE) return false;
if ((m_BaseTex[3] = renderer->addTexture("../Textures/light2.dds", true, m_BaseFilter, SRGB)) == TEXTURE_NONE) return false;
if ((m_BumpTex[3] = renderer->addNormalMap("../Textures/light2Bump.dds", FORMAT_RGBA8S, true, m_BaseFilter)) == TEXTURE_NONE) return false;
if ((m_BaseTex[4] = renderer->addTexture("../Textures/floor_wood_4.dds", true, m_BaseFilter, SRGB)) == TEXTURE_NONE) return false;
if ((m_BumpTex[4] = renderer->addNormalMap("../Textures/floor_wood_4Bump.dds", FORMAT_RGBA8S, true, m_BaseFilter)) == TEXTURE_NONE) return false;
// Blendstates
if ((m_BlendAdd = renderer->addBlendState(ONE, ONE)) == BS_NONE)
return false;
// Depth states - use reversed depth (1 to 0) to improve precision
if ((m_DepthTest = renderer->addDepthState(true, true, GEQUAL)) == DS_NONE) return false;
// Rasterizer states
if ((m_CullBack_DepthBias = renderer->addRasterizerState(CULL_BACK, SOLID, false, false, -1.0f, -1.0f)) == DS_NONE) return false;
// Upload map to vertex/index buffer
if (!m_Map->makeDrawable(renderer, true, m_FillBuffers)) return false;
if (!m_Sphere->makeDrawable(renderer, true, m_Lighting)) return false;

// Vertex format for edges


FormatDesc formatDesc[] =
{
{ 0, TYPE_VERTEX, FORMAT_FLOAT, 3 },
{ 0, TYPE_VERTEX, FORMAT_FLOAT, 3 },
};
if ((m_EdgesVF = renderer->addVertexFormat(formatDesc,
elementsOf(formatDesc), m_AntiAlias)) == VF_NONE) return false;

// Finds all edges to use for GPAA. This code makes sure each edge is added only once
// and that internal edges (such as the diagonal in a quad) are not used.
uint index_count = m_Map->getIndexCount();
uint *indices = m_Map->getStream(0).indices;
float3 *src_vertices = (float3 *) m_Map->getStream(0).vertices;
Hash hash(2, index_count >> 3, index_count);
Array<Edge> edges;
for (uint i = 0; i < index_count; i += 3)
{
const uint i0 = indices[i];
const uint i1 = indices[i + 1];
const uint i2 = indices[i + 2];

const float3 &v0 = src_vertices[i0];


const float3 &v1 = src_vertices[i1];
const float3 &v2 = src_vertices[i2];
vec3 normal = normalize(cross(v1 - v0, v2 - v0));
uint index;
uint last_n = 2;
for (int n = 0; n < 3; n++)
{
Edge edge(indices[i + last_n], indices[i + n]);
if (hash.insert(&edge.index0, &index))
{
if (dot(normal, edges[index].normal) > 0.99f)
{
edges[index].remove = true;
}
}
else
{
edge.normal = normal;
edge.remove = false;
edges.add(edge);
}
last_n = n;
}
}
// Create the edge buffer for post-process antialiasing.
uint edge_count = edges.getCount();
Vertex *vertices = new Vertex[edge_count * 2];
Vertex *dest = vertices;
for (uint i = 0; i < edge_count; i++)
{
if (!edges[i].remove)
{
dest->v0 = src_vertices[edges[i].index0];
dest->v1 = src_vertices[edges[i].index1];
dest++;
dest->v0 = src_vertices[edges[i].index1];
dest->v1 = src_vertices[edges[i].index0];
dest++;

}
}
m_EdgesVertexCount = dest - vertices;
if ((m_EdgesVB = renderer->addVertexBuffer(m_EdgesVertexCount * sizeof(Vertex), STATIC, vertices)) == VB_NONE) return false;
delete [] vertices;
return true;
}

void App::unload()
{
}
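// drawFrame: fill the G-buffer (base colour, normals, depth) using a reversed-depth
// projection, run a full-screen ambient pass, accumulate the lights additively with
// light-sphere geometry, and optionally finish with the GPAA edge pass over a copy
// of the backbuffer.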
void App::drawFrame()
{
const float near_plane = 20.0f;
const float far_plane = 4000.0f;
// Reversed depth
float4x4 projection = toD3DProjection(perspectiveMatrixY(1.2f,
width, height, far_plane, near_plane));
float4x4 view = rotateXY(-wx, -wy);
view.translate(-camPos);
float4x4 viewProj = projection * view;
// Pre-scale-bias the matrix so we can use the screen position directly
float4x4 viewProjInv = (!viewProj) * (translate(-1.0f, 1.0f,
0.0f) * scale(2.0f, -2.0f, 1.0f));
TextureID bufferRTs[] = { m_BaseRT, m_NormalRT };
renderer->changeRenderTargets(bufferRTs, elementsOf(bufferRTs),
m_DepthRT);
renderer->clear(false, true, false, NULL, 0.0f);
/*
Main scene pass.
This is where the buffers are filled for the later
deferred passes.
*/
renderer->reset();
renderer->setRasterizerState(m_CullBack_DepthBias);
renderer->setShader(m_FillBuffers);
renderer->setShaderConstant4x4f("ViewProj", viewProj);
renderer->setSamplerState("Filter", m_BaseFilter);
renderer->setDepthState(m_DepthTest);
renderer->apply();
const uint batch_count = m_Map->getBatchCount();
for (uint i = 0; i < batch_count; i++)
{
renderer->setTexture("Base", m_BaseTex[i]);
renderer->setTexture("Bump", m_BumpTex[i]);
renderer->applyTextures();
m_Map->drawBatch(renderer, i);
}

renderer->changeToMainFramebuffer();
/*
Deferred ambient pass.
*/
renderer->reset();
renderer->setRasterizerState(cullNone);
renderer->setDepthState(noDepthTest);
renderer->setShader(m_Ambient);
renderer->setTexture("Base", m_BaseRT);
renderer->setSamplerState("Filter", m_PointClamp);
renderer->apply();
device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
device->Draw(3, 0);
/*
Deferred lighting pass.
*/
renderer->reset();
renderer->setDepthState(noDepthTest);
renderer->setShader(m_Lighting);
renderer->setRasterizerState(cullFront);
renderer->setBlendState(m_BlendAdd);
renderer->setShaderConstant4x4f("ViewProj", viewProj);
renderer->setShaderConstant4x4f("ViewProjInv", viewProjInv *
scale(1.0f / width, 1.0f / height, 1.0f));
renderer->setShaderConstant3f("CamPos", camPos);
renderer->setTexture("Base", m_BaseRT);
renderer->setTexture("Normal", m_NormalRT);
renderer->setTexture("Depth", m_DepthRT);
renderer->apply();
float2 zw = projection.rows[2].zw();
for (uint i = 0; i < LIGHT_COUNT; i++)
{
float3 lightPos = m_Lights[i].position;
float radius = m_Lights[i].radius;
float invRadius = 1.0f / radius;
// Compute z-bounds
float4 lPos = view * float4(lightPos, 1.0f);
float z1 = lPos.z + radius;
if (z1 > near_plane)
{
float z0 = max(lPos.z - radius, near_plane);
float2 zBounds;
zBounds.y = saturate(zw.x + zw.y / z0);
zBounds.x = saturate(zw.x + zw.y / z1);
renderer->setShaderConstant3f("LightPos", lightPos);
renderer->setShaderConstant1f("Radius", radius);
renderer->setShaderConstant1f("InvRadius",
invRadius);

renderer->setShaderConstant2f("ZBounds", zBounds);
renderer->applyConstants();
}

m_Sphere->draw(renderer);

60

Name

}
if (m_UseGPAA->isChecked())
{
renderer->changeRenderTarget(FB_COLOR, m_DepthRT);

// Copy backbuffer to a texture
Direct3D10Renderer *d3d10_renderer = (Direct3D10Renderer *) renderer;
device->CopyResource(d3d10_renderer->getResource(m_BaseRT), d3d10_renderer->getResource(backBufferTexture));
/*
GPAA antialiasing pass
*/
renderer->reset();
renderer->setDepthState(m_DepthTest);
renderer->setShader(m_AntiAlias);
renderer->setRasterizerState(cullNone);
renderer->setShaderConstant4x4f("ViewProj", viewProj);
renderer->setShaderConstant4f("ScaleBias", float4(0.5f *
width, -0.5f * height, 0.5f * width, 0.5f * height));
renderer->setShaderConstant2f("PixelSize", float2(1.0f /
width, 1.0f / height));
renderer->setTexture("BackBuffer", m_BaseRT);
renderer->setSamplerState("Filter", linearClamp);
renderer->setVertexFormat(m_EdgesVF);
renderer->setVertexBuffer(0, m_EdgesVB);
renderer->apply();
renderer->drawArrays(PRIM_LINES, 0, m_EdgesVertexCount);
}
}