RGB subdivision is an open source, portable, and extendible system for the processing and editing of unstructured 3D triangular meshes. It allows the level of detail (LOD) of a mesh to be edited in different ways:
• A portion of the mesh can be selected and either refined or coarsened by setting the desired LOD inside and outside the selection. LOD may be set in terms of levels of subdivision and maximal edge length.
• A brush tool can be used to adjust LOD locally: the region swept by the brush is either refined or coarsened by changing the LOD according to the parameters set for the brush.
• Atomic primitives that either refine or coarsen the mesh locally can be applied individually for fine editing.
Direct and reverse subdivision may thus be used together to take any mesh and automatically generate a whole hierarchy of LODs, both coarser and more refined than the base mesh. Adaptivity means that cells at different levels of subdivision are combined in the context of a single mesh.
Subdivision surfaces are becoming more and more popular in computer graphics and CAD. During the last decade, they have found major applications in the entertainment industry and in simulation. RGB subdivision is a mechanism for the adaptive subdivision of triangle meshes that supports fully dynamic selective refinement and is compliant with the Loop subdivision scheme in the limit surface. Our scheme supports dynamic selective refinement, as in Continuous Level of Detail (CLOD) models, and generates conforming meshes at all intermediate steps.
Red-green triangulations were introduced in the context of finite-element methods and have become popular in common practice as an empirical way to obtain conforming adaptive meshes from hierarchies of triangle meshes generated by one-to-four triangle splits.
The resulting triangles can be regarded as being of green and blue types in our terminology. A closed-form solution of the subdivision rule makes it possible to compute control points correctly for a vertex at any level, at the cost of some over-refinement. The 4-8 subdivision is based on edge splits, as in our case, applied to a special class of triangle meshes called tri-quad meshes. The correct position of control points is also addressed and resolved in this case, with a certain amount of over-refinement.
A triangle mesh is a triple M = (V, E, T), where V is a set of points in 3D space, called vertices; T is a set of triangles having their vertices in V, such that any two triangles of T are either disjoint or share exactly one vertex or one edge; and E is the set of edges of the triangles in T. Standard topological incidence and adjacency relations are defined over the entities of M.
We assume we always deal with manifold meshes, with or without boundary: each edge of E is incident to either one or two triangles of T, and the star of a vertex (i.e., the set of entities incident to it) is homeomorphic either to an open disc or to a closed half-plane. Edges incident to just one triangle, and vertices whose star is homeomorphic to a half-plane, form the boundary of the mesh and are called boundary edges and boundary vertices, respectively. The remaining edges and vertices are said to be internal. A mesh with an empty boundary is said to be watertight. A mesh is regular if all its vertices have valence six; vertices with a valence different from six are called extraordinary.
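The regular/extraordinary distinction is easy to state in code. The following is a minimal sketch under our own naming; no mesh library is assumed:

```python
def classify_vertex(valence, on_boundary=False):
    """Classify a mesh vertex by its valence (number of incident edges).

    An interior vertex of valence six is regular; any other valence
    makes it extraordinary. Boundary vertices follow different
    regularity conventions and are just reported as such here.
    """
    if on_boundary:
        return "boundary"
    return "regular" if valence == 6 else "extraordinary"
```

For example, the apex of a fan of five triangles around an interior vertex is extraordinary, while an interior vertex in a uniformly subdivided region is regular.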
Consider a base mesh Σ0. We assign level zero to all its vertices and edges, and color green to all its edges. As a general rule, the level of a triangle is defined as the lowest among the levels of its edges, and the color of a triangle is defined as follows: green if all its edges are at the same level; red if two of its edges are at the same level l and the third edge is at level l + 1; and blue if two of its edges are at the same level l + 1 and the third edge is at level l. It follows that all triangles in the base mesh are green at level zero.
Edge split operations applied to boundary edges affect just one triangle, and the resulting configuration depends only on the color of the triangle incident to e.
All rules defined in this section are purely topological. Just for the sake of clarity, in the figures we use meshes composed of equilateral, right, and isosceles triangles to depict the three different types of triangles that may appear in an RGB triangulation. The shape of the triangles is actually irrelevant in the subdivision process; only the level and color codes matter.
GG-split
Triangles t0 and t1 are both green. The bisection of each of t0 and t1 at the midpoint of e generates two red triangles at level l. Each such triangle has: one green edge at level l (the one shared with the old triangle t), one green edge at level l + 1 (one half of e), and one red edge at level l (the new edge inserted to split t).
GR-split
Triangle t0 is green and t1 is red. Triangle t0 is bisected and edge e is split as above. The bisection of t1 generates one blue triangle at level l and one green triangle at level l + 1. The green triangle is incident to the green edge at level l + 1 of the old triangle t1, and its other two edges are also at level l + 1 (the edge inserted to subdivide t1, and one half of e). The blue triangle is incident to the red edge of the old triangle t1 and also has two green edges at level l + 1 (the edge inserted to subdivide t1, and the other half of e).
RR-split
Triangles t0 and t1 are both red. Both are bisected as triangle t1 in the previous case, and each of them generates the same configuration: a blue triangle at level l and a green triangle at level l + 1. This case comes in two variants, RR1-split and RR2-split. Each variant can be recognized by the cycle of edge colors on the boundary of the diamond formed by t0 and t1: either red-green-red-green for RR1-split, or red-red-green-green for RR2-split.
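As a concrete illustration, the edge bookkeeping of the case where both incident triangles are green can be sketched with edges encoded as (level, color) pairs; the encoding is ours, chosen only for this example:

```python
def gg_split_triangle_edges(l):
    """Edges of one of the four red triangles produced when a green
    edge e at level l, shared by two green triangles, is split."""
    return [
        (l, "green"),      # edge shared with the old triangle t
        (l + 1, "green"),  # one half of the split edge e
        (l, "red"),        # the new edge inserted to bisect t
    ]

def resulting_level(edges):
    # A triangle's level is the minimum level of its edges.
    return min(level for level, _ in edges)
```

With l = 0, the resulting triangle has edges at levels 0, 1, and 0, and two of its three edges are at the minimum level, so it is red at level 0, matching the rule above.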
The geometry of RGB subdivision is now derived by studying the geometry of vertices. The basic idea is to adapt the rules of Loop subdivision to the topology of RGB triangulations, so that the limit surfaces of the two subdivision schemes coincide.
Note that in RGB subdivision both refinement and coarsening operations are allowed; therefore, control points must be updated for odd vertices during refinement, and for even vertices during both refinement and coarsening.
When a vertex v is removed, its neighbors are also checked, and the contribution of v is subtracted from those neighbors that received it. For a given neighbor vi, if the minimum level of its incident edges has decreased, its current control point is updated accordingly.
Hardware requirements include 256 MB of RAM.
The objective of this project is to develop an online book store. When the user types the URL of the book store in the address field of the browser, a Web server is contacted to get the requested information. In the .NET Framework, IIS (Internet Information Services) acts as the Web server. The sole task of a Web server is to accept incoming HTTP requests and to return the requested resource in an HTTP response. The first thing IIS does when a request comes in is to decide how to handle it; its decision is based upon the requested file's extension. For example, if the requested file has the .asp extension, IIS routes the request to asp.dll; if it has the .aspx extension, the request is handled by the ASP.NET engine.
IIS is a set of Internet-based services for Windows machines. Originally supplied as part of the Option Pack for Windows NT, these services were subsequently integrated with Windows 2000 and Windows Server 2003. The current (Windows 2003) version is IIS 6.0, and it includes servers for FTP (a software standard for transferring computer files between machines with widely different operating systems), SMTP (Simple Mail Transfer Protocol, the de facto standard for email transmission across the Internet), and HTTPS (the secure version of HTTP, the communication protocol of the World Wide Web).
The Web server itself cannot directly perform server-side processing, but it can delegate the task to ISAPI (the Application Programming Interface of IIS) applications on the server. Microsoft provides a number of these, including ones for Active Server Pages and ASP.NET.
Internet Information Services is designed to run on Windows server operating systems. A restricted version that supports one web site and a limited number of connections is also supplied with Windows XP Professional.
Microsoft has also changed the server account that IIS runs under. In versions of IIS before 6.0, all the features ran under the System account, allowing exploits to run wild on the system. Under 6.0, many of the processes have been brought under a Network Services account that has fewer privileges. In particular, this means that if there were an exploit against one feature, it would not necessarily compromise the entire system.
ASP.NET
ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET has many advantages, both for programmers and for end users, because it is compatible with the .NET Framework; this compatibility lets users take advantage of .NET Framework features through ASP.NET.
There are two separate authentication layers in an ASP.NET application. All requests flow through IIS before they are handed to ASP.NET, and IIS can decide to deny access before ASP.NET even knows about the request. Here is how the process works:
1. IIS checks whether an incoming request comes from an IP address that is allowed access to the domain. If not, the request is denied.
2. IIS performs its own user authentication, if it is configured to do so. By default, IIS allows anonymous access, and requests are authenticated automatically.
3. When a request is passed from IIS to ASP.NET with an authenticated user, ASP.NET checks whether impersonation is enabled. If so, ASP.NET acts as though it were the authenticated user; if not, ASP.NET acts under its own configured account.
4. Finally, the identity is used to request resources from the operating system. If all the necessary resources can be obtained, the user's request is granted; otherwise the request is denied.
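The four steps above amount to a short decision procedure. The sketch below is purely illustrative; the real pipeline is configured through IIS and web.config, not through a function like this:

```python
def handle_request(ip_allowed, iis_authenticated, impersonation, resources_ok):
    """Illustrative walk through the IIS / ASP.NET authentication steps.

    All parameter names are ours; each boolean stands for the outcome
    of one configured check in the real pipeline.
    """
    # 1. IIS filters by IP address before ASP.NET sees the request.
    if not ip_allowed:
        return "denied: IP"
    # 2. IIS authenticates the user (anonymous access by default).
    if not iis_authenticated:
        return "denied: authentication"
    # 3. ASP.NET acts as the authenticated user if impersonation is on,
    #    otherwise under its own configured account.
    identity = "impersonated user" if impersonation else "ASP.NET account"
    # 4. The identity is used to request resources from the OS.
    return "granted to " + identity if resources_ok else "denied: resources"
```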
When a request comes into the IIS Web server, its extension is examined and, based on this extension, the request is either handled directly by IIS or routed to an ISAPI extension. An ISAPI extension is a compiled class installed on the Web server whose responsibility is to return the markup for the requested file type. By default, IIS handles the request and simply returns the contents of the requested file. This makes sense for static files such as images, HTML pages, CSS files, external JavaScript files, and so on. For example, when a request is made for a .html file, IIS simply returns the contents of the requested HTML file.
For files whose content is dynamically generated, the ISAPI extension configured for the file extension is responsible for generating the content of the requested file. For example, a Web site that serves classic ASP pages has the .asp extension mapped to the asp.dll ISAPI extension, which executes the requested ASP page and returns its generated HTML markup. If the Web site serves ASP.NET Web pages, IIS maps the .aspx extension to aspnet_isapi.dll, an ISAPI extension that starts the process of generating the rendered HTML for the requested ASP.NET Web page. The aspnet_isapi.dll ISAPI extension is a piece of unmanaged code; that is, it is not code that runs in the .NET Framework. When IIS routes the request to the aspnet_isapi.dll ISAPI extension, the ISAPI extension routes the request on to the ASP.NET engine, which is written in managed code, i.e., code that runs in the .NET Framework.
The ASP.NET engine is strikingly similar to IIS in many ways. Just as IIS has a directory mapping file extensions to ISAPI extensions, the ASP.NET engine maps file extensions to HTTP handlers. An HTTP handler is a piece of managed code that is responsible for generating the markup for a particular file type.
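This extension-based dispatch can be mimicked in a few lines. The mapping entries below reflect the standard IIS defaults named above, but the lookup function itself is only a sketch of the idea:

```python
# Extension-to-ISAPI mapping, as described above; anything unmapped is
# served directly by IIS as static content.
ISAPI_MAP = {
    ".asp": "asp.dll",            # classic ASP pages
    ".aspx": "aspnet_isapi.dll",  # hands off to the ASP.NET engine
}

def route(path):
    """Return which component serves the request for `path`."""
    for ext, handler in ISAPI_MAP.items():
        if path.endswith(ext):
            return handler
    return "IIS (static content)"
```

A request for a .aspx page reaches the ASP.NET engine via aspnet_isapi.dll, while an image or .html file is returned by IIS directly.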
An e-commerce organization can create data-based Web pages by using server-side and client-side processing technologies, or a hybrid of the two. With server-side processing, the Web server receives the dynamic Web page request, performs all processing necessary to create the page, and then sends it to the client for display in the client's browser. Client-side processing is done on the client workstation by having the client browser execute a program that interacts directly with the database.
The figure outlines commonly used server-side, client-side, and hybrid Web and data processing technologies; client-side scripts are shown in dashed lines to indicate that they cannot interact directly with a database or file, but are used to validate user input on the client, which then sends the validated inputs to the server for further processing.
Generally, dynamic or data-driven Web pages use HTML forms to collect user inputs and submit them to a Web server. A program running on the server processes the form inputs and dynamically composes a Web page reply. This program, called the servicing program, can be either a compiled executable program or a script interpreted into machine language each time it is run.
Compiled server programs. When a user submits HTML form data for processing by a compiled server program, the Web server invokes the servicing program. The servicing program is not part of the Web server; it is an independent executable program running on the Web server. It processes the user input, determines the action to be taken, interacts with any external sources (e.g., a database), produces an HTML document, and terminates. The Web server then sends the HTML document back to the user's browser, where it is displayed. Figure 23 shows the flow of an HTTP request from the client to the Web server, which passes it to the servicing program; the program creates an HTML document to be sent to the client browser.
Popular languages for creating compiled server programs are Java, Visual Basic, and C++, but almost any language that can create executable programs can be used, provided it supports the commands used by one of the protocols that establish guidelines for communication between Web servers and servicing programs. The first such protocol for use with HTML forms, introduced in 1993, was the Common Gateway Interface (CGI); many Web sites still use CGI servicing programs. However, a disadvantage of CGI-based servicing programs is that each form submitted to a Web server starts its own copy of the servicing program on the Web server.
A busy Web server is likely to run out of memory when it services many forms simultaneously; thus, as interactive Web sites have gained popularity, Web server vendors have developed new technologies to process form inputs without starting a new copy of the servicing program for each browser input. Examples of these technologies include Java servlets and Microsoft's ASP.NET; they allow a single copy of the servicing program to service multiple users without starting multiple instances of the program.
ASP.NET has introduced many new capabilities to server-side Web programming, including a new category of elements called server controls, which can generate as many as 200 HTML tags and one or more JavaScript [9] functions from a single server control tag. Server controls support the processing of user events, such as clicking a mouse or entering text, at either the client browser or the Web server. Server controls also encourage the separation of programming code from the HTML tags and text of a Web page into different files and/or areas, thus allowing HTML designers and programmers to work together more effectively.
Server-side scripts. Web-based applications can also use server-side scripts to create dynamic Web pages that retrieve and display information from a backend database and modify data records. The processing architecture is the same as that used for compiled server programs (Figure 21), except that the Web server processing is performed through an interpreted script rather than a compiled program.
If needed, a developer can have a single Web server process a variety of scripts written with any or all of these technologies. The Web server knows which script interpreter to invoke from the requested script's file extension. The table below lists some commonly used extensions and the related technologies.
[Table: commonly used file extensions and the related server-side technologies, e.g., .asp for classic ASP and .aspx for ASP.NET.]
Programs created through ASP.NET are not backward compatible with scripts created through the original server-side scripting technology, ASP [10]; upgrading older scripts to ASP.NET requires substantial revision. ASP and ASP.NET programs can, however, run on the same Web server, as ASP.NET programs are distinguished by their file extensions.
Compiled server-side programs offer two main advantages. First, they are compiled and stored in a machine-readable format, so they usually run faster than scripts. Second, compiled programs are usually created in integrated development environments that provide debugging utilities. The advantage of using scripts is that modifying them requires only a text editor, rather than installation of an associated development environment.
A DataSet can be used to interact with data dynamically, for example to bind to a Web Form, cache data locally in the application, or provide a hierarchical XML view of the data. If such functionality is not required by the application, a DataReader can be used instead to improve performance: it saves the memory consumed by a DataSet, as well as the processing required to fill one. When a DataReader is used, a DataAdapter is not required to send the data to the application. In this project, a DataReader is used to read data, and the Command object's ExecuteNonQuery method is used to write to the database.
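DataReader and DataSet are ADO.NET classes, but the streaming-versus-buffering trade-off can be illustrated with an analogous sketch in Python's sqlite3 module; the table and column names here are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, price REAL)")
# Writing goes through execute + commit, analogous to ExecuteNonQuery.
conn.execute("INSERT INTO books VALUES (?, ?)", ("ASP.NET Primer", 29.9))
conn.commit()

# DataReader-style access: iterate the cursor and stream rows one at a
# time instead of materializing the whole result set in memory
# (fetchall() would be closer to filling a DataSet).
rows = []
for title, price in conn.execute("SELECT title, price FROM books"):
    rows.append((title, price))
```

The forward-only iteration never holds more than one row at a time, which is exactly why a DataReader uses less memory than a fully populated DataSet.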
The .NET Framework includes the following components:
• Platform substrate
• Application services
• Microsoft ADO.NET
• ASP.NET
• XML Web services
• User interfaces
• Languages
The benefits of using the .NET Framework for developing applications include:
Unified type system: The .NET Framework provides a unified type system, which can be used by any .NET-compatible language. In the unified type system, all language elements are objects, and these objects can be used by any .NET application written in any .NET-based language.
Extensible classes: The hierarchy of the .NET Framework is not hidden from the developer. You can access and extend .NET classes through inheritance.
VISUAL BASIC .NET
Visual Basic .NET is one of the languages directed toward meeting the objectives of the .NET initiative of creating distributed applications. Visual Basic .NET is the successor to Visual Basic 6. It is a language used to build applications targeted at the Microsoft .NET platform. Visual Basic .NET is a powerful object-oriented language that provides features such as abstraction, encapsulation, inheritance, and polymorphism.
Some of the key features introduced in Visual Basic .NET are as follows:
• Overriding
• Overloading
• Inheritance
• Structured exception handling
• Multithreading
• Constructors and destructors
Forms
A form is an object designed primarily for data input or display, or for control of application execution. You use forms to customize the presentation of data that your application extracts from queries or tables. You can also print forms. You can design a form to run a macro or a Visual Basic procedure in response to any of a number of events, for example, to run a procedure when the value of data changes.
Reports
A report is an object designed for formatting, calculating, printing, and summarizing selected data. You can view a report on your screen before you print it.
Data access pages
A data access page is an object that includes an HTML file and supporting files to provide custom access to your data from Microsoft Internet Explorer. You can publish these files on your company intranet to allow other users on your network who also have Office 2000 and Internet Explorer version 5 or later to view, search, and edit your data.
Macros
A macro is an object that is a structured definition of one or more actions that you want Access to perform in response to a defined event. For example, you might design a macro that opens a second form in response to the selection of an item on a main form. You might have another macro that validates the content of a field whenever the value in the field changes. You can include simple conditions in macros to specify when one or more actions in the macro should be performed or skipped. You can use macros to open and execute queries, to open tables, or to print or view reports. You can also run other macros or Visual Basic procedures from within a macro.
Modules
A module is an object containing custom procedures that you code using Visual Basic. Modules provide a more discrete flow of actions and allow you to trap errors, something you can't do with macros. Modules can be stand-alone objects containing functions that can be called from anywhere in your application, or they can be directly associated with a form or a report to respond to events on the associated form or report.
Tables
A table stores the data that you can extract with queries and display in reports, or that you can display and update in forms or data access pages. Notice that forms, reports, and data access pages can use data either directly from tables or from a filtered "view" of the data created by using queries. Queries can use Visual Basic functions to provide customized calculations on data in your database. Access also has many built-in functions that allow you to summarize and format your data in queries.
Typical form events include:
Opening a form
Closing a form
Entering a new row on a form
Changing data in the current record
Controls
A control is an object on a form or report that contains data. You can even design a macro or a Visual Basic procedure that responds to the user pressing individual keys on the keyboard when entering data.
RGB subdivision is applied up to a certain level, and the resulting mesh is used for further processing. Even when users are interested in rendering the limit surface, subdivided meshes can be useful in intermediate computations. One possibility is to use three dynamic arrays, for vertices, edges, and triangles, respectively, with a garbage-collection mechanism to manage the reuse of locations freed by coarsening operations.
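The dynamic-array-with-reuse idea can be sketched as a small slot pool; the class and its names are ours, not taken from the prototype's data structures:

```python
class ElementPool:
    """Dynamic array with a free list, one per element kind (vertices,
    edges, or triangles): slots freed by coarsening are recycled
    before the array is allowed to grow."""

    def __init__(self):
        self.items = []
        self.free = []  # indices of slots freed by coarsening

    def allocate(self, value):
        # Reuse a freed slot if one is available.
        if self.free:
            i = self.free.pop()
            self.items[i] = value
            return i
        self.items.append(value)
        return len(self.items) - 1

    def release(self, i):
        # Mark the slot unused and remember it for reuse.
        self.items[i] = None
        self.free.append(i)
```

Indices stay stable across coarsen/refine cycles, so other entities can keep referring to elements by index while the pool recycles storage.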
Drawbacks of the existing system
• Finite-element methods are not feasible with red-green triangulations.
• Requires hierarchical data structures.
• Only partially dynamic selective refinement.
Advantages of the proposed system
• Red-green triangulations were introduced in the context of finite-element methods and have become popular in common practice.
• It is better adaptive than previously known schemes based on the one-to-four triangle split pattern, and it does not require hierarchical data structures.
• It supports fully dynamic selective refinement while remaining compatible with the Loop subdivision scheme.
[Figure: an RGB triangulation with the GG, RG, and RR subdivision cases.]
[Figure: processing pipeline: image model, background elimination, grey scaling (generate average color model), skeleton imaging, outlining, triangle mesh selection, mesh type, Loop subdivision of the mesh, RGB triangulation; tracked objects and the skeleton process are derived from the raw image output.]
[Figure: Phase 1, tool creation; Phase 2, multi-mesh tools; Phase 3, merging of tools and colors; final output of the mesh.]
SYSTEM TESTING
Software testing is the process of executing a program or system with the intent of finding errors. It is the major quality measure employed during software engineering development; its basic function is to detect errors in the software. Testing is necessary for the proper functioning of the system.
Testing is usually performed for the following purposes:
• To improve quality.
• For verification and validation (V&V).
• For reliability estimation.
The philosophy behind testing is to find errors; tests are created with the express intent of determining whether the system will process correctly. There are several rules that can serve as testing objectives:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
If testing is conducted successfully according to the objectives stated above, it will uncover errors in the software. Testing also demonstrates that software functions appear to work according to specification and that performance requirements appear to have been met.
In general, testing is finding out how well something works. In terms of human beings, testing tells what level of knowledge or skill has been acquired. In computer hardware and software development, testing is used at key checkpoints in the overall process to determine whether objectives are being met. For example, in software development, product objectives are sometimes tested by product user representatives. When the design is complete, coding follows; the finished code is then tested at the unit or module level by each programmer, and at the component level when all components are combined.
Evaluation is the process of determining significance or worth, usually by careful appraisal and study. It is the analysis and comparison of actual progress versus prior plans, oriented toward improving plans for future implementation, and it is part of a continuing management process consisting of planning, implementation, and evaluation, ideally with each following the other in a continuous cycle until successful completion of the activity.
Testing strategies
A strategy for software testing integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software. Any testing strategy must incorporate test planning, test case design, test execution, and the resultant data collection and evaluation. The software testing strategies used are:
• Unit testing
• Functional testing
• Stress testing
• Integration testing
• User acceptance testing
Unit testing focuses the verification effort on the smallest unit of each module in the system. It comprises the set of tests performed by an individual programmer prior to integration of the unit into a larger system, and involves the various tests that a programmer performs on a program unit.
Using the unit test plans prepared during system design as a guide, the control paths are tested to uncover errors within the boundary of the module. In this testing, each module was tested and found to work satisfactorily with respect to the expected output from the module.
The unit test considerations taken into account are:
• Interfacing errors
• Integrity of local data structures
• Boundary conditions
• Independent paths
• Error-handling paths
Functional testing
Functional testing involves exercising the code with correct input values for which the expected results are known. The system being developed is tested with all the nominal input values, so that the expected results are received. The system is also tested with the boundary values.
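Boundary-value checking of this kind can be illustrated with a toy validation rule; the rule and its limits are hypothetical, not taken from the project:

```python
def quantity_valid(q, max_per_order=10):
    """Hypothetical validation rule: an order must request at least one
    copy and at most max_per_order copies. It exists only to
    illustrate nominal- and boundary-value testing."""
    return 1 <= q <= max_per_order

# A nominal value, the two boundaries, and the values just outside them.
expected = {0: False, 1: True, 5: True, 10: True, 11: False}
results = {q: quantity_valid(q) for q in expected}
```

The cases just inside and just outside each boundary (0, 1, 10, 11) are where off-by-one defects typically hide, which is why functional tests include them alongside the nominal value.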
Stress testing
Stress tests are designed to overload a system in various ways. The system being developed is tested by, for example, attempting to sign on more than the maximum number of allowed terminals, inputting mismatched data types, and processing more than the allowed number of identifiers.
Integration testing
This is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfaces. In this testing, all the modules are integrated and the entire system is tested as a whole; the possibility of errors occurring is low, since each module has already been unit tested. If any error does occur, it is found in this step and rectified before passing to the next step.
User acceptance testing
User acceptance is the key factor in the development of a successful system. The system under consideration was tested for acceptance by constantly keeping in touch with prospective users during development, and changes were made as and when required.
Advantages
• Simple and easy to use.
• Easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
Disadvantages
• Adjusting scope during the life cycle can kill a project.
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• A poor model for complex and object-oriented projects.
• A poor model for long and ongoing projects.
• A poor model where requirements are at a moderate to high risk of changing.
CONCLUSION
The RGB subdivision scheme has several advantages over both classical and adaptive subdivision schemes, as well as over CLOD models: it supports fully dynamic selective refinement while remaining compatible with the Loop subdivision scheme; it is better adaptive than previously known schemes based on the one-to-four triangle split pattern; it does not require hierarchical data structures; and selective refinement can be implemented efficiently by plugging faces into the mesh according to rules encoded in lookup tables, thus avoiding cumbersome procedural updates.
FUTURE ENHANCEMENTS
Our prototype, integrated into MeshLab, can already be used for interactive editing of LOD. However, a more careful implementation of our data structures should provide a much more efficient engine, suitable for tasks such as real-time view-dependent rendering or integration into a solid modeler. In the next version, these data structures can be slimmed down so that performance improves.