
The RGB Subdivision is an open source, portable, and extensible system for processing and editing unstructured 3D triangular meshes. It allows the level of detail (LOD) of a mesh to be edited in several ways:

- A portion of the mesh can be selected and either refined or coarsened by setting the desired LOD inside and outside the selection. The LOD may be specified in terms of subdivision levels or maximal edge length;
- A brush tool can be used to adjust the LOD locally: the region swept by the brush is either refined or coarsened according to the parameters set for the brush;
- Atomic primitives that refine or coarsen the mesh locally can be applied individually for fine editing.

Direct and reverse subdivision may thus be used together to take any mesh and automatically generate a whole hierarchy of LODs, both coarser and more refined than the base mesh. Adaptivity means that cells at different levels of subdivision are combined within a single mesh.

The RGB subdivision is a mechanism for the adaptive subdivision of triangle meshes that supports fully dynamic selective refinement and is compliant with the Loop subdivision scheme in the limit surface.

Our scheme supports dynamic selective refinement, as in Continuous Level of Detail models, and generates conforming meshes at all intermediate steps. The RGB subdivision is encoded in a standard topological data structure, extended with a few attributes, which can be used directly for further processing.

Subdivision surfaces are becoming more and more popular in computer graphics and CAD. During the last decade, they have found major applications in the entertainment industry and in simulation. RGB is a mechanism for the adaptive subdivision of triangle meshes that supports fully dynamic selective refinement and is compliant with the Loop subdivision scheme in the limit surface. Our scheme supports dynamic selective refinement, as in Continuous Level of Detail models, and generates conforming meshes at all intermediate steps.
RELATED WORK

RED-GREEN TRIANGULATIONS

Red-green triangulations were introduced in the context of finite-element methods and have become popular in common practice as an empirical way to obtain conforming adaptive meshes from hierarchies of triangle meshes generated by one-to-four triangle split.

A variant of red-green triangulations was used to support multiresolution editing of meshes based on the Loop subdivision scheme. Adaptive meshes are computed by reverse subdivision, starting at the finest level and pruning over-refined triangles. Also in this case, a restricted nonconforming mesh is computed first, which is then fixed by further bisection of some triangles. Correct relocation of vertices is handled by using a hierarchical data structure. Recently, another variant, called incremental subdivision, was presented for both the Loop and the butterfly schemes. In this case, the correct computation of the geometry of control points is addressed by using a larger support area for refinement.

The resulting triangles can be regarded as being of green and blue types in our terminology. A closed-form solution of the subdivision rule permits computing control points correctly for a vertex at any level, at the cost of some over-refinement. The 4-8 subdivision is based on edge split, as in our case, applied to a special class of triangle meshes called tri-quad meshes. The correct position of control points is addressed and resolved in this case, too, with a certain amount of over-refinement.






TRIANGLE MESHES

A triangle mesh is a triple M = (V, E, T), where V is a set of points in 3D space, called vertices; T is a set of triangles having their vertices in V, such that any two triangles of T are either disjoint or share exactly one vertex or one edge; and E is the set of edges of the triangles in T. Standard topological incidence and adjacency relations are defined over the entities of M.






We assume we always deal with manifold meshes, with or without boundary: each edge of E is incident to either one or two triangles of T, and the star of a vertex (i.e., the set of entities incident to it) is homeomorphic either to an open disc or to a closed half-plane. Edges incident to just one triangle and vertices whose star is homeomorphic to a half-plane form the boundary of the mesh; they are called boundary edges and boundary vertices, respectively. The remaining edges and vertices are said to be internal. A mesh with an empty boundary is said to be watertight. A mesh is regular if all its vertices have a valence of six. Vertices with a valence different from six are called extraordinary.
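These definitions can be checked mechanically. A minimal Python sketch, assuming the mesh is given as a list of vertex-index triples (an encoding the text itself does not prescribe); the valence test for extraordinary vertices is applied to internal vertices only:

```python
from collections import defaultdict

# Classify edges and vertices of a triangle mesh per the definitions above:
# boundary edges are incident to exactly one triangle, boundary vertices lie
# on boundary edges, and internal vertices with valence != 6 are extraordinary.
def classify(triangles):
    edge_count = defaultdict(int)   # edge -> number of incident triangles
    neighbors = defaultdict(set)    # vertex -> adjacent vertices
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_count[frozenset((u, v))] += 1
            neighbors[u].add(v)
            neighbors[v].add(u)
    boundary_edges = {e for e, n in edge_count.items() if n == 1}
    boundary_vertices = {v for e in boundary_edges for v in e}
    extraordinary = {v for v, nbrs in neighbors.items()
                     if v not in boundary_vertices and len(nbrs) != 6}
    return boundary_edges, boundary_vertices, extraordinary
```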

A nonconforming mesh is a structure similar to a mesh in which triangles may violate the edge-sharing rule: there may exist adjacent triangles t and t′ such that an edge of t overlaps just a portion of the corresponding edge of t′.

LOOP SUBDIVISION

The Loop subdivision is an approximating scheme that converges to a C2 surface when applied to a regular mesh. The subdivision pattern is the one-to-four triangle split.




Consider a base mesh. We assign level zero to all its vertices and edges, and color green to all its edges. As a general rule, the level of a triangle is defined to be the lowest among the levels of its edges, and the color of a triangle is defined as follows: green if all its edges are at the same level; red if two of its edges are at the same level l and the third edge is at level l + 1; and blue if two of its edges are at level l + 1 and the third edge is at level l. It follows that all triangles in the base mesh are green at level zero.

Edge split operations applied to boundary edges affect just one triangle. The resulting configuration depends only on the color of the triangle incident to e.







EDGE SPLIT RULES


All rules defined in this section are purely topological. Just for the sake of clarity, in the figures we use meshes composed of equilateral, right, and isosceles triangles to depict the three types of triangles that may appear in an RGB triangulation. Actually, the shape of triangles is totally irrelevant in the subdivision process; only level and color codes matter.
GG-split
Triangles t0 and t1 are both green. The bisection of each of t0 and t1 at the midpoint of e generates two red triangles at level l. Each such triangle has: one green edge at level l (the edge shared with the old triangle), one green edge at level l + 1 (one half of e), and one red edge at level l (the new edge inserted to split the old triangle).
GR-split
Triangle t0 is green and t1 is red. Triangle t0 is bisected and edge e is split as above. The bisection of t1 generates one blue triangle at level l and one green triangle at level l + 1. The green triangle is incident to the green edge at level l + 1 of the old triangle t1, and its other two edges are also at level l + 1 (the edge inserted to subdivide t1 and one half of e). The blue triangle is incident to the red edge of the old triangle t1 and also has two green edges at level l + 1 (the edge inserted to subdivide t1 and the other half of e).
RR-split
Triangles t0 and t1 are both red. Both are bisected as triangle t1 in the previous case, and each of them generates the same configuration of one blue triangle at level l and one green triangle at level l + 1. This case comes in two variants, RR1-split and RR2-split, which can be distinguished by the cycle of edge colors on the boundary of the diamond formed by t0 and t1: red-green-red-green for RR1-split, or red-red-green-green for RR2-split.
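The GG-split bookkeeping can be checked against the color rule. A hypothetical Python model, where a triangle is just its three (level, color) edges; the function names are illustrative:

```python
# Color of a triangle from its (level, color) edges, per the rule stated earlier.
def triangle_color(edges):
    levels = [lvl for lvl, _ in edges]
    lo, hi = min(levels), max(levels)
    if lo == hi:
        return "green"
    return "red" if sum(1 for lvl in levels if lvl == hi) == 1 else "blue"

# Hypothetical bookkeeping for a GG-split at level l: splitting green edge e
# (level l) shared by two green triangles t0, t1; each old triangle yields two
# red triangles with the edges listed below.
def gg_split(l):
    old_side = (l, "green")      # the edge shared with the old triangle
    half_e   = (l + 1, "green")  # one half of the split edge e
    bisector = (l, "red")        # the new edge inserted to split the triangle
    config = [old_side, half_e, bisector]
    return [list(config) for _ in range(4)]  # four red triangles in total
```

Running the rule over the output confirms that every triangle produced by a GG-split classifies as red at level l, as the text states.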

VERTEX GEOMETRY

The RGB subdivision is now derived by studying the geometry of vertices. The basic idea is to adapt the rules of Loop subdivision to the topology of RGB triangulations, so that the limit surfaces of the two subdivision schemes coincide.

Note that in RGB subdivision both refinement and coarsening operations are
allowed; therefore, updates to control points must be made for odd vertices during
refinement and for even vertices both during refinement and during coarsening.

When a vertex v is removed, its neighbors are also checked and the contribution
of v is subtracted from its neighbors that received it. For a given neighbor vi, if the
minimum level of incident edges has become lower, then its current control point is also
updated accordingly.




SYSTEM SPECIFICATION

Hardware and Software Requirements

1. Processor: Intel Pentium IV (2.8 GHz) and upwards

2. Operating System: Windows 2000 & XP

3. RAM: 256 MB

4. Hard Disk: 80 GB and above

5. Front End: C#.NET

6. Programming Interface: VB.NET

7. Web Server: Internet Information Server
















SOFTWARE DESCRIPTION

Overview of the Project
The objective of this project is to develop an online book store. When the user types the URL of the Book Store in the address field of the browser, a Web server is contacted to get the requested information. In the .NET Framework, IIS (Internet Information Services) acts as the Web server. The sole task of a Web server is to accept incoming HTTP requests and to return the requested resource in an HTTP response. The first thing IIS does when a request comes in is to decide how to handle it. Its decision is based upon the requested file's extension. For example, if the requested file has the .asp extension, IIS routes the request to asp.dll; if it has the .aspx extension, the request is handled by the ASP.NET engine.

The ASP.NET Engine


The ASP.NET engine then gets the requested file and, if necessary, contacts the database through ADO.NET for the required data; the information is then sent back to the client's browser. Figure 21 shows how a client browser interacts with the Web server and how the Web server handles the request from the client.



Internet Information Services (IIS)

IIS is a set of Internet-based services for Windows machines. Originally supplied as part of the Option Pack for Windows NT, they were subsequently integrated with Windows 2000 and Windows Server 2003. The current (Windows 2003) version is IIS 6.0 and includes servers for FTP (a software standard for transferring computer files between machines with widely different operating systems), SMTP (Simple Mail Transfer Protocol, the de facto standard for email transmission across the Internet), and HTTP/HTTPS (HTTPS being the secure version of HTTP, the communication protocol of the World Wide Web).

 
ISAPI extensions

The web server itself cannot directly perform server-side processing but can delegate the task to ISAPI (the Application Programming Interface of IIS) applications on the server. Microsoft provides a number of these, including ones for Active Server Pages and ASP.NET.

Versions and security

Internet Information Services is designed to run on Windows server operating systems. A restricted version that supports one web site and a limited number of connections is also supplied with Windows XP Professional.

Microsoft has also changed the server account that IIS runs on. In versions of IIS before 6.0, all the features ran under the System account, allowing exploits to run wild on the system. Under 6.0, many of the processes have been brought under a Network Service account that has fewer privileges. In particular, this means that if there were an exploit in a feature, it would not necessarily compromise the entire system.






ASP.NET

ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET has many advantages, both for programmers and for end users, because it is compatible with the .NET Framework. This compatibility allows users to use the following features through ASP.NET:

a) Database-driven applications

ASP.NET allows programmers to develop web applications that interface with a database. The advantage of ASP.NET is that it is object-oriented and has many programming tools that allow for faster development and more functionality.
b) Faster web applications

Two aspects of ASP.NET make it fast: compiled code and caching. In ASP.NET the code is compiled into machine language before a visitor ever comes to the website. Caching is the storage of information in memory for faster access in the future. ASP.NET allows programmers to set up pages, or areas of pages, that are commonly reused to be cached for a set period of time to improve the performance of web applications. In addition, ASP.NET allows the caching of data from a database, so the website is not slowed down by frequent visits to a database when the data does not change very often.
c) Memory-leak and crash protection

ASP.NET automatically recovers from memory leaks and errors to make sure that the website is always available to visitors. ASP.NET also supports code written in more than 25 .NET languages (including VB.NET, C#, and JScript.NET). This is achieved by the Common Language Runtime (CLR), which supports multiple languages.
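The output-caching idea in point (b) can be sketched outside ASP.NET. A minimal Python model, assuming a simple time-to-live policy (the class and method names are illustrative, not the ASP.NET caching API):

```python
import time

# Sketch of timed output caching: rendered pages are kept for a set
# time-to-live and regenerated only after they expire.
class PageCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (rendered_html, timestamp)

    def get_or_render(self, url, render, now=None):
        now = time.time() if now is None else now
        hit = self.store.get(url)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]          # still fresh: serve the cached copy
        html = render(url)         # expired or missing: regenerate the page
        self.store[url] = (html, now)
        return html
```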





Authentication

There are two separate authentication layers in an ASP.NET application. All requests flow through IIS before they are handed to ASP.NET, and IIS can decide to deny access before ASP.NET even knows about the request. Here is how the process works:

1. IIS checks whether an incoming request comes from an IP address that is allowed access to the domain. If not, the request is denied.
2. IIS performs its own user authentication, if it is configured to do so. By default, IIS allows anonymous access, and requests are authenticated automatically.
3. When a request is passed from IIS to ASP.NET with an authenticated user, ASP.NET checks whether impersonation is enabled. If so, ASP.NET acts as though it were the authenticated user; if not, ASP.NET acts with its own configured account.
4. Finally, the identity is used to request resources from the operating system. If all the necessary resources can be obtained, the user's request is granted; otherwise the request is denied.
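The four steps above can be sketched as a single decision function. A plain Python illustration; the callables stand in for IIS authentication and the operating system's resource checks, and none of the names are real IIS APIs:

```python
# Sketch of the four-step IIS/ASP.NET authorization flow described above.
def handle_request(request, allowed_ips, authenticate,
                   impersonation_enabled, app_account, acquire_resources):
    if request["ip"] not in allowed_ips:        # step 1: IP restriction
        return "denied"
    user = authenticate(request)                # step 2: IIS authentication
    if user is None:
        return "denied"
    # step 3: impersonation decides which identity ASP.NET acts under
    identity = user if impersonation_enabled else app_account
    # step 4: the identity requests resources from the operating system
    return "granted" if acquire_resources(identity, request["path"]) else "denied"
```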

Request processing

When a request comes into the IIS Web server, its extension is examined and, based on this extension, the request is either handled directly by IIS or routed to an ISAPI extension. An ISAPI extension is a compiled class that is installed on the Web server and whose responsibility is to return the markup for the requested file type. By default, IIS handles the request and simply returns the contents of the requested file. This makes sense for static files, like images, HTML pages, CSS files, external JavaScript files, and so on. For example, when a request is made for an .html file, IIS simply returns the contents of the requested HTML file.

For files whose content is dynamically generated, the ISAPI extension configured for the file extension is responsible for generating the content of the requested file. For example, a Web site that serves up classic ASP pages has the .asp extension mapped to the asp.dll ISAPI extension, which executes the requested ASP page and returns its generated HTML markup. If the Web site serves up ASP.NET Web pages, IIS has mapped the .aspx extension to aspnet_isapi.dll, an ISAPI extension that starts the process of generating the rendered HTML for the requested ASP.NET Web page. The aspnet_isapi.dll ISAPI extension is a piece of unmanaged code; that is, it is not code that runs in the .NET Framework. When IIS routes the request to the aspnet_isapi.dll ISAPI extension, the ISAPI extension routes the request on to the ASP.NET engine, which is written in managed code, that is, code that runs in the .NET Framework.

The ASP.NET engine is strikingly similar to IIS in many ways. Just as IIS has a directory mapping file extensions to ISAPI extensions, the ASP.NET engine maps file extensions to HTTP handlers. An HTTP handler is a piece of managed code that is responsible for generating the markup for a particular file type.
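The extension-based dispatch described above can be sketched as a table lookup. An illustrative Python model of the ISAPI mapping (not how IIS is actually configured):

```python
import os

# Sketch of extension-based routing: static files are served directly,
# mapped extensions are handed to the registered handler.
def route(path, extension_map, serve_static):
    ext = os.path.splitext(path)[1].lower()
    handler = extension_map.get(ext)
    return handler(path) if handler is not None else serve_static(path)
```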

E-commerce and databases

Customers ordering from an e-commerce website need to be able to get information about a vendor's products and services, ask questions, select items they wish to purchase, and submit payment information. Vendors need to be able to track customer inquiries and preferences and process their orders, so a well-organized database is essential for the development and maintenance of an e-commerce site. In a static Web page, content is determined when the page is created; as users access a static page, it always displays the same information. An example of a static Web page is a page displaying company information. In a dynamic Web page, content varies based on user input and data received from external sources. We use the term "data-based Web pages" to refer to dynamic Web pages deriving some or all of their content from data files or databases.
A data-based Web page is requested when a user clicks a hyperlink or the submit button on a Web page form. If the request comes from clicking a hyperlink, the link specifies either a Web server program or a Web page that calls a Web server program. In some cases, the program performs a static query, such as "Display all items from the Inventory". Although this query requires no user input, the results vary depending on when the query is made. If the request is generated when the user clicks a form's submit button instead of a hyperlink, the Web server program typically uses the form inputs to create a query. For example, the user might select five books to be purchased and then submit the input to the Web server program. The Web server program then services the order, generating a dynamic Web page response to confirm the transaction. In either case, the Web server is responsible for formatting the query results by adding HTML tags. The Web server program then sends its output back to the client's browser as a Web page.
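The form-to-query step can be sketched concretely. A hypothetical Python servicing routine; the form field naming convention and the inventory table are assumptions, not part of the project:

```python
# Sketch: turn submitted form inputs (e.g., selected book checkboxes) into a
# parameterized SQL query, as in the book-order example above.
def build_order_query(form):
    ids = [int(v) for k, v in form.items() if k.startswith("book_")]
    placeholders = ", ".join("?" for _ in ids)
    sql = f"SELECT title, price FROM inventory WHERE id IN ({placeholders})"
    return sql, ids
```

Parameter placeholders keep user input out of the SQL text itself, which is the usual guard against injection in any servicing program.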

Server-side and client-side processing

An e-commerce organization can create data-based Web pages by using server-side and client-side processing technologies or a hybrid of the two. With server-side processing, the Web server receives the dynamic Web page request, performs all processing necessary to create the page, and then sends it to the client for display in the client's browser. Client-side processing is done on the client workstation by having the client browser execute a program that interacts directly with the database.



Figure outlines commonly used server-side, client-side, and hybrid Web and data
processing technologies; client-side scripts are in dashed lines to indicate they are unable
to interact directly with a database or file but are used to validate user input on the client,
then send the validated inputs to the server for further processing.



Server-side processing

Generally, dynamic or data-driven Web pages use HTML forms to collect user inputs and submit them to a Web server. A program running on the server processes the form inputs, dynamically composing a Web page reply. This program, called a servicing program, can be either a compiled executable program or a script interpreted into machine language each time it is run. Compiled server programs: when a user submits HTML form data for processing by a compiled server program, the Web server invokes the servicing program. The servicing program is not part of the Web server; it is an independent executable program running on the Web server. It processes the user input, determines the action that must be taken, interacts with any external sources (e.g., a database), produces an HTML document, and terminates. The Web server then sends the HTML document back to the user's browser, where it is displayed. Figure 23 shows the flow of an HTTP request from the client to the Web server, which passes it to the servicing program. The program creates an HTML document to be sent to the client browser.


Popular languages for creating compiled server programs are Java, Visual Basic, and C++, but almost any language that can create executable programs can be used, provided that it supports commands used by one of the protocols that establish guidelines for communication between Web servers and servicing programs. The first such protocol for use with HTML forms, introduced in 1993, was the Common Gateway Interface (CGI); many Web sites still use CGI servicing programs. However, a disadvantage of CGI-based servicing programs is that each form submitted to a Web server starts its own copy of the servicing program on the Web server.
A busy Web server is likely to run out of memory when it services many forms simultaneously; thus, as interactive Web sites have gained popularity, Web server vendors have developed new technologies to process form inputs without starting a new copy of the servicing program for each browser input. Examples of these technologies include Java Servlets and Microsoft's ASP.NET; they allow a single copy of the servicing program to service multiple users without starting multiple instances of the program.
ASP.NET has introduced many new capabilities to server-side Web programming, including a new category of elements called server controls that generate as many as 200 HTML tags and one or more JavaScript [9] functions from a single server control tag. Server controls support the processing of user events, such as clicking a mouse or entering text, at either the client browser or the Web server. Server controls also encourage the separation of programming code into files and/or areas different from the HTML tags and text of a Web page, thus allowing HTML designers and programmers to work together more effectively. Server-side scripts: Web-based applications can also use server-side scripts to create dynamic Web pages that retrieve and display information from a backend database and modify data records. The processing architecture is the same as that used for compiled server programs (Figure 21), except that the Web server processing is performed through an interpreted script rather than a compiled program.

If needed, a developer can have a single Web server process a variety of scripts written with any or all of these technologies. The Web server knows which script interpreter to invoke by taking note of the requested script's file extension. The table below lists some commonly used extensions and the related technologies.






Commonly used file extensions and the related technologies:

.asp    Microsoft Active Server Pages
.aspx   Microsoft ASP.NET web page
.js     Microsoft Scripting Language "JScript" file extension
.php    PHP script
.vbp    Visual Basic project

Programs created through ASP.NET are not backward compatible with scripts created through the original server-side scripting technology [10]; upgrading older scripts to ASP.NET requires substantial revision. ASP and ASP.NET programs can, however, run on the same Web server, as ASP.NET programs are distinguished by their .aspx file extensions.

Compiled server-side programs offer two main advantages. First, they are compiled and stored in a machine-readable format, so they usually run faster than scripts. Second, compiled programs are usually created in integrated development environments that provide debugging utilities. The advantage of using scripts is that their modification requires only a text editor rather than installation of an associated development environment.

When an ASP.NET application needs to access the database, it submits an appropriate request to ADO.NET through a DataAdapter object, which in turn sends a command to the Connection object. The Connection object establishes a connection to the database and submits the request sent by the DataAdapter.

The Connection object connects to the database through a Provider such as ODBC.NET. The Provider acts as a translator between the Connection object and the database: it translates the request for data into the database's language and brings back the data, if needed.

The Provider sends the data back to the DataAdapter through the Connection object, and the DataAdapter places the data in a DataSet object residing in the application's memory. Instead of storing data in a DataSet, a DataReader can be used to retrieve data from the database. Results are returned in a resultset, which is stored in the network buffer on the client until a request is made to the Read method of the DataReader. Using the DataReader can increase application performance by retrieving data as soon as it is available, rather than waiting for the entire results of the query to be returned.

A DataSet can be used to interact with data dynamically, such as binding to a Web Form, caching locally in the application, or providing a hierarchical XML view of the data. If such functionality is not required by the application, a DataReader can be used to improve performance. By using a DataReader, the memory used by a DataSet can be saved, as well as the processing required to fill its contents.

When a DataReader is used, a DataAdapter is not required to send the data to the application. In this project, a DataReader is used to read data, and the Command object's ExecuteNonQuery method is used to write to the database.
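The DataReader-versus-DataSet trade-off can be illustrated outside .NET. A Python sketch using the standard sqlite3 module as a stand-in provider (the function names are illustrative, not ADO.NET APIs):

```python
import sqlite3

# DataReader-style: forward-only iteration over an open connection.
def stream_rows(conn, sql):
    for row in conn.execute(sql):
        yield row

# DataSet-style: fetch everything into memory, then disconnect.
def load_dataset(conn, sql):
    return list(conn.execute(sql))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT)")
conn.executemany("INSERT INTO books VALUES (?)", [("A",), ("B",)])
dataset = load_dataset(conn, "SELECT title FROM books")
conn.close()  # the cached dataset remains usable after disconnecting
```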

.NET FRAMEWORK

.NET is Microsoft's development model in which software becomes platform- and device-independent and data becomes available over the Internet. The .NET Framework is the infrastructure of .NET. .NET is built from the ground up on open architecture. .NET is a platform that can be used for building and running the next generation of Microsoft Windows and Web applications. The goal of the Microsoft .NET platform is to simplify web development. The .NET Framework provides the foundation upon which applications and XML web services are built and executed. The unified nature of the .NET Framework means that all applications, whether they are Windows applications, web applications, or XML web services, are developed by using a common set of tools and code, and are easily integrated with one another.
The .NET Framework consists of two main parts:

The Common Language Runtime: the runtime handles runtime services, including language integration, security, and memory management. During development, the runtime provides features that are needed to simplify development.

Class Libraries: class libraries provide reusable code for most common tasks, including data access, XML web service development, and web and Windows forms.

Need for the .NET Framework

The .NET Framework was developed to overcome several limitations that developers have to deal with when developing web applications, and it makes strong use of the Internet as a means for solving these limitations.

Even with the advent of a global, easily accessible network for sharing information (the Internet), few applications work on more than one type of client or have the ability to seamlessly interact with other applications. This limitation leads to two major problems that developers must confront:

- Developers typically have to limit their scope.
- Developers spend the majority of their time rewriting applications to work on each type of platform and client rather than designing new applications.

The .NET Framework solves the preceding two problems by providing the runtime, which is language-independent and platform-independent, and by making use of the industry-standard XML. Language independence in .NET allows developers to build an application in any .NET-based language and know that the web application will work on any client that supports .NET. XML web services use XML to send data, thereby ensuring that any XML-capable client can receive that data. Since XML is an open standard, most modern clients, such as computer operating systems, cellular telephones, personal digital assistants (PDAs), and game consoles, can accept XML data.
Components of the .NET Framework

The .NET Framework provides the compile-time and runtime foundation to build and run .NET-based applications. The .NET Framework consists of different components that help to build and run .NET-based applications:

- Platform substrate
- Application services
- .NET Framework Class Library
- Common Language Runtime
- Microsoft ADO.NET
- ASP.NET
- XML Web services
- User interfaces
- Languages

Benefits of the .NET Framework

The benefits of using the .NET Framework for developing applications include:

1. Based on Web standards and practices: the .NET Framework fully supports existing Internet technologies, including HTML, HTTP, XML, SOAP, and other Web standards.

2. Designed using unified application models: the functionality of a .NET class is available from any .NET-compatible language or programming model. Therefore, the same piece of code can be used by Windows applications, web applications, and XML web services.

3. Easy for developers to use: the .NET Framework provides a unified type system, which can be used by any .NET-compatible language. In the unified type system, all language elements are objects. These objects can be used by any .NET application written in any .NET-based language.

4. Extensible classes: the hierarchy of the .NET Framework is not hidden from the developer. You can access and extend .NET classes through inheritance.

THE VISUAL STUDIO .NET IDE

The Visual Studio .NET Integrated Development Environment (IDE) provides a common interface for developing various kinds of projects for the .NET Framework. The IDE provides a centralized location for designing the user interface of applications, writing code, and compiling and debugging the application. The Visual Studio .NET IDE is available to all programmers who use a language in the Visual Studio .NET suite.


VISUAL BASIC .NET

Visual Basic .NET is one of the languages directed toward meeting the objectives of the .NET initiative of creating distributed applications. Visual Basic .NET is the successor to Visual Basic 6. It is a language used to build applications targeted at the Microsoft .NET platform. Visual Basic .NET is a powerful object-oriented language that provides features such as abstraction, encapsulation, inheritance, and polymorphism.

Key Features of Visual Basic .NET

Some of the key features introduced in Visual Basic .NET are as follows:

- Overriding
- Overloading
- Inheritance
- Structured exception handling
- Multithreading
- Constructors and destructors

ADO.NET

ADO.NET is all about data access. Data is generally stored in a relational database in the form of related tables. Retrieving and manipulating data directly from a database requires knowledge of database commands to access the data.

 $  c

1. Disconnected data architecture- ëDO.NET uses the disconnected data


architecture. ëpplications connect to the database only while retrieving and updating
data. ëfter data is retrieved, the connection with the database closed. When the database
needs to be updated, the connection is re-established. Working with applications that to
do not follow a disconnected architecture leads to a wastage of valuable system
resources, since the application connect to the database and keeps the connection open
until it stops running, but does not actually interact with the database can cater to the
needs of several applications simultaneously since the interaction is for a shorter
duration.
2. Data cached in datasets- ë dataset is the most common method of
accessing data since it implements a disconnected architecture. Since ëDO.NET is based
on a disconnected data structure, it is not possible for the application to interact with the
database for processing each record. Therefore, the data is retrieved and stored in
datasets. ë dataset is a cached set of database records. We can work with the records
stored in a dataset as we work with real data; the only difference being that the dataset is
independent of data source and we remain disconnected from the data source.
3. Scalability - ADO.NET supports scalability by working with datasets.
Operations are performed on datasets instead of on the database. As a result, resources
are saved, and the database can meet the increasing demands of users more efficiently.
4. Data transfer in XML format - XML is the fundamental format for data
transfer in ADO.NET. Data is transferred from a database into a dataset, and from the
dataset to another component, by using XML. We can even use an XML file as a data
source and store its data in a dataset. Using XML as the data transfer language is
beneficial because XML is an industry-standard format for exchanging information
between different types of applications. Knowledge of XML is not required for working
with ADO.NET, since the conversion of data to and from XML is hidden from the user.
Because a dataset is stored as XML, any component that can read the XML format can
process the data.
5. Interaction with the database through data commands - All operations on
the database are performed by using data commands. A data command can be a SQL
statement or a stored procedure. We can retrieve, insert, delete, or modify data in a
database by executing data commands.
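Taken together, points 1, 4, and 5 describe a connect-query-cache-disconnect cycle. The following Python sketch illustrates the same pattern in a language-neutral way, using the standard sqlite3 module as a stand-in database; the employees table, its values, and the plain list standing in for a dataset are all hypothetical illustrations, not part of ADO.NET itself.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical table and data, standing in for a real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [(1, "Ana"), (2, "Ben")])
conn.commit()

# 1. Retrieve data with a data command, cache it in an in-memory
#    "dataset", then close the connection (disconnected architecture).
dataset = conn.execute("SELECT id, name FROM employees").fetchall()
conn.close()

# 2. Work with the cached records while disconnected.
names = [name for _, name in dataset]

# 3. Transfer the dataset to another component as XML.
root = ET.Element("employees")
for emp_id, name in dataset:
    ET.SubElement(root, "employee", id=str(emp_id)).text = name
xml_payload = ET.tostring(root, encoding="unicode")
print(names)
print(xml_payload)
```

The connection stays open only for the single SELECT; everything afterward, including the XML serialization, happens against the cached copy.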


FORM

An object designed primarily for data input or display, or for control of application
execution. You use forms to customize the presentation of data that your application
extracts from queries or tables. You can also print forms. You can design a form to run a
macro or a Visual Basic procedure in response to any of a number of events - for example,
to run a procedure when the value of a data field changes.

"$

An object designed for formatting, calculating, printing, and summarizing selected data.
You can view a report on your screen before you print it.

DATA ACCESS PAGE

An object that includes an HTML file and supporting files to provide custom access to
your data from Microsoft Internet Explorer. You can publish these files on your company
intranet to allow other users on your network who also have Office 2000 and Internet
Explorer version 5 or later to view, search, and edit your data.

MACRO

A macro is an object that is a structured definition of one or more actions that you want
Access to perform in response to a defined event. For example, you might design a macro
that opens a second form in response to the selection of an item on a main form. You
might have another macro that validates the content of a field whenever the value in the
field changes. You can include simple conditions in macros to specify when one or more
actions in the macro should be performed or skipped. You can use macros to open and
execute queries, to open tables, or to print or view reports. You can also run other macros
or Visual Basic procedures from within a macro.


MODULE

It is an object containing custom procedures that you code using Visual Basic. Modules
provide a more discrete flow of actions and allow you to trap errors, something you can't
do with macros. Modules can be stand-alone objects containing functions that can be
called from anywhere in your application, or they can be directly associated with a form
or a report to respond to events on the associated form or report.

A table stores the data that you can extract with queries and display in reports, or that
you can display and update in forms or data access pages. Notice that forms, reports, and
data access pages can use data either directly from tables or from a filtered "view" of the
data created by using queries. Queries can use Visual Basic functions to provide
customized calculations on data in your database. Access also has many built-in
functions that allow you to summarize and format your data in queries.
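The filtered "view" mentioned above can be sketched with plain SQL. The snippet below uses Python's sqlite3 module and a hypothetical orders table for illustration; an Access query plays the same role that the CREATE VIEW statement plays here.

```python
import sqlite3

# Hypothetical schema: a filtered "view" defined by a query, which a
# form or report could read instead of the base table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 50.0), (2, 150.0), (3, 20.0)])

# The view filters the base table; consumers query it like a table.
conn.execute("CREATE VIEW big_orders AS "
             "SELECT id, total FROM orders WHERE total > 100")
rows = conn.execute("SELECT id, total FROM big_orders").fetchall()
conn.close()
print(rows)
```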

EVENT

An event is any change in the state of an Access object.


For example, you can write macros or Visual Basic procedures to respond to:

‡ Opening a form
‡ Closing a form
‡ Entering a new row on a form
‡ Changing data in the current record

CONTROL

A control is an object on a form or report that contains data. You can even design a macro
or a Visual Basic procedure that responds to the user pressing individual keys on the
keyboard when entering data.
SYSTEM IMPLEMENTATION

jccc

The RGB subdivision is applied up to a certain level and the resulting mesh is used
for further processing. Even when users are interested in rendering the limit surface,
subdivided meshes can be useful in intermediate computations. One possibility is to use
three dynamic arrays, for vertices, edges, and triangles, respectively, with a
garbage-collection mechanism to manage the reuse of locations freed by coarsening
operators.
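A minimal sketch of such a dynamic array with free-slot recycling, in Python (the class and its method names are illustrative assumptions, not the authors' implementation):

```python
# One dynamic array with a free-list garbage-collection mechanism:
# slots freed by coarsening operators are recycled before the array grows.
class SlotArray:
    def __init__(self):
        self.items = []  # dynamic array of elements (None marks a free slot)
        self.free = []   # indices freed by coarsening operators

    def allocate(self, item):
        """Reuse a freed slot if one is available, else append."""
        if self.free:
            idx = self.free.pop()
            self.items[idx] = item
        else:
            idx = len(self.items)
            self.items.append(item)
        return idx

    def release(self, idx):
        """Mark a slot as free when a coarsening operator removes it."""
        self.items[idx] = None
        self.free.append(idx)

# One such array each would hold vertices, edges, and triangles.
vertices = SlotArray()
a = vertices.allocate((0.0, 0.0, 0.0))
b = vertices.allocate((1.0, 0.0, 0.0))
vertices.release(a)                     # coarsening frees a slot
c = vertices.allocate((0.5, 1.0, 0.0))  # refinement reuses the freed slot
print(a == c, len(vertices.items))
```

Because freed indices are reused, indices held elsewhere in the mesh stay valid and the arrays never need compacting during interactive editing.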

EXISTING SYSTEM

Ô? Red-green triangulations are confined to the finite-element setting.
Ô? They require hierarchical data structures.
Ô? They support only partially dynamic selective refinement.

PROPOSED SYSTEM

Ô? Red-green triangulations were introduced in the context of finite-element methods
and have become popular in common practice.
Ô? The scheme is more adaptive than previously known schemes based on the
one-to-four triangle split pattern, and it does not require hierarchical data structures.
Ô? It supports fully dynamic selective refinement while remaining compatible with
the Loop subdivision scheme.
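For reference, the one-to-four triangle split that the proposed scheme improves upon can be sketched in a few lines of Python (the function names and the 2D coordinates are illustrative assumptions):

```python
# One-to-four triangle split, the pattern underlying Loop-style
# subdivision: each edge is bisected and the triangle is replaced by
# three corner triangles plus one central triangle.
def midpoint(p, q):
    return tuple((a + b) / 2.0 for a, b in zip(p, q))

def split_one_to_four(tri):
    v0, v1, v2 = tri
    m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
    return [
        (v0, m01, m20),   # corner triangle at v0
        (m01, v1, m12),   # corner triangle at v1
        (m20, m12, v2),   # corner triangle at v2
        (m01, m12, m20),  # central triangle
    ]

tris = split_one_to_four(((0.0, 0.0), (2.0, 0.0), (0.0, 2.0)))
print(len(tris))
```

Applied uniformly, this split quadruples the triangle count at every level; the point of RGB subdivision is to apply refinement selectively instead.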
SYSTEM ARCHITECTURE

[Figure: system architecture - Adaptive Subdivision of Triangle Meshes, RGB
Triangulation, and RGB Subdivision; triangle labels GG, RG, RR.]
DATA FLOW DIAGRAMS

[Figure: processing pipeline - Image Model → Background Elimination → Grey Scaling
(generate average color model) → Skeleton Imaging → Outlining → Triangle Mesh
Selection → Loop Subdivision of Mesh → RGB Triangulation.]

[Figures, DFD Levels 0-2. Level 0: Image Loading → Gray Scaling → RGB Calculation →
Background Removal, with tracked objects fed to the Skeleton Process (raw image
input/output). Level 1: Raw Image → (optional) Meshing Tool → Triangle Mesh Creation,
then Segregation of Mesh into Multi-Mesh Tools via Loop Triangulation. Level 2: Mesh
Tool → RGB Color Calculation → Choosing of Color [RG, GG, RB] → Merging of Tools
and Colors → Final Output of Mesh.]
SYSTEM TESTING AND EVALUATION



      System testing is actually a series of different tests whose primary purpose is to
fully exercise the computer-based system. This verifies that all system elements have
been properly integrated and perform their allocated functions. Design error-handling
paths that test all information coming from other elements of the system, and conduct a
series of tests that simulate bad data or other potential errors in the software interface.

SOFTWARE TESTING

Software testing is the process of executing a program or system with the
intent of finding errors. It is the major quality measure employed during software
engineering development. Its basic function is to detect errors in the software. Testing is
necessary for the proper functioning of the system.
Testing is usually performed for the following purposes:
Ô? To improve quality.
Ô? For Verification & Validation (V&V).
Ô? For reliability estimation.

TESTING OBJECTIVES

The philosophy behind testing is to find errors. Test cases are created with the express
intent of determining whether the system will process data correctly. There are several
rules that can serve as testing objectives. They are:
Ô? Testing is a process of executing a program with the intent of
finding an error.
Ô? A good test case is one that has a high probability of finding an
undiscovered error.
Ô? A successful test is one that uncovers an as-yet-undiscovered error.
If testing is conducted successfully according to the objectives stated above, it
will uncover errors in the software. Testing also demonstrates that software functions
appear to be working according to specification and that performance requirements
appear to have been met.

TESTING AND EVALUATION
In general, testing is finding out how well something works. In terms of
human beings, testing tells what level of knowledge or skill has been acquired. In
computer hardware and software development, testing is used at key checkpoints in the
overall process to determine whether objectives are being met. For example, in software
development, product objectives are sometimes tested by product user representatives.
When the design is complete, coding follows, and the finished code is then tested: at the
unit or module level by each programmer, at the component level when modules are
combined, and at the system level when all components are brought together.
Evaluation is the process of determining significance or worth, usually by careful
appraisal and study. It is the analysis and comparison of actual progress versus prior
plans, oriented toward improving plans for future implementation. It is part of a
continuing management process consisting of planning, implementation, and evaluation,
ideally with each following the other in a continuous cycle until successful completion of
the activity.











SOFTWARE TESTING STRATEGIES
A strategy for software testing integrates software test-case design
techniques into a well-planned series of steps that result in the successful construction of
software. Any testing strategy must incorporate test planning, test-case design, test
execution, and the resultant data collection and evaluation. The various software testing
strategies are:
Ô? Unit Testing
Ô? Functional Testing
Ô? Stress Testing
Ô? Integration testing
Ô? User ëcceptance testing.

UNIT TESTING
Unit testing focuses the verification effort on the smallest unit of each
module in the system. It comprises the set of tests performed by an individual
programmer prior to the integration of the unit into a larger system, and involves the
various tests that a programmer performs on a program unit.
Using the unit test plans prepared during the design of the system as
a guide, the control paths are tested to uncover errors within the boundary of the
module. In this testing, each module was tested and found to be working
satisfactorily, producing the expected output.
The unit test considerations that were taken into account are,
Ô? Interfacing errors
Ô? Integrity of local data structure
Ô? Boundary Condition
Ô? Independent Paths
Ô? Error Handling Paths
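The considerations above, boundary conditions and error-handling paths in particular, can be illustrated with a small self-contained unit test in Python; the clamp function is a hypothetical unit under test, not part of this project:

```python
# Hypothetical unit under test: clamp a value into [low, high].
def clamp(value, low, high):
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Unit tests exercising nominal, boundary, and error-handling paths.
def test_clamp():
    assert clamp(5, 0, 10) == 5    # nominal (independent) path
    assert clamp(-1, 0, 10) == 0   # lower boundary condition
    assert clamp(11, 0, 10) == 10  # upper boundary condition
    try:
        clamp(5, 10, 0)            # error-handling path
        raised = False
    except ValueError:
        raised = True
    assert raised

test_clamp()
print("all unit tests passed")
```

Each assertion targets one of the unit-test considerations listed above, so a failure points directly at the broken path.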




FUNCTIONAL TESTING

Functional testing involves exercising the code with correct input values for
which the expected results are known. The system being developed is tested with all the
nominal input values, so that the expected results are received. The system is also tested
with the boundary values.

STRESS TESTING

Stress tests are designed to overload a system in various ways. The
system being developed is tested by attempting to sign on more than the maximum
number of allowed terminals, inputting mismatched data types, and processing more
than the allowed number of identifiers.

INTEGRATION TESTING

This is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with the interfaces. In this
testing, all the modules are integrated and the entire system is tested as a whole; the
possibility of errors occurring is rare, since each module has already been unit tested. If
any error does occur, it is found and rectified at this step before passing to the next.

    "          

User acceptance is the key factor in the development of a successful
system. The system under consideration is tested for acceptance by constantly keeping in
touch with the prospective users at the time of development, and changes are made as
and when required.


ADVANTAGES
Ô? Simple and easy to use.
Ô? Easy to manage due to the rigidity of the model - each phase has specific
deliverables and a review process.
Ô? Phases are processed and completed one at a time.
Ô? Works well for smaller projects where requirements are very well understood.

DISADVANTAGES
Ô? Adjusting scope during the life cycle can kill a project.
Ô? No working software is produced until late in the life cycle.
Ô? High amounts of risk and uncertainty.
Ô? Poor model for complex and object-oriented projects.
Ô? Poor model for long and ongoing projects.
Ô? Poor model where requirements are at moderate to high risk of changing.

























CONCLUSION

The RGB subdivision scheme has several advantages over both classical and
adaptive subdivision schemes, as well as over CLOD models: it supports fully dynamic
selective refinement while remaining compatible with the Loop subdivision scheme; it is
more adaptive than previously known schemes based on the one-to-four triangle split
pattern; it does not require hierarchical data structures; and selective refinement can be
implemented efficiently by plugging faces into the mesh, according to rules encoded in
lookup tables, thus avoiding cumbersome procedural updates.























FUTURE ENHANCEMENT

Our prototype, integrated in MeshLab, can already be used for interactive editing
of LOD. However, a more careful implementation of our data structures should provide a
much more efficient engine, suitable for tasks such as real-time, view-dependent
rendering, or integration in a solid modeler. In the future, these data structures can be
streamlined so that performance improves in the next version.

