3D Searching

Definition
From computer-aided design (CAD) drawings of complex engineering parts to digital representations of proteins and complex molecules, an increasing amount of 3D information is making its way onto the Web and into corporate databases. Users therefore need ways to store, index, and search this information. Typical Web-searching approaches, such as Google's, cannot do this; even for 2D images, they generally search only the textual parts of a file, noted Greg Notess, editor of the online Search Engine Showdown newsletter. However, researchers at universities such as Purdue and Princeton have begun developing search engines that can mine catalogs of 3D objects, such as airplane parts, by looking for physical, not textual, attributes. Users formulate a query by using a drawing application to sketch what they are looking for or by selecting a similar object from a catalog of images, and the search engine then finds the items they want. Without such tools, a part that cannot be found must be made again, wasting valuable time and money.

3D SEARCHING
Advances in computing power, combined with interactive modeling software that lets users create images as queries for searches, have made 3D-search technology possible. The methodology involves the following steps:
- Query formulation
- Search process
- Search results

QUERY FORMULATION
True 3D search systems offer two principal ways to formulate a query: users can select objects from a catalog of images based on product groupings, such as gears or sofas, or they can use a drawing program to create a picture of the object they are looking for. For example, Princeton's 3D search engine uses an application that lets users draw a 2D or 3D representation of the object they want to find. [Figure: the query interface of a 3D search system.]

SEARCH PROCESS
The 3D-search system uses algorithms to convert the selected or drawn query into a mathematical model that describes the features of the object being sought. This converts drawings and objects into a form that computers can work with. The search system then compares the mathematical description of the drawn or selected object to those of the 3D objects stored in a database, looking for similarities in the described features.

The key to the way computer programs look for 3D objects is the voxel (volume pixel). A voxel is a set of graphical data, such as position, color, and density, that defines the smallest cube-shaped building block of a 3D image. Computers can display 3D images only in two dimensions. To do this, 3D rendering software takes an object and slices it into 2D cross sections. The cross sections consist of pixels (picture elements), which are single points in a 2D image. To render the 3D image on a 2D screen, the computer determines how to display the 2D cross sections stacked on top of each other, using the applicable interpixel and interslice distances to position them properly. The computer interpolates data to fill in interslice gaps and create a solid image.
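The conversion of a model into something a computer can compare can be sketched briefly. The TypeScript fragment below is a deliberately minimal illustration, not the method used by the Purdue or Princeton engines: it voxelizes a normalized point set into an occupancy grid and scores two grids with a simple overlap (Jaccard) measure. All function and type names are invented for this sketch.

```typescript
// Minimal sketch: voxelize a point cloud and compare two voxel grids.
// Real 3D search engines use far richer shape descriptors; this only
// illustrates the idea of turning geometry into comparable numbers.

type Point3D = { x: number; y: number; z: number };

// Build an N x N x N occupancy grid: a voxel is "on" if any point falls in it.
// Assumes coordinates are already normalized into the unit cube [0, 1).
function voxelize(points: Point3D[], gridSize: number): boolean[] {
  const grid = new Array<boolean>(gridSize ** 3).fill(false);
  for (const p of points) {
    const ix = Math.min(gridSize - 1, Math.floor(p.x * gridSize));
    const iy = Math.min(gridSize - 1, Math.floor(p.y * gridSize));
    const iz = Math.min(gridSize - 1, Math.floor(p.z * gridSize));
    grid[ix + gridSize * (iy + gridSize * iz)] = true;
  }
  return grid;
}

// Jaccard similarity between two occupancy grids: shared voxels / combined voxels.
function similarity(a: boolean[], b: boolean[]): number {
  let inter = 0;
  let union = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i] && b[i]) inter++;
    if (a[i] || b[i]) union++;
  }
  return union === 0 ? 0 : inter / union;
}

// Usage: compare a sketched query against one catalog model.
const query = voxelize([{ x: 0.1, y: 0.2, z: 0.3 }, { x: 0.5, y: 0.5, z: 0.5 }], 16);
const model = voxelize([{ x: 0.12, y: 0.21, z: 0.3 }, { x: 0.5, y: 0.52, z: 0.5 }], 16);
console.log(`similarity = ${similarity(query, model).toFixed(2)}`);
```

In a real system the score would be computed against every object in the database and the best matches returned as the search result.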

Wireless LAN Security

INTRODUCTION
Wireless local area networks (WLANs) based on the Wi-Fi (wireless fidelity) standards are one of today's fastest-growing technologies in businesses, schools, and homes, for good reasons. They provide mobile access to the Internet and to enterprise networks, so users can remain connected away from their desks. These networks can be up and running quickly when there is no available wired Ethernet infrastructure, and they can be made to work with a minimum of effort, without relying on specialized corporate installers. Some of the business advantages of WLANs include:
- Mobile workers can be continuously connected to their crucial applications and data.
- New applications based on continuous mobile connectivity can be deployed.
- Intermittently mobile workers can be more productive if they have continuous access to email, instant messaging, and other applications.
- Impromptu interconnections among arbitrary numbers of participants become possible.

Despite these attractive benefits, however, most existing WLANs have not effectively addressed security-related issues.

THREATS TO WLAN ENVIRONMENTS
All wireless computer systems face security threats that can compromise their systems and services. Unlike in a wired network, an intruder does not need physical access in order to pose the following security threats:

Eavesdropping
This involves attacks against the confidentiality of the data being transmitted across the network. In a wireless network, eavesdropping is the most significant threat because an attacker can intercept the transmission over the air from a distance, away from the premises of the company.

Tampering
The attacker can modify the content of the intercepted packets from the wireless network, which results in a loss of data integrity.

Unauthorized access and spoofing
The attacker can gain access to privileged data and resources in the network by assuming the identity of a valid user. This kind of attack is known as spoofing. To counter it, proper authentication and access control mechanisms need to be put in place in the wireless network.

Denial of Service
In this attack, the intruder floods the network with either valid or invalid messages, affecting the availability of network resources. The attacker can also flood a receiving wireless station, forcing it to use up its valuable battery power.

Other security threats
The remaining threats come from weaknesses in network administration and from vulnerabilities in the wireless LAN standards themselves, for example the vulnerabilities of Wired Equivalent Privacy (WEP), which is supported in the IEEE 802.11 wireless LAN standard.
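To make the WEP weakness concrete, the toy sketch below illustrates keystream reuse, one of WEP's best-known flaws: each frame is encrypted by XOR-ing it with an RC4 keystream derived from a short initialization vector (IV) and the shared key, and because IVs repeat, keystreams get reused. The code uses a made-up stand-in keystream rather than real RC4, and every name and message is illustrative; the point is only the XOR algebra, which lets an eavesdropper who captures two frames encrypted with the same keystream recover the XOR of the two plaintexts without ever learning the key.

```typescript
// Toy demonstration of keystream reuse, the core of several WEP attacks.
// The "keystream" below is a stand-in for RC4 output, not a real cipher.

function xorBytes(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(Math.min(a.length, b.length));
  for (let i = 0; i < out.length; i++) out[i] = a[i] ^ b[i];
  return out;
}

const encoder = new TextEncoder();
const keystream = encoder.encode("PRETEND-RC4-KEYSTREAM-BYTES-----"); // reused!

const plaintext1 = encoder.encode("transfer 100 to account 42");
const plaintext2 = encoder.encode("transfer 999 to account 13");

// Both frames encrypted with the same keystream (same IV + key, in WEP terms).
const ciphertext1 = xorBytes(plaintext1, keystream);
const ciphertext2 = xorBytes(plaintext2, keystream);

// The eavesdropper XORs the two captured ciphertexts: the keystream cancels,
// leaving plaintext1 XOR plaintext2, which leaks structure of both messages.
const leaked = xorBytes(ciphertext1, ciphertext2);
const check = xorBytes(plaintext1, plaintext2);
console.log(leaked.every((byte, i) => byte === check[i])); // prints: true
```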

Biological Computers

INTRODUCTION
Biological computers have emerged from an interdisciplinary field that draws together molecular biology, chemistry, computer science, and mathematics. The highly predictable hybridization chemistry of DNA, the ability to completely control the length and content of oligonucleotides, and the wealth of enzymes available for modifying DNA make nucleic acids an attractive candidate for nanoscale computing applications.

A 'DNA computer' has been used for the first time to find the only correct answer from over a million possible solutions to a computational problem. Leonard Adleman of the University of Southern California in the US and colleagues used different strands of DNA to represent the 20 variables in their problem, which could be the most complex task ever solved without a conventional computer. The researchers believe that the complexity of the structure of biological molecules could allow DNA computers to outperform their electronic counterparts in the future.

Scientists have previously used DNA computers to crack computational problems with up to nine variables, which involves selecting the correct answer from 2^9, or 512, possible solutions. Now Adleman's team has shown that a similar technique can solve a problem with 20 variables, which has 2^20, or 1,048,576, possible solutions. Adleman and colleagues chose an 'exponential time' problem, in which each extra variable doubles the amount of computation needed. This is known as an NP-complete problem, and it is notoriously difficult to solve for a large number of variables. Other NP-complete problems include the 'travelling salesman' problem, in which a salesman has to find the shortest route between a number of cities, and the calculation of interactions between many atoms or molecules.

Adleman and co-workers expressed their problem as a formula of 24 'clauses', each of which specified a certain combination of 'true' and 'false' for three of the 20 variables. The team then assigned two short strands of specially encoded DNA to each of the 20 variables, representing 'true' and 'false' for each one. In the experiment, each of the 24 clauses is represented by a gel-filled glass cell. The strands of DNA corresponding to the variables, and to their 'true' or 'false' states, in each clause were then placed in the cells. Each of the 1,048,576 possible solutions was then represented by a much longer strand of specially encoded DNA, which Adleman's team added to the first cell. If a long strand had a 'subsequence' that complemented all three short strands, it bound to them; otherwise it passed through the cell. To move on to the second clause of the formula, a fresh set of long strands was sent into the second cell, which trapped any long strand with a 'subsequence' complementary to all three of its short strands. This process was repeated until a complete set of long strands had been added to all 24 cells, corresponding to the 24 clauses. The long strands captured in the cells were collected at the end of the experiment, and these represented the solution to the problem.

THE WORLD'S SMALLEST COMPUTER
The world's smallest computer (around a trillion can fit in a drop of water) might one day go on record again as the tiniest medical kit. Made entirely of biological molecules, this computer was successfully programmed to identify, in a test tube, changes in the balance of molecules in the body that indicate the presence of certain cancers, to diagnose the type of cancer, and to react by producing a drug molecule to fight the cancer cells.

DOCTOR IN A CELL
In previously produced biological computers, the input, output, and "software" are all composed of DNA, the material of genes, while DNA-manipulating enzymes are used as "hardware." The newest version's input apparatus is designed to assess concentrations of specific RNA molecules, which may be overproduced or underproduced, depending on the type of cancer. Using pre-programmed medical knowledge, the computer then makes its diagnosis based on the detected RNA levels. In response to a cancer diagnosis, the output unit of the computer can initiate the controlled release of a single-stranded DNA molecule that is known to interfere with the cancer cell's activities, causing it to self-destruct.
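Returning to Adleman's experiment: in software terms, the wet-lab protocol is a brute-force filter over all 2^20 candidate truth assignments, applied one clause at a time. The TypeScript sketch below simulates that filtering for a small, made-up 3-variable formula (the clauses are illustrative, not Adleman's actual 20-variable instance); each pass of the filter plays the role of one gel-filled cell that retains only the "long strands" satisfying its clause.

```typescript
// Software analogue of Adleman's clause-by-clause filtering.
// A literal is [variableIndex, requiredValue]; a clause is satisfied if
// at least one of its three literals matches the candidate assignment.
type Literal = [number, boolean];
type Clause = [Literal, Literal, Literal];

// Hypothetical 3-variable formula: (x0 or x1 or x2) and (!x0 or x1 or !x2) and (!x1 or x2 or x0)
const clauses: Clause[] = [
  [[0, true], [1, true], [2, true]],
  [[0, false], [1, true], [2, false]],
  [[1, false], [2, true], [0, true]],
];

const numVars = 3;

// Every candidate assignment plays the role of one "long strand" of DNA.
let candidates: boolean[][] = [];
for (let bits = 0; bits < 2 ** numVars; bits++) {
  candidates.push(Array.from({ length: numVars }, (_, i) => ((bits >> i) & 1) === 1));
}

// Each clause acts like one gel-filled cell: strands that do not satisfy it
// simply pass through and are discarded.
for (const clause of clauses) {
  candidates = candidates.filter(assignment =>
    clause.some(([variable, value]) => assignment[variable] === value)
  );
}

console.log(`surviving assignments: ${candidates.length}`);
candidates.forEach(a => console.log(a.map(v => (v ? "T" : "F")).join("")));
```

The assignments that survive all the clause filters correspond to the strands collected from the cells at the end of the real experiment.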

Linux Virtual Server

INTRODUCTION
With the explosive growth of the Internet and its increasingly important role in our daily lives, traffic on the Internet is increasing dramatically, more than doubling every year. As demand and traffic increase, more and more sites struggle to keep up, particularly during peak periods of activity. Downtime or even delays can be disastrous, driving customers and profits elsewhere. The solution? Redundancy, redundancy, and redundancy: use hardware and software to build highly available and highly scalable network services.

Started in 1998, the Linux Virtual Server (LVS) project combines multiple physical servers into one virtual server, eliminating single points of failure (SPOF). Built with off-the-shelf components, LVS is already in use at some of the highest-trafficked sites on the Web. As more and more companies move their mission-critical applications onto the Internet, the demand for always-on services is growing, and so is the need for highly available and highly scalable network services. Yet the requirements for always-on service are quite onerous:
- The service must scale: when the service workload increases, the system must scale up to meet the requirements.
- The service must always be on and available, despite transient partial hardware and software failures.
- The system must be cost-effective: the whole system must be economical to build and expand.
- Although the whole system may be big in physical size, it should be easy to manage.

Clusters of servers, interconnected by a fast network, are emerging as a viable architecture for building high-performance and highly available services. This type of loosely coupled architecture is more scalable, more cost-effective, and more reliable than a single-processor system or a tightly coupled multiprocessor system. However, there are challenges, including transparency and efficiency.

The Linux Virtual Server (LVS) is one solution that meets the requirements and challenges of providing an always-on service. In LVS, a cluster of Linux servers appears as a single (virtual) server on a single IP address. Client applications interact with the cluster as if it were a single, high-performance, highly available server. Inside the virtual server, LVS directs incoming network connections to the different servers according to scheduling algorithms. Scalability is achieved by transparently adding or removing nodes in the cluster. High availability is provided by detecting node or daemon failures and reconfiguring the system accordingly, on the fly.

LINUX VIRTUAL SERVER ARCHITECTURE
The three-tier architecture consists of:
- A load balancer, which serves as the front end of the whole cluster system. It distributes requests from clients among a set of servers, and it monitors the back-end servers and the other, backup load balancer.
- A set of servers running the actual network services, such as Web, email, FTP, and DNS.
- Shared storage, which provides a shared storage space for the servers, making it easy for them to hold the same content and provide consistent services.

The load balancer, servers, and shared storage are usually connected by a high-speed network, such as 100 Mbps Ethernet or Gigabit Ethernet, so that the internal network does not become a bottleneck as the cluster grows.
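The scheduling algorithms mentioned above are the heart of the load balancer. The real dispatching is done by IPVS inside the Linux kernel, which ships schedulers such as round-robin and weighted round-robin; the short TypeScript sketch below is only a user-space illustration of the weighted round-robin idea, with hypothetical server addresses and weights. A higher weight simply means the server appears more often in the scheduling cycle and therefore receives proportionally more connections.

```typescript
// Simplified sketch of the weighted round-robin idea used by LVS schedulers.
// The real IPVS code runs in the Linux kernel; this is only an illustration.

interface RealServer {
  address: string; // hypothetical back-end addresses
  weight: number;  // higher weight => proportionally more connections
}

class WeightedRoundRobin {
  private cycle: RealServer[] = [];
  private index = 0;

  constructor(servers: RealServer[]) {
    // Build one scheduling cycle in which each server appears `weight` times.
    for (const s of servers) {
      for (let i = 0; i < s.weight; i++) this.cycle.push(s);
    }
  }

  // Called for each incoming connection accepted on the virtual IP.
  next(): RealServer {
    const server = this.cycle[this.index];
    this.index = (this.index + 1) % this.cycle.length;
    return server;
  }
}

// Usage: three real servers behind one virtual server address.
const scheduler = new WeightedRoundRobin([
  { address: "192.168.0.11", weight: 3 },
  { address: "192.168.0.12", weight: 2 },
  { address: "192.168.0.13", weight: 1 },
]);

for (let i = 0; i < 6; i++) {
  console.log(`connection ${i} -> ${scheduler.next().address}`);
}
```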

Smart Client Application Development using .NET

INTRODUCTION
Organizations are seeking to extend their enterprises and provide knowledge workers with ever-greater mobility and access to information and applications. Powerful new computing and communications devices, along with wireless networks, are helping provide that mobility. This has sparked the creation of "smart clients": applications and devices that can take advantage of the power of local processing while retaining the flexibility of Web-based computing. Smart clients are computers, devices, or applications that can provide:
1. The best aspects of traditional desktop applications, including highly responsive software, sophisticated features for users, and great opportunities for developers to enhance existing applications or create new ones.
2. The best aspects of "thin clients," including a small form factor, economical use of computing resources such as processors and memory, ease of deployment, and easy manageability.
3. A natural, easily understood user interface (UI) that is high quality and designed for occasional connectivity with other systems.
4. Interoperability with many different types of devices.
5. The ability to consume Web services.

Organizations can start building and using smart client applications today with a rich array of Microsoft products and tools that eliminate barriers to developing and deploying smart clients. These tools include:
1. .NET Framework
2. .NET Compact Framework
3. Visual Studio .NET
4. Windows client operating systems
5. Windows Server 2003

RICH CLIENTS, THIN CLIENTS AND SMART CLIENTS

Rich Clients
Rich clients are the usual programs running locally on a PC. They take advantage of the local hardware resources and the features of the client operating system platform. They have the following advantages:
1. Use of local resources
2. A rich user interface
3. Offline capability
4. High productivity
5. Responsiveness and flexibility

Despite the impressive functionality of many of these applications, they have limitations. Many are stand-alone and operate on the client computer with little or no awareness of the environment in which they run. This environment includes the other computers and services on the network, as well as any other applications on the user's computer. Very often, integration between applications is limited to using the cut, copy, and paste features provided by Windows to transfer small amounts of data between applications. Rich clients have the following limitations:
1. Tough to deploy and update: because deployment does not happen over the network, the application has to be installed and updated separately on each system, for example from removable storage media.
2. "DLL Hell" (application fragility): when a new application is installed, it may replace a shared DLL with a newer version that is incompatible with an existing application, thereby breaking it.

Thin Clients
The Internet provides an alternative to the traditional rich client model that solves many of the problems associated with application deployment and maintenance. Thin-client, browser-based applications are deployed and updated on a central Web server; therefore, they remove the need to explicitly deploy and manage any part of the application on the client computer. Thin clients have the following advantages:
1. Easy to deploy and update: the application is reached through a URL and downloaded over the network as needed, and updates are applied centrally on the server.
2. Easy to manage: all the data is managed on a single server, with thin clients accessing it over the network, which simplifies data management and administration.

Despite the distributed functionality provided by thin clients, they also have some disadvantages:
1. Network dependency: the browser must have a network connection at all times. This means that mobile users have no access to applications if they are disconnected, so they must re-enter data when they return to the office.
2. Poor user experience: common application features such as drag-and-drop, undo-redo, and context-sensitive help may be unavailable, which can reduce the usability of the application. Because the vast majority of the application logic and state lives on the server, thin clients make frequent requests back to the server for data and processing. The browser must wait for a response before the user can continue, so the application is typically much less responsive than an equivalent rich client application. This problem is exacerbated in low-bandwidth or high-latency conditions, and the resulting performance problems can lead to a significant reduction in application usability and user efficiency.
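Smart clients aim to combine the two models: local responsiveness and offline capability from rich clients, and central data and deployment from thin clients. The toy TypeScript sketch below illustrates the "occasionally connected" pattern that is central to smart clients; the queue class, the transport callback, and the sample payloads are all hypothetical stand-ins and are not part of any Microsoft framework.

```typescript
// Toy sketch of the "occasionally connected" smart client pattern:
// work is captured locally while offline and synchronized when a
// connection becomes available. All names here are hypothetical.

interface PendingChange {
  id: number;
  payload: string;
}

class OfflineQueue {
  private pending: PendingChange[] = [];
  private nextId = 1;

  // Called by the UI layer; always succeeds immediately, keeping the app responsive.
  record(payload: string): void {
    this.pending.push({ id: this.nextId++, payload });
  }

  // Called whenever connectivity is detected; pushes queued work to the server.
  async synchronize(send: (change: PendingChange) => Promise<void>): Promise<void> {
    while (this.pending.length > 0) {
      const change = this.pending[0];
      await send(change);   // hypothetical web service call
      this.pending.shift(); // remove only after the server has accepted it
    }
  }
}

// Usage with a stand-in transport; a real client might POST to a web service.
const queue = new OfflineQueue();
queue.record("update customer 42");
queue.record("create order 7");

queue.synchronize(async change => {
  console.log(`sending change ${change.id}: ${change.payload}`);
}).then(() => console.log("all local changes synchronized"));
```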

Virtual Campus

INTRODUCTION
Today, the Internet and web technologies have become commonplace and are decreasing in cost so rapidly that technologies such as application servers and enterprise portals are fast becoming the products and commodities of tomorrow. Web-enabling information access and interaction of every kind, from enterprise portals and education to e-governance and healthcare, is becoming so pervasive that almost everyone in the world is affected by it or contributing something to it. However, much of it does not create significant value unless the architecture and the services built on it are suited to the needs of the organizational and institutional frameworks of the relevant domains. While the underlying IT is being perfected, there is inadequate work on perfecting the large and complex distributed information systems, the associated information sciences, and the much-needed institutional changes in every organization required to benefit effectively from these developments. The Kerala Education Grid is a project specifically addressed to the higher education sector of the state, intended to put in place effective IT infrastructure and methodologies and thereby improve the quality and standards of learning imparted in all the colleges.

Under the aegis of its Department of Higher Education, the State Government of Kerala has taken a major initiative in establishing an Education Grid across all colleges, universities, and premier institutions of research and development (R&D). The project is called an Education Grid for two important reasons. Firstly, it aims to equip the colleges with the necessary IT infrastructure and to network them among themselves and with premier R&D institutions. The second and more important reason is that the online assisted programmes planned to be put in place over the Education Grid enable the knowledge base, and the associated benefits of experience and expertise, to flow from where they are available, in the better institutions and organizations, to where they are needed, among the teachers and students in the numerous colleges. The questions of how exactly this project helps, and how it is to be made a part of regular college or university education, are examined next.

The articulated vision of this project is to provide "quality education to all students, irrespective of which college they are studying in or where it is located". Having set this objective, one needs to probe in some depth the key factors that really ail our college education today. Formal education is conducted in a mechanical cycle of syllabus, classrooms, lectures, practicals, and examinations, with little enthusiastic involvement from teachers or education administrators. Students attend classes and take examinations with the aim of getting some marks or a grade and a degree. In this process, the primary aims of education, to impart scholarship, learning, a yearning for learning, and the capacity for self-learning, hardly get the attention they deserve in the formal education system. In this context one may quote Alvin Toffler: "The illiterates of tomorrow are not those who cannot read and write, but those who cannot learn, unlearn and relearn." This raises the key question of what exactly are the attributes of knowledge, scholarship, and learning that we wish to impart through our educational institutions.

The key to India becoming a successful knowledge society lies in the rejuvenation of our formal higher education system, and the Education Grid approach appears to be the most practical, cost-effective, and perhaps the most enlightened and realistic way to achieve this. With the completion of the State Information Infrastructure, and with the implementation of projects such as the Kerala Education Grid and the Education Server, schools and colleges can start offering quality e-resources to students, irrespective of the geographic location of students and teachers. Piggybacking on virtual campuses, school and college education in Kerala is poised to fashion citizens for the knowledge society of tomorrow.

AJAX - A New Approach to Web Applications

Introduction

Web application design has evolved in a number of ways since the time of its birth. To make web pages more interactive, various techniques have been devised both at the browser level and at the server level. The introduction of the XMLHttpRequest object in Internet Explorer 5 by Microsoft paved the way for interacting with the server asynchronously using JavaScript. AJAX, shorthand for Asynchronous JavaScript And XML, is a technique that uses this XMLHttpRequest object of the browser together with the Document Object Model and DHTML to build highly interactive web applications in which the entire web page need not be reloaded on every user action; instead, only parts of the page are updated dynamically by exchanging information with the server. This approach has enhanced the interactivity and speed of web applications to a great extent. Interactive applications such as Google Maps, Orkut, and instant messengers make extensive use of this technique. This report presents an overview of the basic concepts of AJAX and how it is used in making web applications.

Creating Web applications has been considered one of the most exciting areas of current interaction design, but Web interaction designers can't help feeling a little envious of their colleagues who create desktop software. Desktop applications have a richness and responsiveness that has seemed out of reach on the Web. The same simplicity that enabled the Web's rapid proliferation also creates a gap between the experiences that can be provided through web applications and the experiences users can get from a desktop application. In the earliest days of the Web, designers chafed against the constraints of the medium. The entire interaction model of the Web was rooted in its heritage as a hypertext system: click the link, request the document, wait for the server to respond. Designers could not change this basic call-response foundation of the web to improve web applications because of the various caveats, restrictions, and compatibility issues associated with it. But the urge to enhance the responsiveness of web applications made designers take up the task of making the Web work as well as it could within the hypertext interaction model, developing new conventions for Web interaction that allowed their applications to reach audiences who would never have attempted to use desktop applications designed for the same tasks.

The designers came up with a technique called AJAX, shorthand for Asynchronous JavaScript And XML, which is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This is meant to increase the web page's interactivity, speed, and usability. AJAX is not a single new technology of its own but a bundle of several technologies, each flourishing in its own right, coming together in powerful new ways.

What is AJAX?
AJAX is a set of technologies combined so that a web application can draw on the strengths of all of them at once. AJAX incorporates:
1. standards-based presentation using XHTML and CSS;
2. dynamic display and interaction using the Document Object Model;
3. data interchange and manipulation using XML and XSLT;
4. asynchronous data retrieval using XMLHttpRequest;
5. and JavaScript binding everything together.
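A minimal sketch shows how these pieces fit together in practice. The example below is written as browser-style TypeScript; the URL /api/messages/latest and the element id message-panel are placeholders invented for this sketch. It issues an asynchronous XMLHttpRequest and, when the response arrives, updates a single element through the DOM, so the page never reloads.

```typescript
// Minimal AJAX round trip: asynchronous request, then a partial page update.
// "/api/messages/latest" and "message-panel" are placeholders for this sketch.

function refreshMessagePanel(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/messages/latest", true); // true => asynchronous

  xhr.onreadystatechange = () => {
    // readyState 4 means the response is complete.
    if (xhr.readyState === 4 && xhr.status === 200) {
      const panel = document.getElementById("message-panel");
      if (panel) {
        // Only this element changes; the rest of the page is untouched.
        panel.textContent = xhr.responseText;
      }
    }
  };

  xhr.send();
}

// Poll every few seconds so the page stays current without full reloads.
setInterval(refreshMessagePanel, 5000);
```

In a larger application the response would more typically be XML or JSON that is parsed and merged into the DOM, but the asynchronous request-then-update cycle is the same.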

IP Spoofing

Definition
Criminals have long employed the tactic of masking their true identity, from disguises to aliases to caller-ID blocking. It should come as no surprise, then, that criminals who conduct their nefarious activities on networks and computers employ such techniques as well. IP spoofing is one of the most common forms of online camouflage. In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine, by "spoofing" the IP address of that machine. In the subsequent pages of this report, we will examine the concepts of IP spoofing: why it is possible, how it works, what it is used for, and how to defend against it.

Brief History of IP Spoofing
The concept of IP spoofing was initially discussed in academic circles in the 1980s. In an April 1989 article entitled "Security Problems in the TCP/IP Protocol Suite", S. M. Bellovin of AT&T Bell Labs was among the first to identify IP spoofing as a real risk to computer networks. Bellovin describes how Robert Morris, creator of the now infamous Internet Worm, figured out how TCP created sequence numbers and forged a TCP packet sequence. This TCP packet included the destination address of his "victim", and using an IP spoofing attack Morris was able to obtain root access to his targeted system without a user ID or password. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators.

A common misconception is that "IP spoofing" can be used to hide your IP address while surfing the Internet, chatting online, sending e-mail, and so forth. This is generally not true: forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection. However, IP spoofing is an integral part of many network attacks that do not need to see responses (blind spoofing).

TCP/IP PROTOCOL SUITE

IP spoofing exploits flaws in the TCP/IP protocol suite. To understand completely how these attacks can take place, one must examine the structure of the TCP/IP protocol suite. A basic understanding of the headers and network exchanges involved is crucial to the process.

Internet Protocol (IP)
The Internet Protocol (or IP, as it is generally known) is the network layer of the Internet. IP provides a connectionless service. The job of IP is to route a packet and send it to the packet's destination. IP provides no guarantees whatsoever for the packets it tries to deliver. IP packets are usually termed datagrams. The datagrams go through a series of routers before they reach their destination. At each node that a datagram passes through, the node determines the next hop for the datagram and routes it to that next hop. Since the network is dynamic, it is possible for two datagrams from the same source to take different paths to the destination, and since the network has variable delays, it is not guaranteed that the datagrams will be received in sequence. IP attempts only a best-effort delivery; it does not take care of lost packets, which is left to the higher-layer protocols. No state is maintained between two datagrams; in other words, IP is connectionless.
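Spoofing is possible because the source address is simply a field that the sending host writes into the IPv4 header; routers forward packets using only the destination address and never verify the source. The sketch below builds a minimal 20-byte IPv4 header in memory with an arbitrary source address. The field offsets follow RFC 791, but the helper functions are hypothetical, the checksum is left unset, and nothing is actually transmitted.

```typescript
// Build a minimal 20-byte IPv4 header in memory with a forged source address.
// Illustration only: no packet is sent, and the header checksum is left at 0.

function ipToBytes(ip: string): number[] {
  return ip.split(".").map(octet => parseInt(octet, 10));
}

function buildIPv4Header(spoofedSource: string, destination: string): Uint8Array {
  const header = new Uint8Array(20);
  const view = new DataView(header.buffer);

  view.setUint8(0, 0x45); // version 4, header length of 5 32-bit words
  view.setUint16(2, 40);  // total length (this header plus a 20-byte TCP header)
  view.setUint8(8, 64);   // time to live
  view.setUint8(9, 6);    // protocol = TCP
  // Bytes 10-11: header checksum, left as 0 in this sketch.

  // Bytes 12-15: source address; nothing on the sending side verifies this value.
  ipToBytes(spoofedSource).forEach((b, i) => view.setUint8(12 + i, b));
  // Bytes 16-19: destination address; the only address routers use to forward.
  ipToBytes(destination).forEach((b, i) => view.setUint8(16 + i, b));

  return header;
}

// Usage: the addresses below are examples, not real targets.
const header = buildIPv4Header("10.0.0.99", "192.0.2.7");
console.log(Array.from(header, b => b.toString(16).padStart(2, "0")).join(" "));
```

Because replies go to the forged source, an attacker using this trick cannot complete a normal connection, which is exactly why spoofing appears mostly in blind attacks such as the sequence-prediction cracks described above.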
