
CHAPTER 1

INTRODUCTION

Cloud computing plays an important role in our daily life because it provides efficient, reliable and scalable resources for data storage and computational activities at a very low price. However, the cloud's direct access to the sensitive information of its users threatens their privacy. Searchable encryption is a cryptographic primitive which is useful for designing secure data storage and cloud computing services. There are two variants of searchable encryption: symmetric searchable encryption (SSE) and public key encryption with keyword search (PEKS). Song et al. proposed the first symmetric searchable encryption scheme.
Their work was pursued by many other researchers. In the symmetric variant, the encryption key and the key used for generating the search token are the same. Therefore, only the user who stores the searchable ciphertext in the cloud can generate a valid search token. Subsequent works have also proposed efficient search schemes on encrypted data.
The computational cost of the encryption algorithm is linear in the number of intended attributes, and the number of required pairings in the search algorithm is likewise linear in the number of involved attributes. One of the salient features of the proposed temporary keyword search scheme is that the number of required pairings in the search algorithm is independent of the number of time units considered in the search token by the data user.
The scheme satisfies selective security against chosen keyword attack: this requirement implies that, in the selective security model, the adversary cannot infer any information about a keyword from its ciphertext without being given a matching search trapdoor. This property is formalized via a selectively chosen keyword attack game. It also satisfies keyword secrecy: this security requirement implies that the adversary A cannot determine the keyword from the related ciphertext and valid search tokens with a probability higher than that of a random keyword guess. This property is formalized via the keyword secrecy game. Based on the received search token, the cloud server can find the encrypted documents which contain the intended keyword and were generated in the specified time interval. Then, it returns the search result to the data users whose attributes satisfy the access control policy enforced by the data owner.

A trivial solution to address this privacy problem is encrypting the data before outsourcing it to the cloud. However, searching on the encrypted data then becomes very difficult.

Public key encryption with keyword search (PEKS) is a cryptographic primitive which was first introduced by Boneh et al. [1] to facilitate searching on encrypted data. In PEKS, each data owner who knows the public key of the intended data user generates a searchable ciphertext by means of that public key, and outsources it to the cloud. The data user then extracts a search token related to an arbitrary keyword by using his/her secret key, and issues it to the cloud. The cloud service provider (CSP) runs the search operation by using the received search token on behalf of the data user to find the results relevant to the intended keyword.

CHAPTER 2

2. SYSTEM ANALYSIS AND DESIGN

2.1 EXISTING SYSTEM

Cloud computing provides data storage and sharing, but searching on the encrypted data is very difficult. Public key encryption with keyword search (PEKS) is a cryptographic primitive that addresses this: in PEKS, each data owner who knows the public key of the intended data user generates a searchable ciphertext by means of that public key, and outsources it to the cloud. The complexity of the encryption algorithm is linear in the number of involved attributes. Performance evaluation shows the scheme's practicality.

DRAWBACKS IN EXISTING SYSTEM


 The search tokens can be used to extract all the ciphertexts which were produced at any time and contain the corresponding keyword.
 This may lead to some information leakage.
 It provides less security.

2.2 PROPOSED SYSTEM


In the key-policy attribute-based temporary keyword search (KP-ABTKS) scheme, the authorized users can generate search tokens and send them to the cloud for running the search operation. The data owner selects an attribute set and runs the encryption algorithm with regard to it. If a data user's attribute set satisfies the access tree of the data owner, then he/she can generate a valid search token. The cloud applies the generated search token to find the corresponding ciphertexts which have been encrypted in a time interval specified by the data user.

ADVANTAGES IN PROPOSED SYSTEM
 It uses a Trusted Third Party (TTP) to create a temporary key for securing and retrieving the data.
 The data is more secure.

2.3 ALGORITHM USED

The proposed KP-ABTKS scheme consists of five algorithms:

 Setup,
 KeyGen,
 Enc,
 TokenGen,
 Search.

These algorithms are described as follows:

• (msk,pp) ← Setup(1λ): This algorithm is run by the TTP. It takes the security
parameter λ as input and generates the master secret key msk and the public
parameter pp.

• sk ← KeyGen(msk,Tr): This algorithm generates a secret key sk for the user with the access tree Tr. The TTP determines the access tree Tr and runs this algorithm.

• cph ← Enc(ω,ti,Atts,pp): This algorithm generates a searchable ciphertext related to the keyword ω and the encryption time ti, according to an attribute set Atts which is determined by the data owner.

• st ← TokenGen(sk,ω,[ts,te]): The data user runs this algorithm to generate, according to its secret key sk, the search token st for searching the ciphertexts which are encrypted in the time interval [ts,te] and contain the keyword ω.

• {0,1} := Search(cph,st): For each stored ciphertext cph and the received search token st, which is associated with a specific keyword ω and attribute set Atts, this algorithm outputs 1 if the ciphertext matches the token (i.e., it contains the intended keyword and was encrypted in the specified time interval), and 0 otherwise.
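To make the data flow between these algorithms concrete, the following is a minimal C# sketch of the scheme's interface. All type names here (MasterSecretKey, PublicParams, and so on) are hypothetical placeholders; the actual pairing-based computations inside the five algorithms are defined by the KP-ABTKS construction itself.

using System;
using System.Collections.Generic;

// Hypothetical container types; a real implementation stores group elements here.
public sealed class MasterSecretKey { }
public sealed class PublicParams { }
public sealed class SecretKey { }
public sealed class AccessTree { }
public sealed class SearchableCiphertext { }
public sealed class SearchToken { }

// Interface mirroring the five KP-ABTKS algorithms described above.
public interface IKpAbtks
{
    // (msk, pp) <- Setup(1^lambda), run by the TTP.
    (MasterSecretKey msk, PublicParams pp) Setup(int securityParameter);

    // sk <- KeyGen(msk, Tr), run by the TTP for a user's access tree.
    SecretKey KeyGen(MasterSecretKey msk, AccessTree tr);

    // cph <- Enc(w, ti, Atts, pp), run by the data owner.
    SearchableCiphertext Enc(string keyword, DateTime time,
                             ISet<string> attributes, PublicParams pp);

    // st <- TokenGen(sk, w, [ts, te]), run by the data user.
    SearchToken TokenGen(SecretKey sk, string keyword,
                         DateTime start, DateTime end);

    // {0,1} := Search(cph, st), run by the cloud server.
    bool Search(SearchableCiphertext cph, SearchToken st);
}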

2.4 SYSTEM DESIGN


2.4.1 SYSTEM ARCHITECTURE

Fig 2.4.1: System Architecture

2.4.2 USE CASE DIAGRAM

A use case diagram is a type of behavioural diagram created from a use-case analysis. Its purpose is to present an overview of the functionality provided by the system in terms of actors, their goals, and any dependencies between those use cases.

Fig 2.4.2: Use Case Diagram

2.4.3 CLASS DIAGRAM

A class diagram in the UML is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes. Private visibility hides information from anything outside the class partition. Public visibility allows all other classes to view the marked information. Protected visibility allows child classes to access information they inherited from a parent class.

Fig 2.4.3: Class Diagram

2.4.4 SEQUENCE DIAGRAM

A sequence diagram in UML is a kind of interaction diagram that shows how the processes operate with one another and in what order. It is a construct of a message sequence chart. Sequence diagrams are sometimes called event-trace diagrams, event scenarios, or timing diagrams. The diagram below shows the sequence flow of how the process occurs in this project.

Fig 2.4.4: Sequence Diagram

2.4.5 STATE DIAGRAM:

A state diagram is a type of diagram used in computer science and related fields to describe the behaviour of systems. State diagrams require that the system described is composed of a finite number of states; sometimes this is indeed the case, while at other times it is a reasonable abstraction. There are many forms of state diagrams, which differ slightly.

Fig 2.4.5: State Diagram

2.4.6 COMPONENT DIAGRAM

Components are wired together by using an assembly connector to connect the required interface of one component with the provided interface of another component. This illustrates the service consumer - service provider relationship between the two components.

An assembly connector is "a connector between two components that defines that one component provides the services that another component requires. An assembly connector is a connector that is defined from a required interface or port to a provided interface or port".

Fig 2.4.6: Component Diagram

CHAPTER 3

3. SYSTEM REQUIREMENTS AND DESCRIPTION

3.1 HARDWARE REQUIREMENT


 Processor : Pentium Dual Core 2.00 GHz

 Hard disk : 128 GB

 Mouse : Logitech

 RAM : 2 GB (minimum)

 Keyboard : 110 keys enhanced

3.2 SOFTWARE REQUIREMENTS

 Operating system : Windows 7, 8 and 10

 Front End : Microsoft Visual Studio 2017

 Coding Language : C#

 Backend : MSSQL Server 2012

3.3 SOFTWARE DESCRIPTION

3.3.1 DOTNET

INTRODUCTION TO DOTNET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic and JScript. The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK


The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run.

The most important features are:

 Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth describing:
MANAGED CODE

Managed code is code that targets .NET and contains certain extra information - "metadata" - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
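Because that metadata travels with the assembly, it can be inspected at run time through reflection. A minimal sketch (the Sample type is hypothetical; the System.Reflection API is standard):

using System;
using System.Reflection;

public class Sample
{
    public int Add(int a, int b) => a + b;
}

public static class MetadataDemo
{
    public static void Main()
    {
        // Read the metadata the CLR keeps for the Sample type.
        Type t = typeof(Sample);
        Console.WriteLine($"Type: {t.FullName}");
        foreach (MethodInfo m in t.GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine($"Method: {m.Name}, returns {m.ReturnType.Name}");
        }
    }
}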

MANAGED DATA

With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn't get garbage collected but instead is looked after by unmanaged code.

COMMON TYPE SYSTEM (CTS)

The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.

COMMON LANGUAGE SPECIFICATION


The CLR provides built-in support for language interoperability. To ensure
that you can develop managed code that can be fully used by developers using
any programming language, a set of language features and rules for using them
called the Common Language Specification (CLS) has been defined.
Components that follow these rules and expose only CLS features are considered
CLS-compliant.

THE CLASS LIBRARY


.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
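That conversion is known as boxing. A small illustrative sketch:

using System;

public static class BoxingDemo
{
    public static void Main()
    {
        int i = 42;           // value type
        object boxed = i;     // boxing: the int is copied into a heap object
        int j = (int)boxed;   // unboxing: the value is copied back out
        Console.WriteLine(j); // prints 42
    }
}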

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft's old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.

Visual Basic .NET has been updated to include many new and
improved language features that make it a powerful object-oriented programming
language. These features include inheritance, interfaces, and overloading, among
others. Visual Basic also now supports structured exception handling, custom
attributes and also supports multi-threading.

Visual Basic .NET is also CLS compliant, which means that any
CLS-compliant language can use the classes, objects, and components you create
in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid Application Development”. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-
language developers into the world of XML Web Services and dramatically
improves the interoperability of Java-language programs with existing software
written in a variety of other programming languages.

ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.

Fig 3.1: .NET Framework

3.3.2 FEATURES OF C#

1. C# is a simple, modern, object-oriented language derived from C++ and Java.
2. It aims to combine the high productivity of Visual Basic and the raw power of C++.
3. Visual Studio supports VB, VC++, C++, VBScript and JScript. All of these languages provide access to the Microsoft .NET platform.
4. .NET includes a common execution engine and a rich class library.
5. Microsoft's JVM equivalent is the Common Language Runtime (CLR).
6. The CLR accommodates more than one language, such as C#, VB.NET, JScript, ASP.NET and C++.
7. Source code ---> Intermediate Language code (IL) ---> (JIT compiler) Native code.
8. The classes and data types are common to all of the .NET languages.
9. We may develop console applications, Windows applications, and Web applications using C#.
10. In C#, Microsoft has taken care of C++ problems such as memory management, pointers, etc.
11. It supports garbage collection and automatic memory management.

TYPE SAFETY
1. In C# we cannot perform unsafe casts, such as converting a double to a Boolean.
2. Value types (primitive types) are initialized to zeros and reference types (objects and classes) are initialized to null by the compiler automatically.
3. Arrays are zero-base indexed and are bounds-checked.
4. Overflow of types can be checked, as the sketch below illustrates.
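A minimal sketch of two of these run-time checks, using the standard checked keyword and a bounds-checked array access:

using System;

public static class TypeSafetyDemo
{
    public static void Main()
    {
        // Overflow checking: throws OverflowException instead of wrapping.
        try
        {
            int max = int.MaxValue;
            Console.WriteLine(checked(max + 1));
        }
        catch (OverflowException)
        {
            Console.WriteLine("Overflow detected");
        }

        // Bounds checking: index 3 of a 3-element array throws at run time.
        int[] values = { 1, 2, 3 };
        try
        {
            Console.WriteLine(values[3]);
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("Index out of range");
        }
    }
}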

INTEROPERABILITY
1. C# includes native support for COM and Windows-based applications.
2. It allows restricted use of native pointers.
3. Users no longer have to explicitly implement IUnknown and other COM interfaces; those features are built in.
4. C# allows users to use pointers in unsafe code blocks to manipulate your old code.
Objectives of .NET Framework

1) Platform Independent

2) Language Independent

3) Language Interoperability

4) Security

5) Database Connectivity

6) Globalization of Application

1) Platform Independent: As all .NET dll or exe files are executable in any operating system with the help of the CLR (common language runtime), .NET is called platform independent.

2) Language Independent: As .NET application logic can be developed in any .NET framework compatible language, .NET is called language independent.

Specification in ASP.NET

It provides a set of rules to be followed while integrating with the language.

3) Language Interoperability: The code written in one language can be used from an application developed using another language.

4) Security: .NET applications attain a high level of security.

5) Database Connectivity: A new database connectivity model is provided to connect to databases.

6) Globalization of Application: Designing the applications to support multiple languages and cultures.

COMPONENTS OF .NET FRAMEWORK

The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfil the following objectives:

 To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
 To provide a code-execution environment that minimizes software
deployment and versioning conflicts.
 To provide a code-execution environment that promotes safe execution of
code, including code created by an unknown or semi-trusted third party.
 To provide a code-execution environment that eliminates the performance
problems of scripted or interpreted environments.
 To make the developer experience consistent across widely varying types
of applications, such as Windows-based applications and Web-based
applications.
 To build all communication on industry standards to ensure that code based
on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language runtime
is the foundation of the .NET Framework. You can think of the runtime as an
agent that manages code at execution time, providing core services such as
memory management, thread management, and remoting, while also enforcing
strict type safety and other forms of code accuracy that promote security and
robustness. In fact, the concept of code management is a fundamental principle
of the runtime. Code that targets the runtime is known as managed code, while
code that does not target the runtime is known as unmanaged code. The class
library, the other main component of the .NET Framework, is a comprehensive,
object-oriented collection of reusable types that you can use to develop
applications ranging from traditional command-line or graphical user interface
(GUI) applications to applications based on the latest innovations provided by
ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with
significant improvements that only managed code can offer, such as semi-trusted
execution and isolated file storage.

COMMON LANGUAGE RUNTIME


The common language runtime manages memory, thread execution,
code execution, code safety verification, compilation, and other system services.
These features are intrinsic to the managed code that runs on the common
language runtime. With regards to security, managed components are awarded
varying degrees of trust, depending on a number of factors that include their
origin (such as the Internet, enterprise network, or local computer). This means
that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application. The runtime enforces code access
security. For example, users can trust that an executable embedded in a Web page
can play an animation on screen or sing a song, but cannot access their personal
data, file system, or network. The security features of the runtime thus enable
legitimate Internet-deployed software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict type-
and-code-verification infrastructure called the common type system (CTS). The
CTS ensures that all managed code is self-describing. The various Microsoft and
third-party language compilers generate managed code that conforms to the CTS.
This means that managed code can consume other managed types and instances,
while strictly enforcing type fidelity and type safety.
BASE CLASS LIBRARY
The .NET Framework class library is a collection of reusable types that
tightly integrate with the common language runtime. The class library is object
oriented, providing types from which your own managed code can derive
functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET
Framework. In addition, third-party components can integrate seamlessly with
classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of
interfaces that you can use to develop your own collection classes. Your
collection classes will blend seamlessly with the classes in the .NET
Framework. As you would expect from an object-oriented class library, the
.NET Framework types enable you to accomplish a range of common
programming tasks, including tasks such as string management, data collection,
database connectivity, and file access. In addition to these common tasks, the
class library includes types that support a variety of specialized development
scenarios. For example, you can use the .NET Framework to develop the
following types of applications and services:

 Console applications.
 Windows GUI applications (Windows Forms).
 ASP.NET applications.
 XML Web services.
 Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable
types that vastly simplify Windows GUI development. If you write an ASP.NET
Web Form application, you can use the Web Forms classes.

FEATURES OF THE COMMON LANGUAGE RUNTIME


The Common Language Runtime is the heart of the .NET framework. It actually manages the code during execution. The code that runs under the CLR is called “managed code”. The code that is executed under the .NET runtime gets benefits like cross-language inheritance, cross-language exception handling, enhanced security, versioning and deployment support, a simplified model for component interaction, and debugging and profiling services.

FEATURES PROVIDED BY CLR

Automatic memory management: - The CLR provides the garbage collection feature for managing the lifetime of objects. This relieves the programmer from memory management tasks.

Standard Type System: - The CLR implements a formal specification called the Common Type System (CTS). The CTS is an important part of the rules that ensure that objects written in different languages can interact with each other.

Language interoperability: - It is the ability of an application to interact with another application written in a different programming language. Language interoperability helps maximize code reuse. The CLR provides support for language interoperability by specifying and enforcing the CTS and by providing metadata.

Platform Independence: - The compiler compiles code into an intermediate language, which is CPU-independent. This means that the code can be executed on any platform that supports the .NET CLR.

Security Management: - In the .NET platform, security is achieved through the Code Access Security (CAS) model. In this model, the CLR enforces restrictions on managed code through objects called “permissions”. The CLR allows the code to perform only those tasks for which it has permissions. In other words, the CAS model specifies what the code can access instead of who can access resources.

Type Safety: - This feature ensures that objects are always accessed in compatible ways. Therefore, the CLR will prohibit code from assigning a 10-byte value to an object that occupies 8 bytes.

BENEFITS OF CLR

 Performance improvement.
 The ability to easily use components developed in other languages.
 Extensible types provided by the library.
 New language features such as inheritance, interfaces, etc.
 Complete object-oriented design.
 Very strong type safety.
 A good blend of Visual Basic simplicity and C++ power.
 Syntax and keywords similar to C and C++.
 Use of delegates rather than function pointers for increased type safety and security, as sketched below.
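A brief sketch of the last point: a delegate is a type-safe reference to a method, so the compiler rejects any method whose signature does not match.

using System;

public static class DelegateDemo
{
    // A delegate type: only methods taking a string and returning void fit.
    private delegate void Logger(string message);

    private static void ToConsole(string message) =>
        Console.WriteLine($"log: {message}");

    public static void Main()
    {
        Logger log = ToConsole; // type-checked at compile time
        log("hello");           // invoked like a function pointer, but safe
    }
}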

ASP.NET OVERVIEW

ASP.Net is a web development platform which provides a programming model, a comprehensive software infrastructure and various services required to build robust web applications for PCs, as well as mobile devices.

ASP.Net works on top of the HTTP protocol and uses the HTTP commands and policies to set up a browser-to-server two-way communication and cooperation.

ASP.Net is a part of the Microsoft .Net platform. ASP.Net applications are compiled codes, written using the extensible and reusable components or objects present in the .Net framework. These codes can use the entire hierarchy of classes in the .Net framework.

The ASP.Net application code can be written in any of the following languages:

 C#

 Visual Basic .Net

 Jscript

 J#

ASP.Net is used to produce interactive, data-driven web applications over the internet. It consists of a large number of controls like text boxes, buttons and labels for assembling, configuring and manipulating code to create HTML pages.

ASP.Net Web Forms Model:

ASP.Net web forms extend the event-driven model of interaction to web applications. The browser submits a web form to the web server and the server returns a full markup page or HTML page in response.

All client-side user activities are forwarded to the server for stateful
processing. The server processes the output of the client actions and triggers the
reactions.

Now, HTTP is a stateless protocol. The ASP.Net framework helps in storing the information regarding the state of the application, which consists of:

 Page state

 Session state

The page state is the state of the client, i.e., the content of various input fields in the web form. The session state is the collective information obtained from various pages the user visited and worked with, i.e., the overall session state. To clarify the concept, let us take up the example of a shopping cart as follows.

A user adds items to a shopping cart. Items are selected from one page, say the items page, and the total collected items and price are shown on a different page, say the cart page. HTTP alone cannot keep track of all the information coming from various pages. The ASP.Net session state and server-side infrastructure keep track of the information collected globally over a session.
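As a rough illustration, the following code-behind fragment keeps the cart in session state across page requests (the page class, control handler and item id are hypothetical; Session is the standard ASP.NET API):

using System;
using System.Collections.Generic;
using System.Web.UI;

public partial class ItemsPage : Page
{
    // Hypothetical handler for an "Add to cart" button click.
    protected void AddToCart_Click(object sender, EventArgs e)
    {
        // Session state survives across pages for the same user session.
        var cart = Session["Cart"] as List<string> ?? new List<string>();
        cart.Add("item-42"); // hypothetical item id
        Session["Cart"] = cart;
    }
}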

The ASP.Net runtime carries the page state to and from the server across page
requests while generating the ASP.Net runtime codes and incorporates the state
of the server-side components in hidden fields.

This way the server becomes aware of the overall application state and operates
in a two-tiered connected way.

ASP.Net Component Model:

The ASP.Net component model provides various building blocks of ASP.Net pages. Basically, it is an object model, which describes:

 Server-side counterparts of almost all HTML elements or tags, like <form> and <input>.

 Server controls, which help in developing complex user interfaces, for example the Calendar control or the Gridview control.

ASP.Net is a technology, which works on the .Net framework that contains all
web-related functionalities. The .Net framework is made of an object-oriented
hierarchy. An ASP.Net web application is made of pages. When a user requests
an ASP.Net page, the IIS delegates the processing of the page to the ASP.Net
runtime system.

The ASP.Net runtime transforms the .aspx page into an instance of a class, which
inherits from the base class Page of the .Net framework. Therefore, each ASP.Net
page is an object and all its components i.e., the server-side controls are also
objects.

ASP.NET ARCHITECTURE

ASP.NET is based on the fundamental architecture of the .NET Framework. Visual Studio provides a uniform way to combine the various features of this architecture. The architecture is explained from bottom to top in the following discussion.

 At the bottom of the architecture is the Common Language Runtime. The .NET Framework common language runtime resides on top of the operating system services. The common language runtime loads and executes code that targets the runtime. This code is therefore called managed code. The runtime gives you, for example, the ability for cross-language integration.
 .NET Framework provides a rich set of class libraries. These include base
classes, like networking and input/output classes, a data class library for
data access, and classes for use by programming tools, such as debugging services. All of them are brought together by the Services Framework,
which sits on top of the common language runtime.
 ADO.NET is Microsoft’s ActiveX Data Object (ADO) model for the .NET
Framework. ADO.NET is not simply the migration of the popular ADO
model to the managed environment but a completely new paradigm for
data access and manipulation.
 ADO.NET is intended specifically for developing web applications. This
is evident from its two major design principles:
 Disconnected Datasets—In ADO.NET, almost all data
manipulation is done outside the context of an open database
connection.
 Effortless Data Exchange with XML—Datasets can converse
in the universal data format of the Web, namely XML.

 The 4th layer of the framework consists of the Windows application model and, in parallel, the Web application model. The Web application model, presented here as ASP.NET, includes Web Forms and Web Services. ASP.NET comes with built-in Web Forms controls, which are
responsible for generating the user interface. They mirror typical HTML
widgets like text boxes or buttons. If these controls do not fit your needs,
you are free to create your own user controls. Web Services brings you a
model to bind different applications over the Internet. This model is based
on existing infrastructure and applications and is therefore standard-based,
simple, and adaptable. Web Services are software solutions delivered via
Internet to any device. Today, that means Web browsers on computers, for
the most part, but the device-agnostic design of .NET will eliminate this
limitation.
 One of the obvious themes of .NET is unification and interoperability
between various programming languages. In order to achieve this, certain rules must be laid down and all the languages must follow these rules. In other
words we cannot have languages running around creating their own
extensions and their own fancy new data types. CLS is the collection of the
rules and constraints that every language (that seeks to achieve .NET
compatibility) must follow.
 The CLR and the .NET Framework in general, however, are designed in such a way that code written in one language can seamlessly be used by another language. Hence ASP.NET can be programmed in any of the .NET compatible languages, whether it is VB.NET, C#, Managed C++ or JScript.NET.

ASP.NET PAGE LIFE CYCLE

The ASP.Net life cycle specifies how:

 ASP.Net processes pages to produce dynamic output

 The application and its pages are instantiated and processed

 ASP.Net compiles the pages dynamically

The ASP.Net life cycle could be divided into two groups:

 Application Life Cycle

 Page Life Cycle

ASP.Net Application Life Cycle:

The application life cycle has the following stages:

 A user makes a request for accessing an application resource, a page. The browser sends this request to the web server.

 A unified pipeline receives the first request and the following events take
place:

 An object of the ApplicationManager class is created.

 An object of the HostingEnvironment class is created to provide information regarding the resources.

 Top level items in the application are compiled.

 Response objects are created. The application objects HttpContext, HttpRequest and HttpResponse are created and initialized.

 An instance of the HttpApplication object is created and assigned to the request. The request is processed by the HttpApplication class. Different events are raised by this class for processing the request, as the sketch below illustrates.
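A hedged sketch of hooking two of those application-level events in a Global.asax code-behind (the class body and header value are illustrative only; the event-naming convention is standard ASP.NET):

using System;
using System.Web;

public class Global : HttpApplication
{
    // Runs once, when the first request starts the application.
    protected void Application_Start(object sender, EventArgs e)
    {
        // e.g., build caches or register routes here
    }

    // Runs for every request that enters the pipeline.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Context.Response.AppendHeader("X-Handled-By", "Global.asax");
    }
}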

ASP.Net Page Life Cycle:

When a page is requested, it is loaded into the server memory, processed and sent to the browser. Then it is unloaded from the memory. At each of these steps, methods and events are available which can be overridden according to the needs of the application. In other words, you can write your own code to override the default code.

The Page class creates a hierarchical tree of all the controls on the page.
All the components on the page, except the directives are part of this control tree.
You can see the control tree by adding trace="true" to the Page directive. We
will cover page directives and tracing under 'directives' and 'error handling'.

The page life cycle phases are:

 Initialization

 Instantiation of the controls on the page

 Restoration and maintenance of the state

 Execution of the event handler codes

 Page rendering

Understanding the page cycle helps in writing code that makes some specific thing happen at any stage of the page life cycle. It also helps in writing custom controls and initializing them at the right time, populating their properties with view-state data and running control behaviour code.

Following are the different stages of an ASP.Net page:

 Page request: When ASP.Net gets a page request, it decides whether to parse and compile the page or serve a cached version of the page; accordingly, the response is sent.
 Starting of page life cycle: At this stage, the Request and Response
objects are set. If the request is an old request or post back, the IsPostBack
property of the page is set to true. The UICulture property of the page is
also set.
 Page initialization: At this stage, the controls on the page are assigned unique IDs by setting the UniqueID property, and themes are applied. For a post back request, the post back data is loaded and the control properties are restored to the view-state values.
 Page load: At this stage, control properties are set using the view state and
control state values.
 Validation: The Validate method of the validation control is called and if it runs successfully, the IsValid property of the page is set to true.
 Post back event handling: If the request is a post back (old request), the
related event handler is called.

 Page rendering: At this stage, view state for the page and all controls are
saved. The page calls the Render method for each control and the output of
rendering is written to the OutputStream class of the Page's Response
property.
 Unload: The rendered page is sent to the client and page properties, such
as Response and Request are unloaded and all cleanup done.
ASP.Net Page Life Cycle Events:

 At each stage of the page life cycle, the page raises some events, which could be coded. An event handler is basically a function or subroutine, bound to the event using declarative attributes such as Onclick, or the Handles keyword.

 Following are the page life cycle events:

 PreInit: PreInit is the first event in page life cycle. It checks the
IsPostBack property and determines whether the page is a post back. It sets
the themes and master pages, creates dynamic controls and gets and sets
profile property values. This event can be handled by overloading the
OnPreInit method or creating a Page_PreInit handler.
 Init: The Init event initializes the control properties and the control tree is built. This event can be handled by overloading the OnInit method or creating a Page_Init handler.
 InitComplete: The InitComplete event allows tracking of view state. All the controls turn on view-state tracking.
 LoadViewState: LoadViewState event allows loading view state
information into the controls.
 LoadPostData: During this phase, the contents of all the input fields
defined with the <form> tag are processed.

 PreLoad: PreLoad occurs before the post back data is loaded in the controls. This event can be handled by overloading the OnPreLoad method or creating a Page_PreLoad handler.
 Load: The Load event is raised for the page first and then recursively for all child controls. The controls in the control tree are created. This event can be handled by overloading the OnLoad method or creating a Page_Load handler.
 LoadComplete: The loading process is completed, control event handlers are run and page validation takes place. This event can be handled by overloading the OnLoadComplete method or creating a Page_LoadComplete handler.
 PreRender: The PreRender event occurs just before the output is
rendered. By handling this event, pages and controls can perform any
updates before the output is rendered.
 PreRenderComplete: As the PreRender event is recursively fired for all
child controls, this event ensures the completion of the pre-rendering
phase.
 SaveStateComplete: The state of the controls on the page is saved. Personalization, control state and view state information is saved. The HTML markup is generated. This stage can be handled by overriding the Render method.
 Unload: The Unload phase is the last phase of the page life cycle. It raises the Unload event for all controls recursively and lastly for the page itself. Final cleanup is done and all resources and references, such as database connections, are freed. This event can be handled by modifying the OnUnload method or creating a Page_Unload handler.
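To tie these events to code, here is a hedged code-behind sketch (the page class and theme name are hypothetical; OnPreInit, Page_Load and IsPostBack are standard Page members):

using System;
using System.Web.UI;

public partial class DemoPage : Page
{
    // PreInit: the place to set themes or master pages and create
    // dynamic controls.
    protected override void OnPreInit(EventArgs e)
    {
        base.OnPreInit(e);
        Theme = "Default"; // hypothetical theme name
    }

    // Load: runs after view state is restored; skip one-time work on
    // post back requests.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // one-time initialization for the first request
        }
    }
}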

An ASP.Net page is made of a number of server controls along with HTML controls, text and images. Sensitive data from the page and the states of the different controls on the page are stored in hidden fields, and this forms the context of that page request.

The ASP.Net runtime controls all association between a page instance and its state. An ASP.Net page is an object of the Page class or inherits from it.

All the controls on the pages are also objects of the related control classes inherited from the parent Control class. When a page is run, an instance of the page object is created along with all of its content controls.

An ASP.Net page is also a server side file saved with the .aspx extension. It is
modular in nature and can be divided into the following core sections:

 Page directives

 Code Section

 Page Layout

 Page directives:

The page directives set up the environments for the page to run. The @Page
directive defines page-specific attributes used by the ASP.Net page parser and
compiler. Page directives specify how the page should be processed, and which
assumptions are to be taken about the page.

It allows importing namespaces, loading assemblies and registering new controls with custom tag names and namespace prefixes. We will discuss all of these concepts in due time.

Code Section:

The code section provides the handlers for the page and control events, along with the other functions required. We mentioned that ASP.Net follows an object model. These objects raise events when something happens on the user interface, for example when a user clicks a button or moves the cursor. How should these events be handled? That code is provided in the event handlers of the controls, which are nothing but functions bound to the controls.

The code section or the code behind file provides all these event handler routines,
and other functions used by the developer. The page code could be precompiled
and deployed in the form of a binary assembly.

Page Layout:

 The page layout provides the interface of the page. It contains the server controls, text, inline JavaScript and HTML tags.

SQL SERVER 2012:

SQL Server 2005 will soon be reaching its three-year mark, which in terms of
software life-cycle translates into fairly advanced maturity. While this is still far
from retirement age, the name of its successor, SQL Server 2012, suggests that it
might be time for you to start looking into what the new generation has to offer.
The release of SQL Server 2012, originally introduced as Yukon, has already
been postponed, but its current Beta 2 implementation (with several incremental
Community Technical Previews expected before Beta 3 becomes available early
next year) brings promise of a timely RTM stage (planned for summer next year).
In this series of articles, we will look into functional highlights of the new
incarnation of the Microsoft database management system, focusing on those that
are likely to remain unchanged in the final product.

Improvements to the database engine, the details of which are not published by
Microsoft, and the corresponding changes to the main infrastructure components
are reflected by a substantial number of new features as well as enhancements to existing ones. The most relevant ones can be grouped into several categories, such
as high availability and scalability, security, data management, administration
and maintenance, and development.

The demand for high availability is becoming increasingly common and is no longer limited to major corporate and governmental clients. This results not only
from a growing level of customer expectations, but also from the new political
climate associated with more stringent legislative and regulatory requirements, in
which disaster recovery and business continuity are more relevant than ever.
However, businesses are also, at the same time, extremely interested in keeping
their costs to a minimum. Microsoft tries to address these expectations by
implementing scalability enhancements, which ensure that SQL Server can
perform equally well in environments of any size, and by the introduction of
several versions of SQL Server 2012 (geared towards more specialized needs)
such as:

 SQL Server Standard Edition - offering the most diverse set of features and
intended for the majority of clients.
 SQL Server 2012 Express Edition - serving as the replacement for Microsoft Data Engine (MSDE) and available for download. Like its predecessor, it was designed with developers in mind; however, unlike the previous version, it also includes a Web-based management interface.
 SQL Server 2012 Mobile Edition - as a successor to SQL Server 2012 Windows CE Edition, it is intended for Windows mobile-based devices, such as Tablet PCs, Pocket PCs, and smartphones.

FEATURES OF SQL SERVER

Microsoft SQL Server 2012

The following is a list of the new features provided in SQL Server 2012:

 Database mirroring
 Database snapshots
 CLR integration
 Service Broker
 Database Mail
 User-defined functions
 Indexed views
 Distributed partitioned views
 INSTEAD OF and AFTER triggers
 New data types
 Cascading RI constraints
 Multiple SQL Server instances
 XML support
 Log shipping

Database mirroring
Database mirroring is a new high-availability feature in SQL Server 2012. It's similar to server clustering in that failover is achieved by the use of a standby server; the difference is that the failover is at the database level rather than the server level. The primary database continuously sends transaction logs to the
backup database on a separate SQL Server instance. A third SQL Server instance
is then used as a witness database to monitor the interaction between the primary
and the mirror databases.

Database snapshots
A database snapshot is essentially an instant read-only copy of a database,
and it is a great candidate for any type of reporting solution for your company. In
addition to being a great reporting tool, you can revert control from your primary
database to your snapshot database in the event of an error. The only data loss
would be from the point of creation of the database snapshot to the event of
failure.
CLR integration
With SQL Server 2012, you now have the ability to create custom .NET objects within the database engine. For example, stored procedures, triggers, and functions can now be created using familiar .NET languages such as VB and C#. Exposing this functionality gives you tools that you never had access to before, such as regular expressions.
Service Broker
This feature gives you the ability to create asynchronous, message-based
applications in the database entirely through TSQL. The database engine
guarantees message delivery, message order consistency, and handles message
grouping. In addition, Service Broker gives you the ability to send messages between different SQL Server instances. Service Broker is also used in several other features in SQL Server 2012. For example, you can define Event Notifications in the database to send a message to a Queue in the database when someone attempts to alter a table structure, or if there is a string of login failures.
Database Mail
Database Mail, the eventual successor to SQL Mail, is a greatly enhanced e-mail solution available in the database engine. With Database Mail, there is no longer a dependency on Microsoft Outlook or MAPI e-mail clients. Database Mail uses standard SMTP to send e-mail messages. These messages may contain query results and attachments (which can be governed by the DBA), and the solution is fully cluster aware. In addition, the e-mail process runs outside of the database engine
space, which means that messages can continue to be queued even when the
database engine has stopped.

User-Defined Functions

SQL Server has always provided the ability to store and execute SQL code
routines via stored procedures. In addition, SQL Server has always supplied a
number of built-in functions. Functions can be used almost anywhere an
expression can be specified in a query. This was one of the shortcomings of stored procedures—they couldn't be used inline in queries in select lists, where clauses,
and so on. Perhaps you want to write a routine to calculate the last business day
of the month. With a stored procedure, you have to exec the procedure, passing
in the current month as a parameter and returning the value into an output
variable, and then use the variable in your queries. If only you could write your
own function that you could use directly in the query just like a system function.
In SQL Server 2012, you can.

SQL Server 2012 introduces the long-awaited support for user-defined functions. User-defined functions can take zero or more input parameters and return a single value—either a scalar value like the system-defined functions, or a table result. Table-valued functions can be used anywhere table or view expressions can be used in queries, and they can perform more complex logic than is allowed in a view.
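As a hedged illustration from the application side, such a scalar UDF can then be called inline from C# through ADO.NET (the dbo.LastBusinessDay function and the connection string are hypothetical; SqlCommand and ExecuteScalar are standard ADO.NET):

using System;
using System.Data.SqlClient;

public static class UdfDemo
{
    public static void Main()
    {
        const string connectionString =
            "Server=.;Database=Demo;Integrated Security=true"; // hypothetical

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            // The UDF is used inline, just like a built-in function.
            "SELECT dbo.LastBusinessDay(@month)", connection))
        {
            command.Parameters.AddWithValue("@month", new DateTime(2012, 3, 1));
            connection.Open();
            var lastDay = (DateTime)command.ExecuteScalar();
            Console.WriteLine(lastDay.ToShortDateString());
        }
    }
}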

Indexed Views

Views are often used to simplify complex queries, and they can contain joins and
aggregate functions. However, in the past, queries against views were resolved to
queries against the underlying base tables, and any aggregates were recalculated each time you ran a query against the view. In SQL Server 2012 Enterprise or
Developer Edition, you can define indexes on views to improve query
performance against the view. When creating an index on a view, the result set of
the view is stored and indexed in the database. Existing applications can take
advantage of the performance improvements without needing to be modified.

Indexed views can improve performance for the following types of queries:

 Joins and aggregations that process many rows
 Join and aggregation operations that are performed frequently within many
queries
 Decision support queries that rely on summarized, aggregated data that is
infrequently updated

Distributed Partitioned Views

SQL Server 7.0 provided the ability to create partitioned views using the
UNION ALL statement in a view definition. It was limited, however, in that all
the tables had to reside within the same SQL Server where the view was defined.
SQL Server 2012 expands the ability to create partitioned views by allowing you
to horizontally partition tables across multiple SQL Servers. The feature helps
you scale out one database server to multiple database servers, while making the
data appear as if it comes from a single table on a single SQL Server. In addition,
partitioned views are now able to be updated.

INSTEAD OF and AFTER Triggers

In versions of SQL Server prior to 7.0, a table could not have more than one
trigger defined for INSERT, UPDATE, and DELETE. These triggers only fired
after the data modification took place. SQL Server 7.0 introduced the ability to define multiple AFTER triggers for the same operation on a table. SQL Server
2012 extends this capability by providing the ability to define which AFTER
trigger fires first and which fires last.

SQL Server 2012 also introduces the ability to define INSTEAD OF triggers. INSTEAD OF triggers can be specified on both tables and views.
(AFTER triggers can still only be specified on tables.) If an INSTEAD OF trigger
is defined on a table or view, the trigger will be executed in place of the data
modification action for which it is defined. The data modification is not executed
unless the SQL code to perform it is included in the trigger definition.

New Data types

SQL Server 2012 introduces three new data types. Two of these can be used as
datatypes for local variables, stored procedure parameters and return values, user-
defined function parameters and return values, or table columns:

 bigint—An 8-byte integer that can store values from –2^63 (–9223372036854775808) through 2^63–1 (9223372036854775807).
 sql_variant—A variable-sized column that can store values of various SQL Server-supported data types, with the exception of text, ntext, timestamp, and sql_variant.

The third new datatype, the table datatype, can be used only as a local variable
datatype within functions, stored procedures, and SQL batches. The table
datatype cannot be passed as a parameter to functions or stored procedures, nor
can it be used as a column datatype. A variable defined with the table datatype
can be used to store a result set for later processing. A table variable can be used
in queries anywhere a table can be specified.

Text in Row Data

In previous versions of SQL Server, text and image data was always stored
on a separate page chain from where the actual data row resided. The data row
contained only a pointer to the text or image page chain, regardless of the size of
the text or image data. SQL Server 2012 provides a new text in row table option
that allows small text and image data values to be placed directly in the data row,
instead of requiring a separate data page. This can reduce the amount of space
required to store small text and image data values, as well as reduce the amount
of I/O required to retrieve rows containing small text and image data values.

Cascading RI Constraints

In previous versions of SQL Server, referential integrity (RI) constraints were restrictive only. If an insert, update, or delete operation violated referential
integrity, it was aborted with an error message. SQL Server 2012 provides the
ability to specify the action to take when a column referenced by a foreign key
constraint is updated or deleted. You can still abort the update or delete if related
foreign key records exist by specifying the NO ACTION option, or you can
specify the new CASCADE option, which will cascade the update or delete
operation to the related foreign key records.

Multiple SQL Server Instances

Previous versions of SQL Server supported the running of only a single instance of SQL Server at a time on a computer. Running multiple instances or
multiple versions of SQL Server required switching back and forth between the
different instances, requiring changes in the Windows registry. (The SQL Server
Switch provided with 7.0 to switch between 7.0 and 6.5 performed the registry
changes for you.)

SQL Server 2012 provides support for running multiple instances of SQL
Server on the same system. This allows you to simultaneously run one instance
of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2012.
Each SQL Server instance runs independently of the others and has its own set of
system and user databases, security configuration, and so on. Applications can
connect to the different instances in the same way they connect to different SQL
Servers on different machines.

XML Support

Extensible Markup Language (XML) has become a standard in Web-related programming to describe the contents of a set of data and how the data should be output or displayed on a Web page. XML, like HTML, is derived from the Standard Generalized Markup Language (SGML). When linking a Web
application to SQL Server, a translation needs to take place from the result set
returned from SQL Server to a format that can be understood and displayed by a
Web application. Previously, this translation needed to be done in a client
application.

SQL Server 2012 provides native support for XML. This new feature provides
the ability to do the following:

 Return query result sets directly in XML format.


 Retrieve data from an XML document as if it were a SQL Server table.
 Access SQL Server through a URL using HTTP. Through Internet
Information Services (IIS), you can define a virtual root that gives you
HTTP access to the data and XML functionality of SQL Server 2012.

Log Shipping

The Enterprise Edition of SQL Server 2012 now supports log shipping, which
you can use to copy and load transaction log backups from one database to one
or more databases on a constant basis. This allows you to have a primary
read/write database with one or more read-only copies of the database that are
kept synchronized by restoring the logs from the primary database. The
destination database can be used as a warm standby for the primary database, for
which you can switch users over in the event of a primary database failure.
Additionally, log shipping provides a way to offload read-only query processing
from the primary database to the destination database.

This capability was available in previous versions of SQL Server, but it required the DBA to manually set up the process and schedule the jobs to copy
and restore the log backups. SQL Server 2012 officially supports log shipping
and has made it easier to set up via the Database Maintenance Plan Wizard. This
greatly simplifies the process by automatically generating the jobs and
configuring the databases to support log shipping.

DDL triggers
Data definition language (DDL) triggers can be used in SQL Server 2012 to
implement custom database and server auditing solutions, for example for
Sarbanes-Oxley compliance. DDL triggers are defined at the server or database
level and fire when DDL statements occur. This gives you the ability to audit
when new tables, stored procedures, or logins are created.
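
A minimal database-level DDL trigger for auditing table creation might look like the following sketch (dbo.DdlAuditLog is a hypothetical audit table you would create first):

-- Fires whenever a CREATE TABLE statement runs in this database.
CREATE TRIGGER trg_AuditCreateTable
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    -- EVENTDATA() returns an XML description of the DDL event.
    INSERT INTO dbo.DdlAuditLog (EventTime, EventXml)
    VALUES (GETDATE(), EVENTDATA());
END;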

Ranking functions
SQL Server 2012 provides you with the ability to rank result sets returned from
the database engine. This allows you to customize the manner in which result sets
are returned, such as creating customized paging functions for Web site data.
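
For example, the ROW_NUMBER ranking function can implement such paging (the Products table is a hypothetical example):

-- Number the rows and return one "page", e.g. rows 21-30.
SELECT ProductID, ProductName, RowNum
FROM (
    SELECT ProductID, ProductName,
           ROW_NUMBER() OVER (ORDER BY ProductName) AS RowNum
    FROM dbo.Products
) AS Numbered
WHERE RowNum BETWEEN 21 AND 30;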
Row versioning-based isolation levels
This new database engine feature improves database read concurrency by
reducing the number of locks used in your database. There are two versions
of this feature (both of which must be enabled at the database level):
Read Committed Isolation Using Row Versioning is used at the individual
statement level, and guarantees that the data is consistent for the duration of the
statement.
Snapshot Isolation is used at the transaction level, and guarantees that the data
is consistent for the duration of the transaction.
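
Both variants are switched on per database, roughly as follows (SalesDB is a placeholder name):

-- Statement-level: readers see a consistent snapshot for each statement
-- instead of taking shared locks.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Transaction-level: sessions that request SNAPSHOT isolation see a
-- consistent snapshot for the whole transaction.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;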

CHAPTER 4
4. LITERATURE SURVEY

TITLE: Public key encryption with keyword search.


AUTHOR: D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano.
YEAR: 2004
Public key encryption with keyword search (PEKS), an indispensable part
of searchable encryption, is stock-in-trade for both protecting data and providing
operability of encrypted data. So far, most PEKS schemes have been established
on Identity-Based Cryptography (IBC), which inherently suffers from the key
escrow problem. This problem severely restricts the promotion of IBC-based
Public Key Infrastructure, including the PEKS component. Certificateless Public
Key Cryptography (CLPKC) is an efficient way to remove this problem. CLPKC
is introduced into PEKS, and a general model of Certificateless PEKS (CLPEKS)
is formalized. In addition, a practical CLPEKS scheme is constructed with
security and efficiency analyses. The proposal is secure-channel free, and
semantically secure against adaptive chosen keyword attack and keyword
guessing attack. To illustrate its superiority, extensive experiments are conducted
on the Enron Email dataset, which is well known in the information retrieval
field. Compared with existing constructions, CLPEKS improves the efficiency in
theory and removes the key escrow problem.

TITLE: Anonymous hierarchical identity-based encryption.
AUTHOR: X. Boyen and B. Waters.
YEAR: 2006

In this paper, we present an identity-based cryptosystem that features
fully anonymous ciphertexts and hierarchical key delegation. We give a proof
of security in the standard model, based on the mild Decision Linear complexity
assumption in bilinear groups. The system is efficient and practical, with small
ciphertexts of size linear in the depth of the hierarchy. Applications include
search on encrypted data, fully private communication, etc. Our results resolve
two open problems pertaining to anonymous identity-based encryption, our
scheme being the first to offer provable anonymity in the standard model, in
addition to being the first to realize fully anonymous HIBE at all levels in the
hierarchy.

TITLE: Efficient public key encryption with revocable keyword search.

AUTHOR: Y. Yu, J. Ni, H. Yang, Y. Mu, and W. Susilo.

YEAR: 2015

Public key encryption with keyword search is a novel cryptographic
primitive enabling one to search on encrypted data directly. In the known
schemes, once it obtains a trapdoor, the server can search the associated data
without any restrictions. However, in reality, it is sometimes essential to prevent
the server from searching the data all the time, because the server is not fully
trusted. In this paper, we propose the notion of public key encryption with
revocable keyword search to address this issue. We also develop a concrete
construction by dividing the whole lifetime of the system into distinct time
periods to achieve our goals. The proposed scheme achieves indistinguishability
of ciphertexts against an adaptive chosen keywords attack under the
co-decisional bilinear Diffie-Hellman assumption in our security model.
Compared with two similar schemes, ours offers much better performance in
terms of computational cost.

TITLE: A secure and dynamic multi-keyword ranked search scheme over
encrypted cloud data
AUTHOR: Z. Xia, X. Wang, X. Sun, and Q. Wang.
YEAR: 2016
With the increasing popularity of cloud computing, more and more data
owners are motivated to outsource their data to cloud servers for great
convenience and reduced cost in data management. However, sensitive data
should be encrypted before outsourcing for privacy requirements, which
obsoletes data utilization like keyword-based document retrieval. In this paper,
we present a secure multi-keyword ranked search scheme over encrypted cloud
data, which simultaneously supports dynamic update operations like deletion and
insertion of documents. Specifically, the vector space model and the widely used
TF x IDF model are combined in the index construction and query generation.
We construct a special tree-based index structure and propose a “Greedy Depth-
first Search” algorithm to provide efficient multi-keyword ranked search. The
secure kNN algorithm is utilized to encrypt the index and query vectors, and
meanwhile ensure accurate relevance score calculation between encrypted index
and query vectors. In order to resist statistical attacks, phantom terms are added
to the index vector to blind search results. Due to the use of our special tree-
based index structure, the proposed scheme can achieve sub-linear search time
and deal with the deletion and insertion of documents flexibly. Extensive
experiments are conducted to demonstrate the efficiency of the proposed scheme.

TITLE: Towards privacy preserving storage and retrieval in multiple clouds.
AUTHOR: J. Li, D. Lin, A. C. Squicciarini, J. Li, and C. Jia.
YEAR: 2017
Cloud computing is growing exponentially, and there are now hundreds
of cloud service providers (CSPs) of various sizes. While cloud consumers may
enjoy cheaper data storage and computation in this multi-cloud environment,
they also face more complicated reliability issues and privacy preservation
problems for their outsourced data. Though searchable encryption allows users
to encrypt their stored data while preserving some search capabilities, few efforts
have sought to consider the reliability of the searchable encrypted data
outsourced to the clouds. In this paper, we propose a privacy-preserving Storage
and REtrieval (STRE) mechanism that not only ensures security and privacy but
also provides reliability guarantees for the outsourced searchable encrypted data.
The STRE mechanism enables cloud users to distribute and search their
encrypted data across multiple independent clouds managed by different CSPs,
and is robust even when a certain number of CSPs crash. Besides reliability,
STRE also offers the benefit of a partially hidden search pattern. We evaluate the
STRE mechanism on Amazon EC2 using a real-world dataset, and the results
demonstrate both the effectiveness and efficiency of our approach.

TITLE: Protecting your right: Attribute-based keyword search with fine-grained
owner-enforced search authorization in the cloud.
AUTHOR: W. Sun, S. Yu, W. Lou, Y. T. Hou, and H. Li.
YEAR: 2016
Search over encrypted data is a critically important enabling technique
in cloud computing, where encryption-before-outsourcing is a fundamental
solution to protecting user data privacy in the untrusted cloud server
environment. Many secure search schemes have focused on the single-
contributor scenario, where the outsourced dataset or the secure searchable index
of the dataset is encrypted and managed by a single owner, typically based on
symmetric cryptography. In this paper, we focus on a different yet more
challenging scenario, where the outsourced dataset can be contributed by
multiple owners and searched by multiple users, i.e. the multi-user multi-
contributor case. Inspired by attribute-based encryption (ABE), we present the
first attribute-based keyword search scheme with efficient user revocation
(ABKS-UR) that enables scalable fine-grained (i.e. file-level) search
authorization. Our scheme allows multiple owners to encrypt and outsource their
data to the cloud server independently. Users can generate their own search
capabilities without relying on an always-online trusted authority. Fine-grained
search authorization is also implemented by the owner-enforced access policy on
the index of each file. Further, by incorporating proxy re-encryption and lazy
re-encryption techniques, we are able to delegate the heavy system update
workload during user revocation to the semi-trusted cloud server. We formalize
the security definition and prove the proposed ABKS-UR scheme selectively
secure against chosen keyword attack.

CHAPTER 5
5. MODULES DESCRIPTION

5.1 MODULES USED
 Data Owner
 Data User
 Trusted Third Party
 Cloud Service Provider
5.1.1 Data Owner:
 Registration
 Login
 Uploading files
 Using secret key

The data owner uploads files to the cloud using a secret key, so each file is stored
in encrypted form.

Fig 5.1.1 Data Owner

5.1.2 Data User
 Registration
 Login
 Getting key from TTP
 Search token
The data user requests a file from the data owner and obtains a key from the
trusted third party to access the file from the cloud.

Fig 5.1.2 Data User

5.1.3 TRUSTED THIRD PARTY

It is a fully trusted entity that receives each user's access tree and generates
the secret key corresponding to the attribute set presented in that access tree.
The TTP then sends back the users' credentials through a secure and
authenticated channel.

 Generating key for Data Owner
 Generating key for Data User
 Viewing Data Owner details
 Viewing Data User details

The trusted third party generates keys for the data owner and the cloud
user to access files.

Fig 5.1.3 Trust Third Party


5.1.4 CLOUD
The data owner outsources their files to the cloud in encrypted form.

 Viewing key request details
 Providing files to users

Fig 5.1.4. Cloud

CHAPTER 6
6. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of
trying to discover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies,
assemblies and/or a finished product. It is the process of exercising software with
the intent of ensuring that the software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types
of tests; each test type addresses a specific testing requirement.

6.1 TYPES OF TESTING


6.1.1 UNIT TESTING
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application; it is done after the
completion of an individual unit before integration. This is structural testing
that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path
of a business process performs accurately to the documented specifications and
contains clearly defined inputs and expected results.
6.1.2 INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects.

The task of the integration test is to check that components or software
applications, e.g. components in a software system or, one step up, software
applications at the company level, interact without error.

6.1.3 FUNCTIONAL TESTING


Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
6.1.4 SYSTEM TESTS
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration-oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-
driven process links and integration points.
6.1.5 ACCEPTANCE TESTING
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.
Acceptance testing for Data Synchronization
 The Acknowledgements will be received by the Sender Node after the
Packets are received by the Destination Node
 The Route add operation is done only when there is a Route request in n

CHAPTER 7
7. SCREEN SHOTS

USER ADMIN:

Fig 7.1 User Admin

HOME PAGE:

Fig 7.2 Home Page

DATA OWNER:

Fig 7.3 Data Owner

UPLOAD FILE:

Fig 7.4 Upload File

TEMPORARY KEY:

Fig 7.5 Temporary Key

ENCRYPT FILES:

Fig 7.6 Encrypt Files

ARBITRARY KEY:

Fig 7.7 Arbitrary Key

OUTPUT:

Fig 7.8 Output

FILE DOCUMENT:

Fig 7.9 File Document

OUTPUT:

Fig 7.10 Output

CHAPTER 8

8. CONCLUSION AND FUTURE ENHANCEMENT

8.1 CONCLUSION

Securing cloud storage is an important problem in cloud computing. We
addressed this issue and introduced the notion of key-policy attribute-based
temporary keyword search (KPABTKS). According to this notion, each data user
can generate a search token which is valid only for a limited time interval. We
proposed the first concrete construction for this new cryptographic primitive
based on bilinear maps. We formally showed that our scheme is provably secure
in the random oracle model. The complexity of the encryption algorithm of our
proposal is linear with respect to the number of involved attributes. In addition,
the number of required pairings in the search algorithm is independent of the
number of intended time units specified in the search token, and it is linear with
respect to the number of attributes. Performance evaluation of our scheme in
terms of both computational cost and execution time shows the practical aspects
of the proposed scheme.

8.2 FUTURE ENHANCEMENT

We can assign a time interval to each encryption key. Performance evaluation
shows our scheme's practicality, and the complexity of the encryption algorithm
is linear with respect to the number of the involved attributes.

CHAPTER 9

9. APPENDIX: CODING

CODING:
HOME PAGE:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace RolebaseAuthorization
{
    public class Program
    {
        // Application entry point: build and run the ASP.NET Core web host.
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

KEY GENERATION:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace RolebaseAuthorization.Models
{
    // Stores a generated key value together with the member it belongs to.
    public class KeyGeneration
    {
        public int Id { get; set; }
        public string KeyGenerationValue { get; set; }
        public string ApplicationMemberId { get; set; }
        public ApplicationUser ApplicationUser { get; set; }
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace RolebaseAuthorization.Models
{
    // Registration details for a subscriber (data user).
    public class Subscribers
    {
        public int Id { get; set; }
        public string Email { get; set; }
        public string Password { get; set; }
        public string Address { get; set; }
        public string PhoneNumber { get; set; }
    }
}
ADMIN CONTROLLER:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using RolebaseAuthorization.Models;
using RolebaseAuthorization.ViewModel;

namespace RolebaseAuthorization.Controllers
{
    // Every action in this controller requires the Admin role.
    [Authorize(Roles = "Admin")]
    public class AdminController : Controller
    {
        private readonly UserManager<ApplicationUser> _userManager;
        private readonly RoleManager<IdentityRole> _roleManager;

        public AdminController(UserManager<ApplicationUser> userManager,
            RoleManager<IdentityRole> roleManager)
        {
            _userManager = userManager;
            _roleManager = roleManager;
        }

        public IActionResult Index()
        {
            return View();
        }

        // User management: list, add, edit, and delete application users.
        public IActionResult UserManagement()
        {
            var users = _userManager.Users;
            return View(users);
        }

        public IActionResult AddUser()
        {
            return View();
        }

        [HttpPost]
        public async Task<IActionResult> AddUser(AddUserViewModel addUserViewModel)
        {
            if (!ModelState.IsValid) return View(addUserViewModel);
            var user = new ApplicationUser()
            {
                UserName = addUserViewModel.UserName,
                Email = addUserViewModel.Email,
            };
            IdentityResult result = await _userManager.CreateAsync(user, addUserViewModel.Password);
            if (result.Succeeded)
            {
                return RedirectToAction("Index", "Home");
            }
            foreach (IdentityError error in result.Errors)
            {
                ModelState.AddModelError("", error.Description);
            }
            return View(addUserViewModel);
        }

        public async Task<IActionResult> EditUser(string id)
        {
            var user = await _userManager.FindByIdAsync(id);
            if (user == null)
                return RedirectToAction("UserManagement", _userManager.Users);
            var claims = await _userManager.GetClaimsAsync(user);
            var vm = new EditUserViewModel() { Id = user.Id, Email = user.Email, UserName = user.UserName };
            return View(vm);
        }

        [HttpPost]
        public async Task<IActionResult> EditUser(EditUserViewModel editUserViewModel)
        {
            var user = await _userManager.FindByIdAsync(editUserViewModel.Id);
            if (user != null)
            {
                user.Email = editUserViewModel.Email;
                user.UserName = editUserViewModel.UserName;
                var result = await _userManager.UpdateAsync(user);
                if (result.Succeeded)
                    return RedirectToAction("UserManagement", _userManager.Users);
                ModelState.AddModelError("", "User not updated, something went wrong.");
                return View(editUserViewModel);
            }
            return RedirectToAction("UserManagement", _userManager.Users);
        }

        [HttpPost]
        public async Task<IActionResult> DeleteUser(string id)
        {
            var user = await _userManager.FindByIdAsync(id);
            if (user != null)
            {
                IdentityResult result = await _userManager.DeleteAsync(user);
                if (result.Succeeded)
                    return RedirectToAction("UserManagement");
                else
                    ModelState.AddModelError("", "Something went wrong while deleting this user.");
            }
            else
            {
                ModelState.AddModelError("", "This user can't be found");
            }
            return View("UserManagement", _userManager.Users);
        }

        // Roles management: list, add, edit, and delete identity roles.
        public IActionResult RoleManagement()
        {
            var roles = _roleManager.Roles;
            return View(roles);
        }

        public IActionResult AddNewRole() => View();

        [HttpPost]
        public async Task<IActionResult> AddNewRole(AddRoleViewModel addRoleViewModel)
        {
            if (!ModelState.IsValid) return View(addRoleViewModel);
            var role = new IdentityRole
            {
                Name = addRoleViewModel.RoleName
            };
            IdentityResult result = await _roleManager.CreateAsync(role);
            if (result.Succeeded)
            {
                return RedirectToAction("RoleManagement", _roleManager.Roles);
            }
            foreach (IdentityError error in result.Errors)
            {
                ModelState.AddModelError("", error.Description);
            }
            return View(addRoleViewModel);
        }

        public async Task<IActionResult> EditRole(string id)
        {
            var role = await _roleManager.FindByIdAsync(id);
            if (role == null)
                return RedirectToAction("RoleManagement", _roleManager.Roles);
            var editRoleViewModel = new EditRoleViewModel
            {
                Id = role.Id,
                RoleName = role.Name,
                Users = new List<string>()
            };
            foreach (var user in _userManager.Users)
            {
                if (await _userManager.IsInRoleAsync(user, role.Name))
                    editRoleViewModel.Users.Add(user.UserName);
            }
            return View(editRoleViewModel);
        }

        [HttpPost]
        public async Task<IActionResult> EditRole(EditRoleViewModel editRoleViewModel)
        {
            var role = await _roleManager.FindByIdAsync(editRoleViewModel.Id);
            if (role != null)
            {
                role.Name = editRoleViewModel.RoleName;
                var result = await _roleManager.UpdateAsync(role);
                if (result.Succeeded)
                    return RedirectToAction("RoleManagement", _roleManager.Roles);
                ModelState.AddModelError("", "Role not updated, something went wrong.");
                return View(editRoleViewModel);
            }
            return RedirectToAction("RoleManagement", _roleManager.Roles);
        }

        [HttpPost]
        public async Task<IActionResult> DeleteRole(string id)
        {
            IdentityRole role = await _roleManager.FindByIdAsync(id);
            if (role != null)
            {
                var result = await _roleManager.DeleteAsync(role);
                if (result.Succeeded)
                    return RedirectToAction("RoleManagement", _roleManager.Roles);
                ModelState.AddModelError("", "Something went wrong while deleting this role.");
            }
            else
            {
                ModelState.AddModelError("", "This role can't be found.");
            }
            return View("RoleManagement", _roleManager.Roles);
        }

        // Membership management: assign users to roles and remove them.
        public async Task<IActionResult> AddUserToRole(string roleId)
        {
            var role = await _roleManager.FindByIdAsync(roleId);
            if (role == null)
                return RedirectToAction("RoleManagement", _roleManager.Roles);
            var addUserToRoleViewModel = new UserRoleViewModel { RoleId = role.Id };
            foreach (var user in _userManager.Users)
            {
                if (!await _userManager.IsInRoleAsync(user, role.Name))
                {
                    addUserToRoleViewModel.Users.Add(user);
                }
            }
            return View(addUserToRoleViewModel);
        }

        [HttpPost]
        public async Task<IActionResult> AddUserToRole(UserRoleViewModel userRoleViewModel)
        {
            var user = await _userManager.FindByIdAsync(userRoleViewModel.UserId);
            var role = await _roleManager.FindByIdAsync(userRoleViewModel.RoleId);
            var result = await _userManager.AddToRoleAsync(user, role.Name);
            if (result.Succeeded)
            {
                return RedirectToAction("RoleManagement", _roleManager.Roles);
            }
            foreach (IdentityError error in result.Errors)
            {
                ModelState.AddModelError("", error.Description);
            }
            return View(userRoleViewModel);
        }

        public async Task<IActionResult> DeleteUserFromRole(string roleId)
        {
            var role = await _roleManager.FindByIdAsync(roleId);
            if (role == null)
                return RedirectToAction("RoleManagement", _roleManager.Roles);
            var addUserToRoleViewModel = new UserRoleViewModel { RoleId = role.Id };
            foreach (var user in _userManager.Users)
            {
                if (await _userManager.IsInRoleAsync(user, role.Name))
                {
                    addUserToRoleViewModel.Users.Add(user);
                }
            }
            return View(addUserToRoleViewModel);
        }

        [HttpPost]
        public async Task<IActionResult> DeleteUserFromRole(UserRoleViewModel userRoleViewModel)
        {
            var user = await _userManager.FindByIdAsync(userRoleViewModel.UserId);
            var role = await _roleManager.FindByIdAsync(userRoleViewModel.RoleId);
            var result = await _userManager.RemoveFromRoleAsync(user, role.Name);
            if (result.Succeeded)
            {
                return RedirectToAction("RoleManagement", _roleManager.Roles);
            }
            foreach (IdentityError error in result.Errors)
            {
                ModelState.AddModelError("", error.Description);
            }
            return View(userRoleViewModel);
        }
    }
}

HOME CONTROLLER:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using RolebaseAuthorization.Data;
using RolebaseAuthorization.Models;

namespace RolebaseAuthorization.Controllers
{
    public class HomeController : Controller
    {
        private readonly ApplicationDbContext _context;
        private readonly UserManager<ApplicationUser> _userManager;
        private IHttpContextAccessor _contextAccessor;

        public HomeController(ApplicationDbContext context,
            UserManager<ApplicationUser> userManager,
            IHttpContextAccessor contextAccessor)
        {
            _context = context;
            _userManager = userManager;
            _contextAccessor = contextAccessor;
        }

        public IActionResult Index()
        {
            // Expose a flag to the view when the visitor is logged in.
            if (User.Identity.IsAuthenticated)
            {
                var userid = "YES";
                ViewData["userid"] = userid;
            }
            return View();
        }

        public IActionResult About()
        {
            ViewData["Message"] = "Your application description page.";
            return View();
        }

        public IActionResult Contact()
        {
            ViewData["Message"] = "Your contact page.";
            return View();
        }

        public IActionResult KeyGeneration()
        {
            return View();
        }

        public IActionResult Privacy()
        {
            return View();
        }

        [ResponseCache(Duration = 0, Location = ResponseCacheLocation.None, NoStore = true)]
        public IActionResult Error()
        {
            return View(new ErrorViewModel { RequestId = Activity.Current?.Id ?? HttpContext.TraceIdentifier });
        }
    }
}

CHAPTER 10
10. REFERENCES

[1] Y. Shi, Q. Zheng, J. Liu, and Z. Han, “Directly revocable key-policy attribute-
based encryption with verifiable ciphertext delegation,” Information Sciences,
vol. 295, pp. 221–231, 2015.
[2] J. Bethencourt, A. Sahai, and B. Waters, “Ciphertext-policy attribute-based
encryption,” in Security and Privacy, 2007. SP’07. IEEE Symposium on. IEEE,
2007, pp. 321–334.
[3] B. Waters, “Ciphertext-policy attribute-based encryption: An expressive,
efficient, and provably secure realization,” in International Workshop on Public
Key Cryptography. Springer, 2011, pp. 53–70.
[4] V. Goyal, A. Jain, O. Pandey, and A. Sahai, “Bounded ciphertext policy
attribute-based encryption,” in International Colloquium on Automata,
Languages, and Programming. Springer, 2008, pp. 579–591.
[5] H. Deng, Q. Wu, B. Qin, J. Domingo-Ferrer, L. Zhang, J. Liu, and W. Shi,
“Ciphertext-policy hierarchical attribute-based encryption with short
ciphertexts,” Information Sciences, vol. 275, pp. 370–384, 2014.
[6] A. Balu and K. Kuppusamy, “An expressive and provably secure ciphertext-
policy attribute-based encryption,” Information Sciences, vol. 276, pp. 354–362,
2014.

