INTRODUCTION
An adversary A cannot determine the keyword from the related ciphertext and valid
search tokens with probability better than a random keyword guess. This
property is formalized via the keyword secrecy game. Based on the received
search token, the cloud server can find the encrypted documents that contain the
intended keyword and were generated in the specified time interval. It then returns
the search result to the data users whose attributes satisfy the access control policy
enforced by the data owner.
Today, cloud computing plays an important role in our daily lives because
it provides efficient, reliable and scalable resources for data storage and
computation at very low cost.
However, the cloud's direct access to the sensitive information of its
users threatens their privacy. A trivial solution to this problem is to
encrypt data before outsourcing it to the cloud; however, searching over
encrypted data then becomes very difficult.
CHAPTER 2
ADVANTAGES OF THE PROPOSED SYSTEM
It uses a Trusted Third Party (TTP) to create a temporary key used to
secure and retrieve the data.
The data is more secure.
The scheme consists of five algorithms:
Setup,
KeyGen,
Enc,
TokenGen,
Search.
• (msk,pp) ← Setup(1λ): This algorithm is run by the TTP. It takes the security
parameter λ as input and generates the master secret key msk and the public
parameter pp.
• {0,1} := Search(cph,st): For each stored ciphertext cph and the received search
token st, which is associated with a specific keyword ω and attribute set Atts, this
algorithm outputs 1 if cph matches the search token and 0 otherwise.
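The five algorithms above can be sketched as follows. This is a minimal, illustrative Python sketch, not the scheme's actual construction: it models searchable tags with HMAC and ignores attributes, time intervals, and real payload encryption; all function names are chosen for this example.

```python
import hashlib
import hmac
import os

def setup(security_parameter=32):
    """Setup(1^lambda): generate the master secret key and public parameters."""
    msk = os.urandom(security_parameter)
    pp = {"hash": "sha256"}
    return msk, pp

def keygen(msk, user_id):
    """KeyGen: derive a per-user secret key from the master secret key."""
    return hmac.new(msk, user_id.encode(), hashlib.sha256).digest()

def enc(sk, keyword):
    """Enc: produce a searchable tag (toy 'ciphertext') for a keyword."""
    return hmac.new(sk, keyword.encode(), hashlib.sha256).hexdigest()

def tokengen(sk, keyword):
    """TokenGen: produce a search token for a keyword."""
    return hmac.new(sk, keyword.encode(), hashlib.sha256).hexdigest()

def search(cph, st):
    """Search(cph, st): 1 if the ciphertext matches the token, else 0."""
    return 1 if hmac.compare_digest(cph, st) else 0
```

For example, `search(enc(sk, "invoice"), tokengen(sk, "invoice"))` returns 1, while a token for a different keyword returns 0.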
2.4.2 USE CASE DIAGRAM
2.4.3 CLASS DIAGRAM
2.4.4 SEQUENCE DIAGRAM
2.4.5 STATE DIAGRAM
2.4.6 COMPONENT DIAGRAM
CHAPTER 3
Mouse : Logitech.
RAM : 2 GB (minimum)
Coding Language : C#
3.3 SOFTWARE DESCRIPTION
3.3.1 DOTNET
INTRODUCTION TO DOTNET
The most important features are described below.
MANAGED DATA
COMMON TYPE SYSTEM (CTS)
The CLR uses something called the Common Type System (CTS) to
strictly enforce type-safety. This ensures that all classes are compatible with each
other, by describing types in a common way. CTS define how types work within
the runtime, which enables types in one language to interoperate with types in
another language, including cross-language exception handling. As well as
ensuring that types are only used in appropriate ways, the runtime also ensures
that code doesn’t attempt to access memory that hasn’t been allocated to it.
The class library is organized into a number of namespaces, each providing
distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
Visual Basic .NET has been updated to include many new and
improved language features that make it a powerful object-oriented programming
language. These features include inheritance, interfaces, and overloading, among
others. Visual Basic also now supports structured exception handling, custom
attributes, and multi-threading.
Visual Basic .NET is also CLS compliant, which means that any
CLS-compliant language can use the classes, objects, and components you create
in Visual Basic .NET.
Microsoft Visual J# .NET provides the easiest transition for Java-
language developers into the world of XML Web Services and dramatically
improves the interoperability of Java-language programs with existing software
written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which
enable .NET-aware applications to be built in either Perl or Python. Both products
can be integrated into the Visual Studio .NET environment. Visual Perl includes
support for ActiveState's Perl Dev Kit.
3.3.2 FEATURES OF C#
1. C# is a simple, modern, object oriented language derived from C++ and Java.
2. It aims to combine the high productivity of Visual Basic and the raw power of
C++.
3. Visual Studio supports VB, VC++, C#, VBScript, and JScript; all of these can
be used to build .NET applications, including ASP.NET applications.
8. Source code --->Intermediate Language code (IL) ---> (JIT Compiler) Native
code.
9. The classes and data types are common to all of the .NET languages.
10. We may develop Console application, Windows application, and Web
application using C#.
11. In C#, Microsoft has taken care of C++ problems such as memory
management and pointers.
12. It supports garbage collection and automatic memory management.
TYPESAFE
1. In C# we cannot perform unsafe casts, such as converting a double to a Boolean.
2. Value types (primitive types) are initialized to zero and reference types
(objects and classes) are initialized to null by the compiler automatically.
3. Arrays are zero-based indexed and are bounds checked.
4. Overflow of types can be checked.
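As a rough illustration of point 3, the following sketch uses Python as a stand-in for C#'s bounds-checked arrays (the helper name `safe_write` is invented for this example): an out-of-range write is detected and rejected rather than silently corrupting adjacent memory.

```python
# Python stand-in for C#'s bounds-checked arrays: an out-of-range write
# raises an error instead of touching memory outside the array.
def safe_write(arr, index, value):
    """Attempt arr[index] = value; report False if the index is out of bounds."""
    try:
        arr[index] = value
        return True
    except IndexError:
        return False

data = [0, 0, 0]
in_bounds = safe_write(data, 2, 99)       # valid index: write succeeds
out_of_bounds = safe_write(data, 3, 99)   # past the end: checked and rejected
```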
INTEROPERABILITY
1. C# includes native support for COM and Windows-based applications.
2. It allows restricted use of native pointers.
3. Users no longer have to explicitly implement the IUnknown and other COM
interfaces; those features are built in.
4. C# allows the use of pointers in unsafe code blocks to interoperate with
legacy code.
Objectives of .NET Framework
1) Platform Independent
2) Language Independent
3) Language Interoperability
4) Security
5) Database Connectivity
6) Globalization of Application
Specification in ASP.net
Language Interoperability: Code written in one language can be used
from an application developed in another language.
The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language runtime
is the foundation of the .NET Framework. You can think of the runtime as an
agent that manages code at execution time, providing core services such as
memory management, thread management, and remoting, while also enforcing
strict type safety and other forms of code accuracy that promote security and
robustness. In fact, the concept of code management is a fundamental principle
of the runtime. Code that targets the runtime is known as managed code, while
code that does not target the runtime is known as unmanaged code. The class
library, the other main component of the .NET Framework, is a comprehensive,
object-oriented collection of reusable types that you can use to develop
applications ranging from traditional command-line or graphical user interface
(GUI) applications to applications based on the latest innovations provided by
ASP.NET, such as Web Forms and XML Web services.
This makes the deployment of mobile code (similar to Microsoft® ActiveX®
controls) possible, but with significant improvements that only managed code
can offer, such as semi-trusted execution and isolated file storage.
This consistent design also reduces the time associated with learning new
features of the .NET Framework. In addition, third-party components can
integrate seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of
interfaces that you can use to develop your own collection classes. Your
collection classes will blend seamlessly with the classes in the .NET
Framework. As you would expect from an object-oriented class library, the
.NET Framework types enable you to accomplish a range of common
programming tasks, including tasks such as string management, data collection,
database connectivity, and file access. In addition to these common tasks, the
class library includes types that support a variety of specialized development
scenarios. For example, you can use the .NET Framework to develop the
following types of applications and services:
Console applications.
Windows GUI applications (Windows Forms).
ASP.NET applications.
XML Web services.
Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable
types that vastly simplify Windows GUI development. If you write an ASP.NET
Web Form application, you can use the Web Forms classes.
The runtime also provides security, versioning and deployment support, a
simplified model for component interaction, and debugging and profiling services.
Type Safety: This feature ensures that objects are always accessed in
compatible ways. Therefore, the CLR will prohibit code from assigning a 10-byte
value to an object that occupies 8 bytes.
BENEFITS OF CLR
Performance improvement
The ability to easily use components developed in other languages.
Extensible types provided by library.
New Language features such as inheritance, interfaces etc.
Complete Object-Oriented design.
Very Strong Type Safety.
A good blend of Visual Basic simplicity and C++ power.
Syntax and keywords similar to C and C++.
Use of delegates rather than function pointers for increased type safety and
security.
ASP.NET OVERVIEW
ASP.Net works on top of the HTTP protocol and uses HTTP commands and
policies to establish two-way browser-to-server communication and cooperation.
The ASP.Net application codes could be written in either of the following
languages:
C#
JScript
J#
All client-side user activities are forwarded to the server for stateful
processing. The server processes the output of the client actions and triggers the
reactions.
Page state
Session state
The page state is the state of the client, i.e., the content of the various input
fields in the web form. The session state is the collective information obtained
from the various pages the user visited and worked with, i.e., the overall session
state. To clarify the concept, let us take the example of a shopping cart.
A user adds items to a shopping cart. Items are selected from one page, say the
items page, and the total number of items and the price are shown on a different
page, say the cart page. HTTP alone cannot keep track of all the information
coming from various pages. The ASP.Net session state and server-side
infrastructure keep track of the information collected globally over a session.
The ASP.Net runtime carries the page state to and from the server across page
requests, incorporating the state of the server-side components in hidden fields.
This way the server becomes aware of the overall application state and operates
in a two-tiered connected way.
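The shopping-cart discussion above can be sketched as a tiny server-side session store. This is a conceptual Python sketch, not the ASP.Net session API; all class and function names are invented for illustration.

```python
class SessionStore:
    """Server-side store keyed by session id, surviving across page requests."""
    def __init__(self):
        self._sessions = {}

    def get(self, session_id):
        return self._sessions.setdefault(session_id, {"cart": []})

store = SessionStore()

def items_page_add(session_id, item, price):
    """Handler for the items page: remember a selected item in the session."""
    store.get(session_id)["cart"].append((item, price))

def cart_page_total(session_id):
    """Handler for the cart page: total over everything added this session."""
    return sum(price for _, price in store.get(session_id)["cart"])
```

Items added on one "page" are visible to the other, which is exactly what HTTP alone cannot provide.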
ASP.Net is a technology, which works on the .Net framework that contains all
web-related functionalities. The .Net framework is made of an object-oriented
hierarchy. An ASP.Net web application is made of pages. When a user requests
an ASP.Net page, the IIS delegates the processing of the page to the ASP.Net
runtime system.
The ASP.Net runtime transforms the .aspx page into an instance of a class, which
inherits from the base class Page of the .Net framework. Therefore, each ASP.Net
page is an object and all its components i.e., the server-side controls are also
objects.
ASP.NET ARCHITECTURE
The architecture is explained from bottom to top in the following discussion.
The 4th layer of the framework consists of the Windows application model
and, in parallel, the Web application model. The Web application model,
presented in the slide as ASP.NET, includes Web Forms and Web
Services. ASP.NET comes with built-in Web Forms controls, which are
responsible for generating the user interface. They mirror typical HTML
widgets like text boxes or buttons. If these controls do not fit your needs,
you are free to create your own user controls. Web Services brings you a
model to bind different applications over the Internet. This model is based
on existing infrastructure and applications and is therefore standard-based,
simple, and adaptable. Web Services are software solutions delivered via
Internet to any device. Today, that means Web browsers on computers, for
the most part, but the device-agnostic design of .NET will eliminate this
limitation.
One of the obvious themes of .NET is unification and interoperability
between various programming languages. In order to achieve this, certain
rules must be laid down and all the languages must follow these rules. In other
words we cannot have languages running around creating their own
extensions and their own fancy new data types. CLS is the collection of the
rules and constraints that every language (that seeks to achieve .NET
compatibility) must follow.
The CLR and the .NET Framework in general, however, are designed in
such a way that code written in one language can seamlessly be used by, and
extended from, another language. Hence ASP.NET can be programmed in any of
the .NET compatible languages, whether VB.NET, C#, Managed C++
or JScript.NET.
A unified pipeline receives the first request and the following events take
place:
The Page class creates a hierarchical tree of all the controls on the page.
All the components on the page, except the directives are part of this control tree.
You can see the control tree by adding trace= "true" to the Page directive. We
will cover page directives and tracing under 'directives' and 'error handling'.
Initialization
Instantiation of the controls on the page
Page rendering
Understanding the page cycle helps in writing code that makes some specific
thing happen at a given stage of the page life cycle. It also helps in writing
custom controls, initializing them at the right time, populating their properties
with view-state data, and running control behaviour code.
Page rendering: At this stage, view state for the page and all controls is
saved. The page calls the Render method for each control and the output of
rendering is written to the OutputStream class of the page's Response
property.
Unload: The rendered page is sent to the client, and page properties such
as Response and Request are unloaded and all cleanup is done.
ASP.Net Page Life Cycle Events:
At each stage of the page life cycle, the page raises some events, which
could be coded. An event handler is basically a function or subroutine
bound to the event, using declarative attributes such as OnClick or the
Handles keyword.
PreInit: PreInit is the first event in page life cycle. It checks the
IsPostBack property and determines whether the page is a post back. It sets
the themes and master pages, creates dynamic controls and gets and sets
profile property values. This event can be handled by overloading the
OnPreInit method or creating a Page_PreInit handler.
Init: Init event initializes the control property and the control tree is built.
This event can be handled by overloading the OnInit method or creating a
Page_Init handler.
InitComplete: InitComplete event allows tracking of view state. All the
controls turn on view-state tracking.
LoadViewState: LoadViewState event allows loading view state
information into the controls.
LoadPostData: During this phase, the contents of all the input fields
defined with the <form> tag are processed.
PreLoad: PreLoad occurs before the post back data is loaded in the controls.
This event can be handled by overloading the OnPreLoad method or
creating a Page_PreLoad handler.
Load: The Load event is raised for the page first and then recursively for
all child controls. The controls in the control tree are created. This event
can be handled by overloading the OnLoad method or creating a
Page_Load handler.
LoadComplete: The loading process is completed, control event handlers
are run and page validation takes place. This event can be handled by
overloading the OnLoadComplete method or creating a
Page_LoadComplete handler.
PreRender: The PreRender event occurs just before the output is
rendered. By handling this event, pages and controls can perform any
updates before the output is rendered.
PreRenderComplete: As the PreRender event is recursively fired for all
child controls, this event ensures the completion of the pre-rendering
phase.
SaveStateComplete: The state of the controls on the page is saved.
Personalization, control state and view state information is saved. The
HTML markup is then generated; this stage can be handled by overriding
the Render method or creating a Page_Render handler.
Unload: The Unload phase is the last phase of the page life cycle. It raises
the Unload event for all controls recursively and lastly for the page itself.
Final cleanup is done and all resources and references, such as database
connections, are freed. This event can be handled by modifying the
OnUnload method or creating a Page_UnLoad handler.
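The ordered firing of these life-cycle events can be simulated with a small dispatcher. The following Python sketch is purely conceptual: the stage names mirror ASP.Net, but the Page class here is an invented stand-in, not the real System.Web.UI.Page.

```python
# The stage names mirror ASP.Net's page life cycle; the dispatch is plain Python.
LIFECYCLE = ["PreInit", "Init", "InitComplete", "LoadViewState", "LoadPostData",
             "PreLoad", "Load", "LoadComplete", "PreRender",
             "PreRenderComplete", "SaveStateComplete", "Unload"]

class Page:
    def __init__(self):
        self.handlers = {}   # stage name -> list of bound handlers
        self.trace = []      # stages in the order they fired

    def on(self, stage, handler):
        """Bind a handler to a life-cycle event (like Page_Load)."""
        self.handlers.setdefault(stage, []).append(handler)

    def process_request(self):
        """Fire every life-cycle event in its fixed order."""
        for stage in LIFECYCLE:
            self.trace.append(stage)
            for handler in self.handlers.get(stage, []):
                handler(self)

page = Page()
fired = []
page.on("Load", lambda p: fired.append("Page_Load ran"))
page.process_request()
```

Whatever handlers are bound, Load always fires after Init and before PreRender, which is the property the paragraphs above rely on.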
An ASP.Net page is made of a number of server controls along with HTML
controls, text and images. Sensitive data from the page and the states of the
different controls on the page are stored in hidden fields and form the context
of that page request.
ASP.Net runtime controls all association between a page instance and its state.
An ASP.Net page is an object of the Page Class or inherited from it.
All the controls on the pages are also objects of the related control class inherited
from a parent Control class. When a page is run, an instance of the page object is
created along with all its content controls.
An ASP.Net page is also a server side file saved with the .aspx extension. It is
modular in nature and can be divided into the following core sections:
Page directives
Code Section
Page Layout
Page directives:
The page directives set up the environments for the page to run. The @Page
directive defines page-specific attributes used by the ASP.Net page parser and
compiler. Page directives specify how the page should be processed, and which
assumptions are to be taken about the page.
Code Section:
The code section provides the handlers for the page and control events along with
other functions required. We mentioned that ASP.Net follows an object model.
Now, these objects raise events when something happens on the user interface,
such as a user clicking a button or moving the cursor. How should these events
be handled? That code is provided in the event handlers of the controls, which
are nothing but functions bound to the controls.
The code section or the code behind file provides all these event handler routines,
and other functions used by the developer. The page code could be precompiled
and deployed in the form of a binary assembly.
Page Layout:
The page layout provides the interface of the page. It contains the server
controls, text, inline JavaScript and HTML tags.
SQL Server 2005 will soon be reaching its three-year mark, which in terms of
the software life cycle translates into fairly advanced maturity. While this is still
far from retirement age, the name of its successor, SQL Server 2012, suggests
that it might be time for you to start looking into what the new generation has to
offer. The release of SQL Server 2012, originally introduced as Yukon, has
already been postponed, but its current Beta 2 implementation (with several
incremental Community Technology Previews expected before Beta 3 becomes
available early next year) brings promise of a timely RTM stage (planned for
summer next year).
In this series of articles, we will look into functional highlights of the new
incarnation of the Microsoft database management system, focusing on those that
are likely to remain unchanged in the final product.
Improvements to the database engine, the details of which are not published by
Microsoft, and the corresponding changes to the main infrastructure components
are reflected by a substantial number of new features as well as enhancements to
existing ones. The most relevant ones can be grouped into several categories, such
as high availability and scalability, security, data management, administration
and maintenance, and development.
SQL Server Standard Edition - offering the most diverse set of features and
intended for the majority of clients.
SQL Server 2012 Express Edition - serving as the replacement for
Microsoft Data Engine (MSDE) and available for download from t. Like
its predecessor, it was designed with developers in mind, however, unlike
the previous version, it also includes a Web based management interface.
SQL Server 2012 Mobile Edition - as a successor to SQL Server 2012
Windows CE Edition, it is intended for Windows mobile-based devices,
such as Tablet PCs, Pocket PCs, and smartphones.
FEATURES OF SQL SERVER
The following is a list of the new features provided in SQL Server 2012:
Database mirroring
Database snapshots
CLR integration
Service Broker
Database Mail
User-defined functions
Indexed views
Distributed partitioned views
INSTEAD OF and AFTER triggers
New data types
Cascading RI constraints
Multiple SQL Server instances
XML support
Log shipping
Database mirroring
Database mirroring is a new high-availability feature in SQL Server 2012.
It's similar to server clustering in that failover is achieved by the use of a standby
server; the difference is that the failover is at the database level rather than the
server level. The primary database continuously sends transaction logs to the
backup database on a separate SQL Server instance. A third SQL Server instance
is then used as a witness database to monitor the interaction between the primary
and the mirror databases.
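A toy model of the mirroring roles described above may help. The Python sketch below is conceptual only: it ships each "log record" to the mirror synchronously and omits the witness, real transaction logs, and failover logic; all names are invented for this example.

```python
class Database:
    """Toy database: a transaction log plus the state it produces."""
    def __init__(self):
        self.log = []
        self.data = {}

    def apply(self, record):
        key, value = record
        self.log.append(record)
        self.data[key] = value

principal, mirror = Database(), Database()

def commit(record):
    """Commit on the principal and continuously ship the log record."""
    principal.apply(record)
    mirror.apply(record)   # the standby stays in step at the database level

commit(("account", "open"))
commit(("balance", "100"))
```

After these commits the mirror holds the same state as the principal, so a database-level failover loses nothing.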
Database snapshots
A database snapshot is essentially an instant read-only copy of a database,
and it is a great candidate for any type of reporting solution for your company. In
addition to being a great reporting tool, you can revert control from your primary
database to your snapshot database in the event of an error. The only data loss
would be from the point of creation of the database snapshot to the event of
failure.
CLR integration
With SQL Server 2012, you now have the ability to create custom .NET
objects within the database engine. For example, stored procedures, triggers, and
functions can now be created using familiar .NET languages such as VB and
C#. Exposing this functionality gives you tools that you never had access to
before, such as regular expressions.
Service Broker
This feature gives you the ability to create asynchronous, message-based
applications in the database entirely through TSQL. The database engine
guarantees message delivery, message order consistency, and handles message
grouping. In addition, Service Broker gives you the ability to send messages
between different SQL Server instances. Service Broker is also used by several
other features in SQL Server 2012. For example, you can define Event
Notifications in the database to send a message to a queue in the database when
someone attempts to alter a table structure, or if there is a string of login failures.
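The in-order, once-only delivery described above can be illustrated with a minimal queue per conversation. This Python sketch is a conceptual stand-in for Service Broker, not its TSQL interface; the Broker class is invented for this example.

```python
from collections import deque

class Broker:
    """Toy per-conversation queues: in-order, once-only delivery."""
    def __init__(self):
        self.queues = {}

    def send(self, conversation, message):
        self.queues.setdefault(conversation, deque()).append(message)

    def receive(self, conversation):
        q = self.queues.get(conversation)
        return q.popleft() if q else None   # empty queue -> nothing to deliver

broker = Broker()
broker.send("audit", "table altered")
broker.send("audit", "login failed x5")
```

Messages sent to the same conversation come back in the order they were queued, and each one is delivered exactly once.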
Database Mail
Database Mail, the eventual successor to SQL Mail, is a greatly enhanced
e-mail solution available in the database engine. With Database Mail, there is
no longer a dependency on Microsoft Outlook or MAPI e-mail clients. Database
Mail uses standard SMTP to send e-mail messages. These messages may contain
query results and attachments (which can be governed by the DBA), and the
solution is fully cluster aware. In addition, the e-mail process runs outside of the
database engine space, which means that messages can continue to be queued
even when the database engine has stopped.
User-Defined Functions
SQL Server has always provided the ability to store and execute SQL code
routines via stored procedures. In addition, SQL Server has always supplied a
number of built-in functions. Functions can be used almost anywhere an
expression can be specified in a query. This was one of the shortcomings of stored
procedures: they couldn't be used inline in queries, in select lists, where clauses,
and so on. Perhaps you want to write a routine to calculate the last business day
of the month. With a stored procedure, you have to exec the procedure, passing
in the current month as a parameter and returning the value into an output
variable, and then use the variable in your queries. If only you could write your
own function that you could use directly in the query just like a system function.
In SQL Server 2012, you can.
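The same idea can be demonstrated in miniature with SQLite from Python (used here because a T-SQL environment is not assumed): a user-defined scalar function is registered and then called directly in the select list, with no exec/output-variable dance. The `last_business_day` helper and the table are invented for this example.

```python
import calendar
import sqlite3

def last_business_day(year, month):
    """Day number of the last weekday (Mon-Fri) of the given month."""
    day = calendar.monthrange(year, month)[1]          # last day of the month
    while calendar.weekday(year, month, day) >= 5:     # 5 = Sat, 6 = Sun
        day -= 1
    return day

conn = sqlite3.connect(":memory:")
conn.create_function("last_business_day", 2, last_business_day)
conn.execute("CREATE TABLE invoices (id INTEGER, year INTEGER, month INTEGER)")
conn.execute("INSERT INTO invoices VALUES (1, 2024, 3), (2, 2024, 6)")

# The UDF is used directly in the select list, like a system function.
rows = conn.execute(
    "SELECT id, last_business_day(year, month) FROM invoices ORDER BY id"
).fetchall()
```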
Indexed Views
Views are often used to simplify complex queries, and they can contain joins and
aggregate functions. However, in the past, queries against views were resolved to
queries against the underlying base tables, and any aggregates were recalculated
each time you ran a query against the view. In SQL Server 2012 Enterprise or
Developer Edition, you can define indexes on views to improve query
performance against the view. When creating an index on a view, the result set of
the view is stored and indexed in the database. Existing applications can take
advantage of the performance improvements without needing to be modified.
Indexed views can improve query performance in several scenarios.
Distributed Partitioned Views
SQL Server 7.0 provided the ability to create partitioned views using the
UNION ALL statement in a view definition. It was limited, however, in that all
the tables had to reside within the same SQL Server where the view was defined.
SQL Server 2012 expands the ability to create partitioned views by allowing you
to horizontally partition tables across multiple SQL Servers. The feature helps
you scale out one database server to multiple database servers, while making the
data appear as if it comes from a single table on a single SQL Server. In addition,
partitioned views are now able to be updated.
INSTEAD OF and AFTER Triggers
In versions of SQL Server prior to 7.0, a table could not have more than one
trigger defined for INSERT, UPDATE, and DELETE. These triggers only fired
after the data modification took place. SQL Server 7.0 introduced the ability to
define multiple AFTER triggers for the same operation on a table. SQL Server
2012 extends this capability by providing the ability to define which AFTER
trigger fires first and which fires last.
New Data Types
SQL Server 2012 introduces three new data types. Two of these can be used as
datatypes for local variables, stored procedure parameters and return values, user-
defined function parameters and return values, or table columns:
The third new datatype, the table datatype, can be used only as a local variable
datatype within functions, stored procedures, and SQL batches. The table
datatype cannot be passed as a parameter to functions or stored procedures, nor
can it be used as a column datatype. A variable defined with the table datatype
can be used to store a result set for later processing. A table variable can be used
in queries anywhere a table can be specified.
Text in Row Data
In previous versions of SQL Server, text and image data was always stored
on a separate page chain from where the actual data row resided. The data row
contained only a pointer to the text or image page chain, regardless of the size of
the text or image data. SQL Server 2012 provides a new text in row table option
that allows small text and image data values to be placed directly in the data row,
instead of requiring a separate data page. This can reduce the amount of space
required to store small text and image data values, as well as reduce the amount
of I/O required to retrieve rows containing small text and image data values.
Cascading RI Constraints
Multiple SQL Server Instances
SQL Server 2012 provides support for running multiple instances of SQL
Server on the same system. This allows you to simultaneously run one instance
of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2012.
Each SQL Server instance runs independently of the others and has its own set of
system and user databases, security configuration, and so on. Applications can
connect to the different instances in the same way they connect to different SQL
Servers on different machines.
XML Support
SQL Server 2012 provides native support for XML. This new feature provides
a range of capabilities for storing and querying XML data.
Log Shipping
The Enterprise Edition of SQL Server 2012 now supports log shipping, which
you can use to copy and load transaction log backups from one database to one
or more databases on a constant basis. This allows you to have a primary
read/write database with one or more read-only copies of the database that are
kept synchronized by restoring the logs from the primary database. The
destination database can be used as a warm standby for the primary database, for
which you can switch users over in the event of a primary database failure.
Additionally, log shipping provides a way to offload read-only query processing
from the primary database to the destination database.
DDL triggers
In previous articles, I outlined how you can use data definition language (DDL)
triggers in SQL Server 2012 to implement custom database and server auditing
solutions for Sarbanes-Oxley compliance (here are part one and part two of my
SOX articles). DDL triggers are defined at the server or database level and fire
when DDL statements occur. This gives you the ability to audit when new tables,
stored procedures, or logins are created.
Ranking functions
SQL Server 2012 provides you with the ability to rank result sets returned from
the database engine. This allows you to customize the manner in which result sets
are returned, such as creating customized paging functions for Web site data.
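The paging use of ranking can be sketched without a database at all. The following Python sketch is an illustrative stand-in for ROW_NUMBER-style ranking, not SQL Server's syntax; the function and data are invented for this example.

```python
def rank_and_page(rows, key, page, page_size):
    """Rank rows descending by key and return one page of (rank, row) pairs."""
    ranked = sorted(rows, key=key, reverse=True)
    start = (page - 1) * page_size
    chunk = ranked[start:start + page_size]
    return [(start + i + 1, row) for i, row in enumerate(chunk)]

products = [("gadget", 90), ("widget", 75), ("gizmo", 60), ("doohickey", 40)]
page2 = rank_and_page(products, key=lambda r: r[1], page=2, page_size=2)
```

Each page carries its running rank numbers, which is what a Web site's paged listing needs.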
Row versioning-based isolation levels
This new database engine feature improves database read concurrency by
reducing the number of locks used in your database. There are two versions
of this feature (both of which must be enabled at the database level):
Read Committed Isolation Using Row Versioning is used at the individual
statement level, and guarantees that the data is consistent for the duration of the
statement.
CHAPTER 4
4. LITERATURE SURVEY
TITLE: Anonymous hierarchical identity-based encryption.
AUTHOR: X. Boyen and B. Waters.
YEAR: 2014
TITLE: Efficient public key encryption with revocable keyword search.
YEAR: 2015
TITLE: A secure and dynamic multi-keyword ranked search scheme over
encrypted cloud data
AUTHOR: Z. Xia, X. Wang, X. Sun, and Q. Wang.
YEAR: 2016
With the increasing popularity of cloud computing, more
and more data owners are motivated to outsource their data to cloud servers for
great convenience and reduced cost in data management. However, sensitive data
should be encrypted before outsourcing to meet privacy requirements, which
obsoletes data utilization like keyword-based document retrieval. In this paper,
we present a secure multi-keyword ranked search scheme over encrypted cloud
data, which simultaneously supports dynamic update operations like deletion and
insertion of documents. Specifically, the vector space model and the widely-used
TF x IDF model are combined in the index construction and query generation.
We construct a special tree-based index structure and propose a “Greedy Depth-
first Search” algorithm to provide efficient multi-keyword ranked search. The
secure kNN algorithm is utilized to encrypt the index and query vectors, and
meanwhile ensure accurate relevance score calculation between encrypted index
and query vectors. In order to resist statistical attacks, phantom terms are added
to the index vector for blinding search results. Due to the use of our special tree-
based index structure, the proposed scheme can achieve sub-linear search time
and deal with the deletion and insertion of documents flexibly. Extensive
experiments are conducted to demonstrate the efficiency of the proposed scheme.
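The vector space model with TF x IDF weighting mentioned above can be sketched in plain Python. This is a toy illustration only: no encryption, no secure kNN, no tree-based index; the documents and function names are invented for this example.

```python
import math

def tfidf_vectors(docs):
    """Build a TF x IDF weight vector for each document (bag of words)."""
    n = len(docs)
    df = {}                                  # document frequency per term
    for words in docs.values():
        for w in set(words):
            df[w] = df.get(w, 0) + 1
    vecs = {}
    for doc_id, words in docs.items():
        vecs[doc_id] = {w: (words.count(w) / len(words)) * (math.log(n / df[w]) + 1)
                        for w in set(words)}
    return vecs

def ranked_search(vecs, query_terms):
    """Score each document against the query terms; return matches, best first."""
    scores = {d: sum(v.get(t, 0.0) for t in query_terms) for d, v in vecs.items()}
    return sorted((d for d in scores if scores[d] > 0), key=scores.get, reverse=True)

docs = {"d1": ["cloud", "storage", "cloud"],
        "d2": ["storage", "privacy"],
        "d3": ["privacy", "encryption", "cloud"]}
hits = ranked_search(tfidf_vectors(docs), ["cloud", "encryption"])
```

Documents containing more of the query terms, weighted by how rare those terms are, rank higher; documents matching nothing are excluded.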
TITLE: Towards privacy preserving storage and retrieval in multiple clouds.
AUTHOR: J. Li, D. Lin, A. C. Squicciarini, J. Li, and C. Jia.
YEAR: 2017
Cloud computing is growing exponentially,
whereby there are now hundreds of cloud service providers (CSPs) of various
sizes. While cloud consumers may enjoy the cheaper data storage and
computation offered in this multi-cloud environment, they also face more
complicated reliability issues and privacy preservation problems with their
outsourced data. Though searchable encryption allows users to encrypt their
stored data while preserving some search capabilities, few efforts have sought to
consider the reliability of the searchable encrypted data outsourced to the clouds.
In this paper, we propose a privacy-preserving Storage and REtrieval (STRE)
mechanism that not only ensures security and privacy but also provides reliability
guarantees for the outsourced searchable encrypted data. The STRE mechanism
enables the cloud users to distribute and search their encrypted data across
multiple independent clouds managed by different CSPs, and is robust even when
a certain number of CSPs crash. Besides the reliability, STRE also offers the
benefit of partially hidden search pattern. We evaluate the STRE mechanism on
Amazon EC2 using a real world dataset and the results demonstrate both
effectiveness and efficiency of our approach.
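The reliability idea behind distributing encrypted data across independent clouds can be illustrated with a toy replication scheme: each encrypted block is stored on several providers, so retrieval survives provider crashes. This is plain replication under assumed names, not the STRE construction itself, which additionally hides part of the search pattern.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy multi-cloud store: every encrypted block is replicated on `replicas`
// of the `providers` clouds, tolerating up to replicas-1 provider crashes.
public class MultiCloudStore
{
    private readonly List<Dictionary<string, byte[]>> _clouds;
    private readonly bool[] _up;
    private readonly int _replicas;

    public MultiCloudStore(int providers, int replicas)
    {
        _clouds = Enumerable.Range(0, providers)
                            .Select(_ => new Dictionary<string, byte[]>()).ToList();
        _up = Enumerable.Repeat(true, providers).ToArray();
        _replicas = replicas;
    }

    public void Crash(int provider) => _up[provider] = false;

    public void Put(string id, byte[] encryptedBlock)
    {
        // Place replicas on distinct providers, starting from a hash of the id.
        int start = (id.GetHashCode() & 0x7fffffff) % _clouds.Count;
        for (int i = 0; i < _replicas; i++)
            _clouds[(start + i) % _clouds.Count][id] = encryptedBlock;
    }

    public byte[] Get(string id)
    {
        // Any surviving replica suffices for retrieval.
        for (int i = 0; i < _clouds.Count; i++)
            if (_up[i] && _clouds[i].TryGetValue(id, out var block))
                return block;
        return null;   // lost only if every replica's provider is down
    }
}
```

A production design would use erasure coding rather than full replication to cut the storage overhead.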
TITLE: Protecting your right: Attribute-based keyword search with fine-grained
owner-enforced search authorization in the cloud.
AUTHOR: W. Sun, S. Yu, W. Lou, Y. T. Hou, and H. Li
YEAR: 2016
Search over encrypted data is a critically important enabling technique
in cloud computing, where encryption-before-outsourcing is a fundamental
solution to protecting user data privacy in the untrusted cloud server
environment. Many secure search schemes have focused on the
single-contributor scenario, where the outsourced dataset or the secure
searchable index of the dataset is encrypted and managed by a single owner,
typically based on symmetric cryptography. In this paper, we focus on a
different yet more challenging scenario where the outsourced dataset can be
contributed by multiple owners and is searchable by multiple users, i.e. the
multi-user multi-contributor case. Inspired by attribute-based encryption
(ABE), we present the first attribute-based keyword search scheme with
efficient user revocation (ABKS-UR) that enables scalable fine-grained
(i.e. file-level) search authorization. Our scheme allows multiple owners to
encrypt and outsource their data to the cloud server independently. Users can
generate their own search capabilities without relying on an always-online
trusted authority. Fine-grained search authorization is also implemented by
the owner-enforced access policy on the index of each file. Further, by
incorporating proxy re-encryption and lazy re-encryption techniques, we can
delegate the heavy system update workload during user revocation to the
resourceful semi-trusted cloud server. We formalize the security definition
and prove the proposed ABKS-UR scheme selectively secure against
chosen-keyword attack.
CHAPTER 5
5. MODULES DESCRIPTION
5.1 MODULES USED
Data Owner
Data User
Trusted Third Party
Cloud Service Provider
5.1.1 Data Owner
Registration
Login
Uploading files
Secret key
The data owner uploads files to the cloud using a secret key, so each file is
stored in encrypted form.
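The upload step above (the owner encrypts each file under a secret key before it reaches the cloud) can be sketched with .NET's built-in AES support; key storage and distribution are simplified here for illustration.

```csharp
using System;
using System.Security.Cryptography;

public static class FileCipher
{
    // Encrypt a file's bytes with AES-CBC; the random IV is prepended to the
    // output so decryption needs only the ciphertext and the secret key.
    public static byte[] Encrypt(byte[] plaintext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.GenerateIV();
        using var enc = aes.CreateEncryptor();
        byte[] body = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
        byte[] output = new byte[aes.IV.Length + body.Length];
        aes.IV.CopyTo(output, 0);
        body.CopyTo(output, aes.IV.Length);
        return output;
    }

    public static byte[] Decrypt(byte[] ciphertext, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        byte[] iv = new byte[aes.BlockSize / 8];   // recover the prepended IV
        Array.Copy(ciphertext, iv, iv.Length);
        aes.IV = iv;
        using var dec = aes.CreateDecryptor();
        return dec.TransformFinalBlock(ciphertext, iv.Length,
                                       ciphertext.Length - iv.Length);
    }
}
```

With this in place, only ciphertext ever leaves the owner's machine; the cloud stores the encrypted bytes without access to the key.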
5.1.2 Data User
Registration
Login
Getting key from TTP
Search token
The data user requests a file from the data owner and obtains a key from the
trusted third party in order to access the file from the cloud.
5.1.3 Trusted Third Party
The TTP is a fully trusted entity that receives each user's access tree and
generates the user's secret keys corresponding to the attribute set presented
in that access tree. It then sends the users' credentials back through a
secure and authenticated channel. The TTP also generates the keys used by the
data owner and the cloud user to access files.
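The TTP's role can be sketched as follows. This toy version derives a per-user key from a master secret and the user's attribute set with HMAC-SHA256; the actual scheme issues attribute-based keys over an access tree, so this only illustrates the "TTP holds the master key, users receive derived credentials" flow.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class ToyTtp
{
    // The TTP's master secret key (msk in the scheme's notation).
    public static byte[] MasterKey() => RandomNumberGenerator.GetBytes(32);

    // Derive a user's credential from the master key, the user id, and the
    // user's attribute set. Deterministic: the same inputs yield the same key.
    public static byte[] UserKey(byte[] masterKey, string userId, string[] attributes)
    {
        string material = userId + "|" + string.Join(",", attributes);
        using var hmac = new HMACSHA256(masterKey);
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(material));
    }
}
```

In the real system the derived credential would be returned to the user over a secure, authenticated channel, as described above.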
CHAPTER 6
6. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of
trying to uncover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies,
assemblies and/or a finished product. It is the process of exercising software
with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There are
various types of tests; each test type addresses a specific testing requirement.
The task of integration testing is to check that components or software
applications (e.g. components in a software system or, one step up, software
applications at the company level) interact without error.
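A concrete example of the kind of test described above, written as a minimal self-checking harness. The transform under test is a simple stand-in for the real encryption routine, and all names are illustrative.

```csharp
using System;
using System.Text;

public static class MiniTests
{
    // Stand-in component under test: a reversible XOR transform.
    static byte[] Transform(byte[] data, byte key)
    {
        var result = new byte[data.Length];
        for (int i = 0; i < data.Length; i++) result[i] = (byte)(data[i] ^ key);
        return result;
    }

    // Runs every check, prints PASS/FAIL per test, returns the failure count.
    public static int Run()
    {
        int failures = 0;
        void Check(string name, bool ok)
        {
            if (!ok) failures++;
            Console.WriteLine($"{name}: {(ok ? "PASS" : "FAIL")}");
        }

        byte[] msg = Encoding.UTF8.GetBytes("hello");
        // Unit test: the transform must be reversible (round-trip).
        Check("roundtrip",
              Encoding.UTF8.GetString(Transform(Transform(msg, 0x5A), 0x5A)) == "hello");
        // Unit test: the transform must actually change the data.
        Check("not-identity", Transform(msg, 0x5A)[0] != msg[0]);
        return failures;
    }
}
```

Each check exercises one component in isolation; an integration test would instead drive two or more components together and assert on the combined result.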
7. SCREEN SHOTS
USER ADMIN:
HOME PAGE:
DATA OWNER:
UPLOAD FILE:
TEMPORARY KEY:
ENCRYPT FILES:
ARBITRARY KEY:
OUTPUT:
FILE DOCUMENT:
OUTPUT:
CHAPTER 8
8.1 CONCLUSION
We assign a time interval to each encryption key, so that a key is valid only
for documents generated within that interval. Performance evaluation shows our
scheme's practicality: the complexity of the encryption algorithm is linear
with respect to the number of the involved attributes.
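The time-interval idea in the conclusion (each encryption key is valid only within its assigned interval, so search tokens match only ciphertexts generated in that interval) can be sketched as follows; the type and member names are illustrative.

```csharp
using System;

// A key record that is usable only inside its assigned time interval,
// mirroring the temporary-key idea: tokens timestamped outside the
// interval do not match ciphertexts produced under this key.
public sealed class TimedKey
{
    public byte[] Key { get; }
    public DateTime NotBefore { get; }
    public DateTime NotAfter { get; }

    public TimedKey(byte[] key, DateTime notBefore, DateTime notAfter)
    {
        Key = key;
        NotBefore = notBefore;
        NotAfter = notAfter;
    }

    // A search token timestamped at 'when' matches this key only in-interval.
    public bool ValidAt(DateTime when) => when >= NotBefore && when <= NotAfter;
}
```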
CHAPTER 9
APPENDIX: CODING
HOME PAGE:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace RolebaseAuthorization
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}
KEY GENERATION:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace RolebaseAuthorization.Models
{
    public class KeyGeneration
    {
        public int Id { get; set; }
        public string KeyGenerationValue { get; set; }
        public string ApplicationMemberId { get; set; }
        public ApplicationUser ApplicationUser { get; set; }
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace RolebaseAuthorization.Models
{
    public class Subscribers
    {
        public int Id { get; set; }
        public string Email { get; set; }
        public string Password { get; set; }
        public string Address { get; set; }
        public string PhoneNumber { get; set; }
    }
}
ADMIN CONTROLLER:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using RolebaseAuthorization.Models;
using RolebaseAuthorization.ViewModel;
namespace RolebaseAuthorization.Controllers
{
[Authorize(Roles="Admin")]
public class AdminController : Controller
{
private readonly UserManager<ApplicationUser> _userManager;
private readonly RoleManager<IdentityRole> _roleManager;
public AdminController(UserManager<ApplicationUser> userManager,
RoleManager<IdentityRole> roleManager)
{
_userManager = userManager;
_roleManager = roleManager;
}
}
return View(addUserViewModel);
}
public async Task<IActionResult> EditUser(string id)
{
var user = await _userManager.FindByIdAsync(id);
if (user == null)
return RedirectToAction("UserManagement", _userManager.Users);
var claims = await _userManager.GetClaimsAsync(user);
var vm = new EditUserViewModel() { Id = user.Id, Email = user.Email,
UserName = user.UserName };
return View(vm);
}
[HttpPost]
public async Task<IActionResult> EditUser(EditUserViewModel
editUserViewModel)
{
var user = await _userManager.FindByIdAsync(editUserViewModel.Id);
if (user != null)
{
user.Email = editUserViewModel.Email;
user.UserName = editUserViewModel.UserName;
var result = await _userManager.UpdateAsync(user);
if (result.Succeeded)
return RedirectToAction("UserManagement", _userManager.Users);
ModelState.AddModelError("", "User not updated, something went wrong.");
return View(editUserViewModel);
}
return RedirectToAction("UserManagement", _userManager.Users);
}
[HttpPost]
public async Task<IActionResult> DeleteUser(string id)
{
var user = await _userManager.FindByIdAsync(id);
if (user != null)
{
IdentityResult result = await _userManager.DeleteAsync(user);
if (result.Succeeded)
return RedirectToAction("UserManagement");
else
ModelState.AddModelError("", "Something went wrong while deleting this
user.");
}
else
{
ModelState.AddModelError("", "This user can't be found");
}
return View("UserManagement", _userManager.Users);
}
//Roles management
public IActionResult RoleManagement()
{
var roles = _roleManager.Roles;
return View(roles);
}
[HttpPost]
public async Task<IActionResult> AddNewRole(AddRoleViewModel
addRoleViewModel)
{
if (!ModelState.IsValid) return View(addRoleViewModel);
var role = new IdentityRole
{
Name = addRoleViewModel.RoleName
};
IdentityResult result = await _roleManager.CreateAsync(role);
if (result.Succeeded)
{
return RedirectToAction("RoleManagement", _roleManager.Roles);
}
foreach (IdentityError error in result.Errors)
{
ModelState.AddModelError("", error.Description);
}
return View(addRoleViewModel);
}
public async Task<IActionResult> EditRole(string id)
{
var role = await _roleManager.FindByIdAsync(id);
if (role == null)
return RedirectToAction("RoleManagement", _roleManager.Roles);
var editRoleViewModel = new EditRoleViewModel
{
Id = role.Id,
RoleName = role.Name,
Users = new List<string>()
};
foreach (var user in _userManager.Users)
{
if (await _userManager.IsInRoleAsync(user, role.Name))
editRoleViewModel.Users.Add(user.UserName);
}
return View(editRoleViewModel);
}
[HttpPost]
public async Task<IActionResult> EditRole(EditRoleViewModel
editRoleViewModel)
{
var role = await _roleManager.FindByIdAsync(editRoleViewModel.Id);
if (role != null)
{
role.Name = editRoleViewModel.RoleName;
var result = await _roleManager.UpdateAsync(role);
if (result.Succeeded)
return RedirectToAction("RoleManagement", _roleManager.Roles);
ModelState.AddModelError("", "Role not updated, something went wrong.");
return View(editRoleViewModel);
}
return RedirectToAction("RoleManagement", _roleManager.Roles);
}
[HttpPost]
public async Task<IActionResult> DeleteRole(string id)
{
IdentityRole role = await _roleManager.FindByIdAsync(id);
if (role != null)
{
var result = await _roleManager.DeleteAsync(role);
if (result.Succeeded)
return RedirectToAction("RoleManagement", _roleManager.Roles);
ModelState.AddModelError("", "Something went wrong while deleting this
role.");
}
else
{
ModelState.AddModelError("", "This role can't be found.");
}
return View("RoleManagement", _roleManager.Roles);
}
public async Task<IActionResult> AddUserToRole(string roleId)
{
var role = await _roleManager.FindByIdAsync(roleId);
if (role == null)
return RedirectToAction("RoleManagement", _roleManager.Roles);
var addUserToRoleViewModel = new UserRoleViewModel { RoleId = role.Id };
foreach (var user in _userManager.Users)
{
if (!await _userManager.IsInRoleAsync(user, role.Name))
{
addUserToRoleViewModel.Users.Add(user);
}
}
return View(addUserToRoleViewModel);
}
[HttpPost]
public async Task<IActionResult> AddUserToRole(UserRoleViewModel
userRoleViewModel)
{
var user = await _userManager.FindByIdAsync(userRoleViewModel.UserId);
var role = await _roleManager.FindByIdAsync(userRoleViewModel.RoleId);
var result = await _userManager.AddToRoleAsync(user, role.Name);
if (result.Succeeded)
{
return RedirectToAction("RoleManagement", _roleManager.Roles);
}
foreach (IdentityError error in result.Errors)
{
ModelState.AddModelError("", error.Description);
}
return View(userRoleViewModel);
}
public async Task<IActionResult> DeleteUserFromRole(string roleId)
{
var role = await _roleManager.FindByIdAsync(roleId);
if (role == null)
return RedirectToAction("RoleManagement", _roleManager.Roles);
var addUserToRoleViewModel = new UserRoleViewModel { RoleId = role.Id };
foreach (var user in _userManager.Users)
{
if (await _userManager.IsInRoleAsync(user, role.Name))
{
addUserToRoleViewModel.Users.Add(user);
}
}
return View(addUserToRoleViewModel);
}
[HttpPost]
public async Task<IActionResult> DeleteUserFromRole(UserRoleViewModel
userRoleViewModel)
{
var user = await _userManager.FindByIdAsync(userRoleViewModel.UserId);
var role = await _roleManager.FindByIdAsync(userRoleViewModel.RoleId);
var result = await _userManager.RemoveFromRoleAsync(user, role.Name);
if (result.Succeeded)
{
return RedirectToAction("RoleManagement", _roleManager.Roles);
}
foreach (IdentityError error in result.Errors)
{
ModelState.AddModelError("", error.Description);
}
return View(userRoleViewModel);
}
}
}
HOME CONTROLLER:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using RolebaseAuthorization.Data;
using RolebaseAuthorization.Models;
namespace RolebaseAuthorization.Controllers
{
public class HomeController : Controller
{
private readonly ApplicationDbContext _context;
private readonly UserManager<ApplicationUser> _userManager;
private IHttpContextAccessor _contextAccessor;
public HomeController(ApplicationDbContext context,
UserManager<ApplicationUser> userManager,
IHttpContextAccessor contextAccessor)
{
_context = context;
_userManager = userManager;
_contextAccessor = contextAccessor;
}
public IActionResult Index()
{
if (User.Identity.IsAuthenticated)
{
var userid = "YES";
ViewData["userid"] = userid;
}
return View();
}
public IActionResult About()
{
ViewData["Message"] = "Your application description page.";
return View();
}
public IActionResult Contact()
{
ViewData["Message"] = "Your contact page.";
return View();
}
public IActionResult KeyGeneration()
{
return View();
}
public IActionResult Privacy()
{
return View();
}
public IActionResult Error()
{
return View(new ErrorViewModel { RequestId = Activity.Current?.Id ??
HttpContext.TraceIdentifier });
}
}
}
CHAPTER 10
REFERENCES
[1] Y. Shi, Q. Zheng, J. Liu, and Z. Han, “Directly revocable key-policy attribute-
based encryption with verifiable ciphertext delegation,” Information Sciences,
vol. 295, pp. 221–231, 2015.
[2] J. Bethencourt, A. Sahai, and B. Waters, “Ciphertext-policy attribute-based
encryption,” in 2007 IEEE Symposium on Security and Privacy (SP’07). IEEE,
2007, pp. 321–334.
[3] B. Waters, “Ciphertext-policy attribute-based encryption: An expressive,
efficient, and provably secure realization,” in International Workshop on Public
Key Cryptography. Springer, 2011, pp. 53–70.
[4] V. Goyal, A. Jain, O. Pandey, and A. Sahai, “Bounded ciphertext policy
attribute-based encryption,” in International Colloquium on Automata,
Languages, and Programming. Springer, 2008, pp. 579–591.
[5] H. Deng, Q. Wu, B. Qin, J. Domingo-Ferrer, L. Zhang, J. Liu, and W. Shi,
“Ciphertext-policy hierarchical attribute-based encryption with short
ciphertexts,” Information Sciences, vol. 275, pp. 370–384, 2014.
[6] A. Balu and K. Kuppusamy, “An expressive and provably secure ciphertext-
policy attribute-based encryption,” Information Sciences, vol. 276, pp. 354–362,
2014.