
S.NO CONTENTS

ABSTRACT

LIST OF TABLES

LIST OF FIGURES

1 INTRODUCTION

1.1 Problem Description

2 SYSTEM STUDY

2.1 Existing System

2.2 Proposed System

2.3 Need For Computerization

2.4 Data Flow Diagram

2.5 Class Diagram

3 SYSTEM CONFIGURATION

3.1 Hardware Requirements

3.2 Software Requirements

4 OVERVIEW OF THE SOFTWARE

5 DESIGN AND DEVELOPMENT

5.1 Use Case Diagram

5.2 Database Design

5.3 E-R Diagram

6 IMPLEMENTATION AND TESTING

7 CONCLUSION

8 APPENDICES
I. Source Code
II. Screenshots
ABSTRACT

The problem is inspired by the proliferation of large-scale distributed file systems supporting parallel access to multiple storage devices. Our work focuses on the current Internet standard for such file systems, i.e., the parallel Network File System (pNFS), which makes use of Kerberos to establish parallel session keys between clients and storage devices. Our review of the existing Kerberos-based protocol shows that it has a number of limitations. In this paper, we propose a variety of authenticated key exchange protocols that are designed to address these issues. We show that our protocols are capable of substantially reducing the workload of the metadata server while concurrently supporting forward secrecy and escrow-freeness. All this requires only a small fraction of increased computation overhead at the client.
LIST OF TABLES

LIST OF FIGURES

1. INTRODUCTION

In a parallel file system, file data is distributed across multiple storage devices or nodes to allow concurrent access by multiple tasks of a parallel application. This is typically used in large-scale cluster computing that focuses on high-performance and reliable access to large datasets. That is, higher I/O bandwidth is achieved through concurrent access to multiple storage devices within large compute clusters, while data is protected against loss through mirroring using fault-tolerant striping algorithms. Some examples of high-performance parallel file systems that are in production use are the IBM General Parallel File System (GPFS), the Google File System (GFS), Lustre, the Parallel Virtual File System (PVFS), and the Panasas File System; there also exist research projects on distributed object storage systems such as Ursa Minor, Ceph, XtreemFS, and Gfarm. These are usually required for advanced scientific or data-intensive applications such as seismic data processing, digital animation studios, computational fluid dynamics, and semiconductor manufacturing. In these environments, hundreds or thousands of file system clients share data and generate a very high aggregate I/O load on a file system supporting petabyte- or terabyte-scale storage capacities.
1.1 Problem Description

Prior work discussed several applications such as encrypted file systems and content protection, and left as an open problem the question of building a public-key broadcast encryption system with the same parameters that is secure against adaptive adversaries. Key-exchange protocols can be designed and proven secure in an idealized model where the communication links are perfectly authenticated, and then translated, using general tools, to obtain security in the realistic setting of adversary-controlled links. MapReduce is easy to use even for programmers without experience of parallel and distributed systems, since it hides the details of parallelization, fault tolerance, locality optimization and load balancing. Second, a large variety of problems are easily expressible as MapReduce computations. For example, MapReduce is used for the generation of data for Google's production web search service, for sorting, for data mining, for machine learning, and for many other systems. Third, implementations of MapReduce have been developed that scale to large clusters comprising thousands of machines.
2. SYSTEM STUDY

2.1 Existing System

Our work focuses on the current Internet standard for such file systems, i.e., parallel Network
File System (pNFS), which makes use of Kerberos to establish parallel session keys between
clients and storage devices. Our review of the existing Kerberos-based protocol shows that it
has a number of limitations: (i) a metadata server facilitating key exchange between the clients and the storage devices has a heavy workload that restricts the scalability of the protocol; (ii) the protocol does not provide forward secrecy; (iii) the metadata server itself generates all the session keys that are used between the clients and the storage devices, and this inherently leads to key escrow.

Disadvantages
 The metadata server bears a heavy workload, which restricts scalability.
 Requests have to be referred to multiple data owners.
 It does not support many-to-many communication.
 Parallel access to multiple storage devices is difficult.
2.2 Proposed System

We propose a variety of authenticated key exchange protocols that are designed to address the above issues. We show that our protocols are capable of substantially reducing the workload of the metadata server while concurrently supporting forward secrecy and escrow-freeness. All this requires only a small fraction of increased computation overhead at the client.

Advantages
 It reduces the workload of the metadata server.
 The metadata server makes it easy for users to access files.
 It reduces reliance on the key generator.
 It requires only a small increase in computation at the client.
2.3 Need for Computerization

 The methods used in fingerprint authentication systems can be divided into texture-based methods and minutiae-based methods using the Delaunay quadrangle algorithm.
 In this project, we use information security techniques to increase the security of a multi-user access system.
2.4 Data Flow Diagram

[Data flow diagram: the data owner uploads data details securely to the server; the server accepts or declines the upload; the user views the data.]
2.5 Class Diagram
3. SYSTEM CONFIGURATION

3.1 Hardware Requirements

Hard disk : 160 GB
RAM : 4 GB
Processor : Core i3
Monitor : 15'' Color Monitor

3.2 Software Requirements

Front-End : ASP.NET
Back-End : SQL Server
Operating System : Windows 7
IDE : Visual Studio
4. OVERVIEW OF THE SOFTWARE

Microsoft .Net Framework


The .Net framework is a software development platform developed by Microsoft. The framework was meant to create applications that would run on the Windows platform. The first version of the .Net framework was released in the year 2002.
That version was called .Net framework 1.0. The .Net framework has come a long way since then, and the current version is 4.7.1.
The .Net framework can be used to create both form-based and web-based applications. Web services can also be developed using the .Net framework.
The framework also supports various programming languages, such as Visual Basic and C#, so developers can choose the language best suited to the required application. In this chapter, you will learn some basics of the .Net framework.
.Net Framework Architecture
The basic architecture of the .Net framework is built from the components described below.
[Figure: .Net framework architecture diagram]
.NET Components
The architecture of the .Net framework is based on the following key components:
1. Common Language Runtime
The Common Language Runtime, or CLR, is the platform on which .Net programs are executed.
The CLR has the following key features:
Exception Handling - Exceptions are errors that occur when the application is executed. Examples of exceptions are:
An application tries to open a file on the local machine, but the file is not present.
An application tries to fetch some records from a database, but the connection to the database is not valid.
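As an illustration, a minimal C# sketch of handling the first case; the file path is hypothetical:

using System;
using System.IO;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            // Try to open a file on the local machine that may not be present.
            string text = File.ReadAllText(@"C:\data\report.txt"); // hypothetical path
            Console.WriteLine(text);
        }
        catch (FileNotFoundException ex)
        {
            // This exception is raised when the requested file does not exist.
            Console.WriteLine("File not found: " + ex.FileName);
        }
    }
}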
Garbage Collection - Garbage collection is the process of removing unwanted resources when they are no longer required.
Examples of garbage collection are:
A file handle that is no longer required: if the application has finished all operations on a file, then the file handle may no longer be required.
A database connection that is no longer required: if the application has finished all operations on a database, then the database connection may no longer be required.
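For example, the C# using statement releases a file handle deterministically as soon as work on the file is finished, while the garbage collector later reclaims the object's memory. A small sketch, assuming a hypothetical file path:

using System;
using System.IO;

class GarbageCollectionDemo
{
    static void Main()
    {
        // The using block disposes the underlying file handle when it ends;
        // the StreamReader object's memory is reclaimed later by the garbage collector.
        using (var reader = new StreamReader(@"C:\data\report.txt")) // hypothetical path
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}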
Working with various programming languages –
As noted in an earlier section, a developer can develop an application in a variety of .Net programming languages.
Language - The first level is the programming language itself; the most common ones are VB.Net and C#.
Compiler – There is a separate compiler for each programming language. So underlying the VB.Net language there is a VB.Net compiler, and similarly, for C# you will have another compiler.
Common Language Runtime – This is the final layer in .Net, which is used to run a .Net program developed in any programming language. Each language compiler sends the compiled program to the CLR layer, which runs the .Net application.
2. Class Library
The .NET Framework includes a set of standard class libraries. A class library is a collection of methods and functions that can be used for a common purpose.
For example, there is a class library with methods to handle all file-level operations: there is a method that can be used to read text from a file, and similarly there is a method to write text to a file.
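A minimal C# sketch using those class-library methods; the file name is hypothetical:

using System;
using System.IO;

class ClassLibraryDemo
{
    static void Main()
    {
        // System.IO.File provides ready-made methods for file-level operations.
        File.WriteAllText("notes.txt", "Hello from the .Net class library"); // write text to a file
        string contents = File.ReadAllText("notes.txt");                    // read the text back
        Console.WriteLine(contents);
    }
}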
Most of the methods are split into either the System.* or Microsoft.* namespaces. (The
asterisk * just means a reference to all of the methods that fall under the System or Microsoft
namespace)
A namespace is a logical separation of methods. We will learn about these namespaces in more detail in the subsequent chapters.
3. Languages
The types of applications that can be built in the .Net framework are classified broadly into the following categories.
WinForms – This is used for developing forms-based applications, which would run on an end-user machine. Notepad is an example of a client-based application.
ASP.Net – This is used for developing web-based applications, which are made to run on any browser, such as Internet Explorer, Chrome or Firefox.
The web application would be processed on a server, which would have Internet Information Services installed.
Internet Information Services, or IIS, is a Microsoft component which is used to execute an ASP.Net application.
The result of the execution is then sent to the client machines, and the output is shown in the browser.
ADO.Net – This technology is used to develop applications that interact with databases such as Oracle or Microsoft SQL Server.
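A minimal ADO.Net sketch that queries SQL Server; the connection string and the Users table are hypothetical:

using System;
using System.Data.SqlClient;

class AdoNetDemo
{
    static void Main()
    {
        // Hypothetical connection string and table.
        string connStr = "Server=.;Database=ProjectDb;Integrated Security=true;";
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Users", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Columns are read by ordinal position.
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}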
Microsoft always ensures that .Net frameworks are in compliance with all the supported
Windows operating systems.
.Net Framework Design Principle
The following design principles of the .Net framework is what makes it very relevant to
create .Net based applications.
Interoperability - The .Net framework provides a lot of backward compatibility. Suppose you had an application built on an older version of the .Net framework, say 2.0, and you ran the same application on a machine which had a higher version of the .Net framework, say 3.5. The application would still work, because with every release Microsoft ensures that older framework versions work well with the latest version.
Portability - Applications built on the .Net framework can be made to work on any Windows platform. In recent times, Microsoft also envisions making Microsoft products work on other platforms, such as iOS and Linux.
Security - The .NET Framework has a good security mechanism. The inbuilt security mechanism helps in both validation and verification of applications. Every application can explicitly define its security mechanism, which is used to grant the user access to the code or to the running program.
Memory management - The Common Language Runtime does all the work of memory management. The .Net framework has the capability to see those resources which are not used by a running program, and it releases those resources accordingly. This is done via a program called the "Garbage Collector", which runs as part of the .Net framework.
The garbage collector runs at regular intervals, keeps checking which system resources are not utilized, and frees them accordingly.
Simplified deployment - The .Net framework also has tools which can be used to package applications built on the .Net framework. These packages can then be distributed to client machines, where they automatically install the application.
Summary
.Net is a software development platform developed by Microsoft. It was designed to build applications which could run on the Windows platform.
The .Net framework can be used to develop forms-based applications, web-based applications, and web services.
Developers can choose from a variety of programming languages available on the .Net platform. The most common ones are VB.Net and C#.
SQL Server
Like other RDBMS software, Microsoft SQL Server is built on top of SQL, a standardized
programming language that database administrators (DBAs) and other IT professionals use to
manage databases and query the data they contain. SQL Server is tied to Transact-SQL (T-
SQL), an implementation of SQL from Microsoft that adds a set of proprietary programming
extensions to the standard language.
The original SQL Server code was developed in the 1980s by the former Sybase Inc., which
is now owned by SAP. Sybase initially built the software to run on Unix systems and
minicomputer platforms. Sybase, Microsoft and Ashton-Tate Corp., then the leading vendor of PC databases, teamed up to produce the first version of what became Microsoft SQL Server, designed for the OS/2 operating system and released in 1989.
Ashton-Tate stepped away after that, but Microsoft and Sybase continued their partnership
until 1994, when Microsoft took over all development and marketing of SQL Server for its
own operating systems. The year before, with the Sybase relationship starting to unravel,
Microsoft had also made the software available on the newly released Windows NT after
modifying the 16-bit OS/2 code base to create a 32-bit implementation with added features; it
focused on the Windows code going forward. In 1996, Sybase renamed its version Adaptive
Server Enterprise, leaving the SQL Server name to Microsoft.
Versions of SQL Server
Between 1995 and 2016, Microsoft released 10 versions of SQL Server. Early versions were
aimed primarily at departmental and workgroup applications, but Microsoft expanded SQL
Server's capabilities in subsequent ones, turning it into an enterprise-class relational DBMS
that could compete with Oracle Database, DB2 and other rival platforms for high-end
database uses. Over the years, Microsoft has also incorporated various data management and
data analytics tools into SQL Server, as well as functionality to support new technologies that
emerged, including the web, cloud computing and mobile devices.
Microsoft SQL Server 2016, which became generally available in June 2016, was developed
as part of a "mobile first, cloud first" technology strategy adopted by Microsoft two years
earlier. Among other things, SQL Server 2016 added new features for performance tuning,
real-time operational analytics, and data visualization and reporting on mobile devices, plus
hybrid cloud support that lets DBAs run databases on a combination of on-premises systems
and public cloud services to reduce IT costs. For example, a SQL Server Stretch Database
technology moves infrequently accessed data from on-premises storage devices to the
Microsoft Azure cloud, while keeping the data available for querying, if needed.
Key components in Microsoft SQL Server
SQL Server 2016 also increased support for big data analytics and other advanced analytics
applications through SQL Server R Services, which enables the DBMS to run analytics
applications written in the open source R programming language, and PolyBase, a
technology that lets SQL Server users access data stored in Hadoop clusters or Azure blob
storage for analysis. In addition, SQL Server 2016 was the first version of the DBMS to run
exclusively on 64-bit servers based on x64 microprocessors. And it added the ability to run
SQL Server in Docker containers, a virtualization technology that isolates applications from
each other on a shared operating system.
Prior versions included SQL Server 2005, SQL Server 2008 and SQL Server 2008 R2, which
was considered a major release despite the follow-up sound of its name. Next to come were
SQL Server 2012 and SQL Server 2014. SQL Server 2012 offered new features, such as
columnstore indexes, which can be used to store data in a column-based format for data
warehousing and analytics applications, and AlwaysOn Availability Groups, a high
availability and disaster recovery technology. (Microsoft changed the spelling of the latter's
name to Always On when it released SQL Server 2016.)
SQL Server 2014 added In-Memory OLTP, which lets users run online transaction
processing (OLTP) applications against data stored in memory-optimized tables instead of
standard disk-based ones. Another new feature in SQL Server 2014 was the buffer pool extension, which integrates SQL Server's buffer pool memory cache with a solid-state drive, another feature designed to boost I/O throughput by offloading data from conventional hard disks.
Microsoft SQL Server ran exclusively on Windows for more than 20 years. But, in 2016,
Microsoft said it planned to also make the DBMS available on Linux, starting with a new
version released as a community technology preview that November and initially dubbed
SQL Server vNext; later, the update was formally named SQL Server 2017, and it became
generally available in October of that year.
The support for running SQL Server on Linux moved the database platform onto an open
source operating system commonly found in enterprises, giving Microsoft potential inroads
with customers that don't use Windows or have mixed server environments. SQL Server
2017 also expanded the Docker support added for Windows systems in the previous release
to include Linux-based containers.
Another notable feature in SQL Server 2017 is support for the Python programming
language, an open source language that is widely used in analytics applications. With its
addition, SQL Server R Services was renamed Machine Learning Services (In-Database) and
expanded to run both R and Python applications. Initially, the machine learning toolkit and a
variety of other features are only available in the Windows version of the database software,
with a more limited feature set supported on Linux.
Inside SQL Server's architecture
Like other RDBMS technologies, SQL Server is primarily built around a row-based table
structure that connects related data elements in different tables to one another, avoiding the
need to redundantly store data in multiple places within a database. The relational model also
provides referential integrity and other integrity constraints to maintain data accuracy; those
checks are part of a broader adherence to the principles of atomicity, consistency, isolation
and durability -- collectively known as the ACID properties and designed to guarantee that
database transactions are processed reliably.
The core component of Microsoft SQL Server is the SQL Server Database Engine, which
controls data storage, processing and security. It includes a relational engine that processes
commands and queries, and a storage engine that manages database files, tables, pages,
indexes, data buffers and transactions. Stored procedures, triggers, views and other database
objects are also created and executed by the Database Engine.
Sitting beneath the Database Engine is the SQL Server Operating System, or SQLOS; it
handles lower-level functions, such as memory and I/O management, job scheduling and
locking of data to avoid conflicting updates. A network interface layer sits above the
Database Engine and uses Microsoft's Tabular Data Stream protocol to facilitate request and
response interactions with database servers. And at the user level, SQL Server DBAs and
developers write T-SQL statements to build and modify database structures, manipulate data,
implement security protections and back up databases, among other tasks.
SQL Server services, tools and editions
Microsoft also bundles a variety of data management, business intelligence (BI) and analytics
tools with SQL Server. In addition to the R Services and now Machine Learning Services
technology that first appeared in SQL Server 2016, the data analysis offerings include SQL
Server Analysis Services, an analytical engine that processes data for use in BI and data
visualization applications, and SQL Server Reporting Services, which supports the creation
and delivery of BI reports.
On the data management side, Microsoft SQL Server includes SQL Server Integration
Services, SQL Server Data Quality Services and SQL Server Master Data Services. Also
bundled with the DBMS are two sets of tools for DBAs and developers: SQL Server Data
Tools, for use in developing databases, and SQL Server Management Studio, for use in
deploying, monitoring and managing databases.
Microsoft offers SQL Server in four primary editions that provide different levels of the
bundled services. Two are available free of charge: a full-featured Developer edition for use
in database development and testing, and an Express edition that can be used to run small
databases with up to 10 GB of disk storage capacity. For larger applications, Microsoft sells
an Enterprise edition that includes all of SQL Server's features, as well as a Standard one
with a partial feature set and limits on the number of processor cores and memory sizes that
users can configure in their database servers.
However, when SQL Server 2016 Service Pack 1 (SP1) was released in late 2016, Microsoft
made some of the features previously limited to the Enterprise edition available as part of the
Standard and Express ones. That included In-Memory OLTP, PolyBase, columnstore
indexes, and partitioning, data compression and change data capture capabilities for data
warehouses, as well as several security features. In addition, the company implemented a
consistent programming model across the different editions with SQL Server 2016 SP1,
making it easier to scale up applications from one edition to another.
Security features in SQL Server
The advanced security features supported in all editions of Microsoft SQL Server starting
with SQL Server 2016 SP1 include three technologies added to the 2016 release: Always
Encrypted, which lets users update encrypted data without having to decrypt it first; row-level
security, which enables data access to be controlled at the row level in database tables; and
dynamic data masking, which automatically hides elements of sensitive data from users
without full access privileges.
Other notable SQL Server security features include transparent data encryption, which
encrypts data files in databases, and fine-grained auditing, which collects detailed
information on database usage for reporting on regulatory compliance. Microsoft also
supports the Transport Layer Security protocol for securing communications between SQL
Server clients and database servers.
Most of those tools and the other features in Microsoft SQL Server are also supported in
Azure SQL Database, a cloud database service built on the SQL Server Database Engine.
Alternatively, users can run SQL Server directly on Azure, via a technology called SQL
Server on Azure Virtual Machines; it configures the DBMS in Windows Server virtual
machines running on Azure. The VM offering is optimized for migrating or extending on-
premises SQL Server applications to the cloud, while Azure SQL Database is designed for
use in new cloud-based applications.
In the cloud, Microsoft also offers Azure SQL Data Warehouse, a data warehousing service
based on a massively parallel processing (MPP) implementation of SQL Server. The MPP
version, originally a stand-alone product called SQL Server Parallel Data Warehouse, is also
available for on-premises uses as part of the Microsoft Analytics Platform System, which
combines it with PolyBase and other big data technologies.
5. DESIGN AND DEVELOPMENT

5.1 Use Case Diagram


5.2 Database Design
5.3 E-R Diagram
6. IMPLEMENTATION AND TESTING

Modules

Data Owner

A data owner is an individual who is accountable for a data asset. This is typically an executive role that goes to the department, team or business unit that owns the data asset. Typical responsibilities associated with the data owner role include classifying the data, approving access to it, and being accountable for its quality and security.

Server

The term database server may refer to both hardware and software used to run a
database, according to the context. As software, a database server is the back-end portion of a
database application, following the traditional client-server model. This back-end portion is
sometimes called the instance. It may also refer to the physical computer used to host the
database. When mentioned in this context, the database server is typically a dedicated higher-
end computer that hosts the database.

Accessing Data from User

Data access is a generic term referring to a process which has both an IT-specific
meaning and other connotations involving access rights in a broader legal and/or political
sense. In the former it typically refers to software and activities related to storing, retrieving,
or acting on data housed in a database or other repository. Two fundamental types of data access exist (a short sketch follows the list):

1. sequential access (as in magnetic tape, for example)

2. random access (as in indexed media)
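A C# sketch contrasting the two access types on an ordinary file; the file name and offset are hypothetical:

using System;
using System.IO;

class DataAccessDemo
{
    static void Main()
    {
        // Sequential access: read bytes strictly in order, as on magnetic tape.
        using (var fs = new FileStream("data.bin", FileMode.Open))
        {
            int b;
            while ((b = fs.ReadByte()) != -1)
            {
                // process each byte in turn
            }
        }

        // Random access: jump directly to a known offset, as in indexed media.
        using (var fs = new FileStream("data.bin", FileMode.Open))
        {
            fs.Seek(128, SeekOrigin.Begin); // hypothetical record offset
            Console.WriteLine(fs.ReadByte());
        }
    }
}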

Data access crucially involves authorization to access different data repositories. Data
access can help distinguish the abilities of administrators and users. For example,
administrators may have the ability to remove, edit and add data, while general users may not
even have "read" rights if they lack access to particular information.
Data Access Restriction

If a role does not restrict access to data from any table or field, it means that the values of the required fields in that table are accessible for any record. In other words, no data access restriction implies that the restriction "WHERE TRUE" is applied.
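A minimal C# sketch of such a restriction, assuming a hypothetical Documents table with an OwnerId column; a role with no restriction is equivalent to applying WHERE 1 = 1 (T-SQL's form of WHERE TRUE):

using System.Data.SqlClient;

class AccessRestrictionDemo
{
    // Builds a query that restricts ordinary users to their own records;
    // an administrator role gets every record, equivalent to WHERE 1 = 1.
    static SqlCommand BuildQuery(SqlConnection conn, bool isAdmin, int userId)
    {
        string sql = isAdmin
            ? "SELECT * FROM Documents"                       // no restriction
            : "SELECT * FROM Documents WHERE OwnerId = @uid"; // restricted per user
        var cmd = new SqlCommand(sql, conn);
        if (!isAdmin)
        {
            cmd.Parameters.AddWithValue("@uid", userId);
        }
        return cmd;
    }
}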

Software Testing
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

Types of Tests:
The different types of testing are given below.
Unit Testing:
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit before integration.
This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
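As an illustration, a minimal unit test written with the MSTest framework that ships with Visual Studio; the Calculator class under test is hypothetical:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical unit under test.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoPositiveNumbers_ReturnsSum()
    {
        // A clearly defined input must produce the documented, expected result.
        var calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}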
Integration Testing:
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional Test:
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/ Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
System Test:
System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing:
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.
Black Box Testing:
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
7. CONCLUSION

We proposed three authenticated key exchange protocols for the parallel Network File System (pNFS). Our protocols offer three appealing advantages over the existing Kerberos-based pNFS protocol. First, the metadata server executing our protocols has a much lower workload than that of the Kerberos-based approach. Second, two of our protocols provide forward secrecy: one is partially forward secure (with respect to multiple sessions within a time period), while the other is fully forward secure (with respect to a session). Third, we have designed a protocol which not only provides forward secrecy, but is also escrow-free.
8. APPENDICES

Source Code

Screenshots
