ABSTRACT
LIST OF TABLES
LIST OF FIGURES
1 INTRODUCTION
2 SYSTEM STUDY
3 SYSTEM CONFIGURATION
4 OVERVIEW OF THE SOFTWARE
7 CONCLUSION
8 APPENDICES
I. Source Code
II. Screenshots
1. INTRODUCTION
In a parallel file system, file data is distributed across multiple storage devices or nodes to
allow concurrent access by multiple tasks of a parallel application. This is typically used in
large-scale cluster computing that requires high-performance, reliable access to large
datasets: higher I/O bandwidth is achieved through concurrent access to multiple storage
devices within large compute clusters, while data is protected against loss through
mirroring with fault-tolerant striping algorithms. Examples of high-performance parallel
file systems in production use are the IBM General Parallel File System (GPFS), the Google
File System (GFS), Lustre, the Parallel Virtual File System (PVFS), and the Panasas File
System, while research projects on distributed object storage systems include Ursa Minor,
Ceph, XtreemFS, and Gfarm. These are usually required for advanced scientific or
data-intensive applications such as seismic data processing, digital animation studios,
computational fluid dynamics, and semiconductor manufacturing. In these environments,
hundreds or thousands of file system clients share data and generate a very high aggregate
I/O load on the file system, which supports terabyte- or petabyte-scale storage capacities.
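To make the striping idea concrete, here is a minimal sketch (with a hypothetical round-robin layout and stripe-unit size, not the layout of any particular file system) of how a file byte offset maps to a storage device:

```python
# Round-robin striping: a file is split into fixed-size stripe units that are
# placed cyclically across N storage devices, so large reads can proceed from
# several devices in parallel.

STRIPE_UNIT = 64 * 1024  # bytes per stripe unit (hypothetical value)

def locate(offset: int, num_devices: int) -> tuple[int, int]:
    """Return (device index, byte offset within that device) for a file offset."""
    unit = offset // STRIPE_UNIT        # which stripe unit the byte falls in
    device = unit % num_devices         # cyclic placement across devices
    local_unit = unit // num_devices    # units this device already holds
    return device, local_unit * STRIPE_UNIT + offset % STRIPE_UNIT

# Byte 0 lives on device 0; the next stripe unit starts on device 1, and so on,
# wrapping back to device 0 after num_devices units.
```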
1.1 Problem Description
Prior work discussed several applications such as encrypted file systems and content
protection, and left open the problem of building a public-key broadcast encryption system
with the same parameters that is secure against adaptive adversaries. One can design and
prove the security of key-exchange protocols in an idealized model where the communication
links are perfectly authenticated, and then translate them using general tools to obtain
security in the realistic setting of adversary-controlled links. First, MapReduce fits
parallel and distributed systems well because it hides the details of parallelization, fault
tolerance, locality optimization, and load balancing. Second, a large variety of problems
are easily expressible as MapReduce computations; for example, MapReduce is used to generate
data for Google's production web search service, for sorting, data mining, machine learning,
and many other systems. Third, implementations of MapReduce scale to clusters comprising
thousands of machines.
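The division of labor described above can be sketched with a toy in-process MapReduce (hypothetical function names; a real framework adds distribution and fault tolerance, which is exactly what it hides from the programmer):

```python
# Toy MapReduce: the user supplies only map_fn() and reduce_fn(); partitioning
# and grouping (and, in a real system, fault tolerance) belong to the framework.
from collections import defaultdict

def map_fn(document: str):
    for word in document.split():
        yield word, 1                      # emit one count per occurrence

def reduce_fn(word, counts):
    return word, sum(counts)               # sum partial counts per word

def map_reduce(inputs, mapper, reducer):
    groups = defaultdict(list)
    for item in inputs:                    # map phase
        for key, value in mapper(item):
            groups[key].append(value)      # shuffle: group values by key
    return dict(reducer(k, v) for k, v in groups.items())  # reduce phase

counts = map_reduce(["a b a", "b c"], map_fn, reduce_fn)
# counts == {"a": 2, "b": 2, "c": 1}
```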
2. SYSTEM STUDY
2.1 Existing System
Our work focuses on the current Internet standard for such file systems, i.e., parallel Network
File System (pNFS), which makes use of Kerberos to establish parallel session keys between
clients and storage devices. Our review of the existing Kerberos-based protocol shows that it
has a number of limitations: (i) a metadata server facilitating key exchange between the
clients and the storage devices has heavy workload that restricts the scalability of the
protocol; (ii) the protocol does not provide forward secrecy; (iii) the metadata server
itself generates all the session keys that are used between the clients and storage devices,
and this inherently leads to key escrow.
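Limitation (iii) can be illustrated with a sketch: if the two endpoints instead derive the session key from ephemeral Diffie-Hellman secrets, a server that only relays public values never learns the key. This is a simplified illustration with a toy modulus, not the actual pNFS or Kerberos exchange:

```python
# Ephemeral Diffie-Hellman avoids key escrow: the session key is derived from
# ephemeral secrets held only by the two endpoints, so a metadata server that
# merely relays the public values g^a and g^b never learns g^(ab).
# Toy parameters for illustration only -- never use a 127-bit prime in practice.
import secrets

P = 2**127 - 1   # a Mersenne prime, used here as a toy modulus
G = 3

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1       # ephemeral private key in [1, P-2]
    return x, pow(G, x, P)                 # (private, public = g^x mod p)

a_priv, a_pub = dh_keypair()               # client's ephemeral pair
b_priv, b_pub = dh_keypair()               # storage device's ephemeral pair

# Each side combines its own private key with the peer's public value.
client_key = pow(b_pub, a_priv, P)
device_key = pow(a_pub, b_priv, P)
assert client_key == device_key            # shared key, unknown to the server
```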
Disadvantages
The metadata server carries a heavy workload.
It has to refer to multiple data owners.
It does not support many-to-many communication.
Parallel access to multiple storage devices is difficult.
2.2 Proposed System
Advantages
It reduces the workload of the server.
It contains a metadata server that lets users access files easily.
It reduces the use of the key generator.
It makes the computation of the client easier.
2.3 Need for Computerization
The methods used in fingerprint authentication systems can be divided into texture-based
methods and minutiae-based methods using the Delaunay quadrangle algorithm.
In this project, techniques from the information security domain are used to increase the
security of a multi-user access system.
2.4 Data Flow Diagram
[Figure: data flow diagram. The data owner uploads details securely to the server, which
accepts or declines the request; the user then views the data.]
2.5 Class Diagram
3. SYSTEM CONFIGURATION
Front-End : ASP.NET
Back end : SQL server
Operating System : Windows 7
IDE : Visual Studio
4. OVERVIEW OF THE SOFTWARE
Modules
Data Owner
A data owner is an individual who is accountable for a data asset. This is typically an
executive role that goes to the department, team or business unit that owns a data asset. The
following are examples of responsibilities associated with the data owner role.
Server
The term database server may refer to both hardware and software used to run a
database, according to the context. As software, a database server is the back-end portion of a
database application, following the traditional client-server model. This back-end portion is
sometimes called the instance. It may also refer to the physical computer used to host the
database. When mentioned in this context, the database server is typically a dedicated higher-
end computer that hosts the database.
Data access is a generic term referring to a process which has both an IT-specific
meaning and other connotations involving access rights in a broader legal and/or political
sense. In the former it typically refers to software and activities related to storing, retrieving,
or acting on data housed in a database or other repository. Two fundamental types of data
access exist: sequential access and random access.
Data access crucially involves authorization to access different data repositories. Data
access can help distinguish the abilities of administrators and users. For example,
administrators may have the ability to remove, edit and add data, while general users may not
even have "read" rights if they lack access to particular information.
Data Access Restriction
If a role does not restrict access to the data of a table or field, the values of the
required fields are accessible for any record of that table. In other words, the absence of
a data access restriction is equivalent to applying the restriction "WHERE TRUE".
Software Testing
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to check
the functionality of components, subassemblies, assemblies, and/or a finished product. It is
the process of exercising software with the intent of ensuring that the Software system meets
its requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test. Each test type addresses a specific testing requirement.
Types of Tests:
The different types of testing are described below:
Unit Testing:
Unit testing involves the design of test cases that validate that the internal program
logic is functioning properly, and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. It is the testing of individual software
units of the application. It is done after the completion of an individual unit and before
integration.
This is structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform basic tests at the component level and test a specific business
process, application, and/or system configuration. Unit tests ensure that each unique path of a
business process performs accurately to the documented specifications and contains clearly
defined inputs and expected results.
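A minimal unit test in this style might look as follows (the function under test, grant_access, is hypothetical; each test exercises one path in isolation against a documented expected result):

```python
# A minimal unit test: each test method checks one unique path of the unit
# against clearly defined inputs and expected results.
# Run with: python -m unittest <module>
import unittest

def grant_access(role: str, action: str) -> bool:
    """Hypothetical unit under test: anyone may read; only admins may modify."""
    if action == "read":
        return True
    return role == "admin"

class GrantAccessTest(unittest.TestCase):
    def test_admin_can_edit(self):
        self.assertTrue(grant_access("admin", "edit"))

    def test_user_cannot_edit(self):
        self.assertFalse(grant_access("user", "edit"))

    def test_anyone_can_read(self):
        self.assertTrue(grant_access("user", "read"))
```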
Integration Testing:
Integration tests are designed to test integrated software components to determine if
they actually run as one program. Testing is event driven and is more concerned with the
basic outcome of screens or fields. Integration tests demonstrate that although the
components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically aimed
at exposing the problems that arise from the combination of components.
Functional Test:
Functional tests provide systematic demonstrations that functions tested are available
as specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/ Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identifying business
process flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
System Test:
System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing:
White box testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used to
test areas that cannot be reached from a black-box level.
Black Box Testing:
Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black-box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.
7. CONCLUSION
We proposed three authenticated key exchange protocols for the parallel Network File System
(pNFS). Our protocols offer three appealing advantages over the existing Kerberos-based
pNFS protocol. First, the metadata server executing our protocols has a much lower workload
than that of the Kerberos-based approach. Second, two of our protocols provide forward
secrecy: one is partially forward secure (with respect to multiple sessions within a time
period), while the other is fully forward secure (with respect to a single session). Third,
we have designed a protocol which not only provides forward secrecy but is also escrow-free.
8. APPENDICES
Source Code
Screenshots