
ABSTRACT

The public-key digital certificate has been widely used in public-key infrastructure (PKI)
to provide user public-key authentication. However, the public-key digital certificate itself
cannot be used as a security factor to authenticate a user. In this paper, we propose the concept
of the generalized digital certificate (GDC), which can be used to provide user authentication and
key agreement.
A GDC contains a user's public information, such as the information of a user's digital
driver's license or digital birth certificate, and a digital signature of that
public information signed by a trusted certificate authority (CA). However, the GDC does not
contain any user public key. Since the user does not have a private and public key pair,
key management with GDCs is much simpler than with public-key digital certificates.
The digital signature of the GDC is used as a secret token of each user and is never
revealed to any verifier. Instead, the owner proves to the verifier that he has knowledge
of the signature by responding to the verifier's challenge. Based on this concept, we propose
both discrete logarithm (DL)-based and integer factoring (IF)-based protocols that
achieve user authentication and secret key establishment.

TABLE OF CONTENTS

1. INTRODUCTION
   1.1 Overview
   1.2 Existing System
   1.3 Proposed System
2. MODULES
3. SYSTEM ANALYSIS
   3.1 Software and Hardware Requirements
   3.2 About the Software
   3.3 Feasibility Study
   3.4 Functional and Non-functional Requirements
4. SYSTEM DESIGN
5. SAMPLE CODE
6. OUTPUT SCREENS
7. SYSTEM TESTING
8. CONCLUSION
9. FUTURE ENHANCEMENTS
10. BIBLIOGRAPHY

1. INTRODUCTION
1.1 Overview:
We propose an innovative approach that enables a user to be authenticated,
and a shared secret session key to be established with his communication partner, using any
general form of digital certificate, such as a digital driver's license, a digital birth
certificate, or a digital ID. We call this kind of digital certificate a generalized digital
certificate (GDC).
A GDC contains a user's public information and a digital signature of this
public information signed by a trusted CA. However, in a GDC, the public information does
not contain any user public key. Since the user does not have a private and public key pair,
this type of digital certificate is much easier to manage than X.509 public-key digital
certificates. The digital signature of the GDC is used as a secret token of each user.
The owner of a GDC never reveals the signature of the GDC to a verifier in
plaintext. Instead, the owner computes a response to the verifier's challenge to prove that
he has knowledge of the digital signature. Thus, owning a GDC can provide user
authentication in the digital world. In addition, a secret session key can be established
between the verifier and the certificate owner during this interaction.
There are three entities in a digital certificate application:
a) Certificate Authority (CA): the person or organization that digitally
signs a statement with its private key. In PKI applications, the X.509 public-key digital
certificate contains a statement, including the user's public key, and a digital signature of
the statement. The difference between the GDC and the existing public-key digital
certificate is that in a GDC, the public information does not contain any user public key.
b) Owner of a GDC: the person who receives the GDC from a
trusted CA over a secure channel. The owner needs to compute a valid answer in
response to the verifier's challenge in order to be authenticated and to establish a
secret session key.
c) Verifier: the person who challenges the owner of a
GDC and validates the answer using the owner's public information and the CA's public key.
In most paper-world user identification applications, a trusted authority is
responsible for issuing an identification card with user information, such as the user's name and a
personal photo, to each user. Each user can be successfully identified if the user
owns a legitimate paper certificate and matches the photo on the card. Built-in
tamper-resistant technology makes identification cards very difficult to forge.
Therefore, owning the paper certificate is the security factor in the authentication process. In this
paper, our goal is to propose a similar solution for electronic-world applications. We call it
the generalized digital certificate (GDC).
A GDC contains public information of the user and a digital signature of the
public information signed by a trusted certificate authority. The digital signature is never
revealed to the verifier. Therefore, the digital signature of a GDC becomes a security
factor that can be used for user authentication.
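As a rough illustration of the issuance step (not the paper's exact signature algorithm), the following sketch uses the standard java.security API with an ordinary DL-based signature (DSA); the field names inside the public information are invented for this example:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch of GDC issuance: the CA signs the owner's public information.
// The signature itself becomes the owner's secret token.
public class GdcIssuance {
    public static void main(String[] args) throws Exception {
        // CA key pair; in practice the CA's public key is published.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(2048);
        KeyPair ca = kpg.generateKeyPair();

        // Public information of the certificate owner (no public key inside).
        byte[] publicInfo = "name=Alice;dob=1990-01-01;id=DL-12345"
                .getBytes(StandardCharsets.UTF_8);

        // CA signs the public information; the signature is handed to the
        // owner over a secure channel and never revealed in plaintext.
        Signature signer = Signature.getInstance("SHA256withDSA");
        signer.initSign(ca.getPrivate());
        signer.update(publicInfo);
        byte[] gdcSignature = signer.sign();

        // Anyone holding the CA public key could verify the pair
        // (publicInfo, gdcSignature) -- which is exactly why the owner
        // must prove knowledge of the signature without disclosing it.
        Signature verifier = Signature.getInstance("SHA256withDSA");
        verifier.initVerify(ca.getPublic());
        verifier.update(publicInfo);
        System.out.println("signature valid: " + verifier.verify(gdcSignature));
    }
}
```

The final verification is only shown to make the sketch self-checking; in the actual scheme the owner never hands the signature to a verifier.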

1.2 Existing System


The best-known digital certificate is the X.509 public-key digital certificate [1]. Its
statement generally contains the user's public key as well as some other information. The
signer of the digital signature is normally a trusted certificate authority (CA). The X.509
public-key digital certificate has been widely used in public-key infrastructure (PKI) to
provide authentication of the user's public key contained in the certificate. The user is
authenticated if he is able to prove that he has knowledge of the private key
corresponding to the public key specified in the X.509 public-key digital certificate.
However, the public-key digital certificate itself cannot be used to authenticate a user, since
a public-key digital certificate contains only public information and can easily be recorded
and played back once it has been revealed to a verifier.
A traditional digital signature provides authentication of a given message to
the receiver. However, this approach can sometimes violate the signer's privacy: a
malicious receiver can reveal the sender's digital signature to any third party without the
sender's consent, after which anyone with access to the signer's public key can validate the
digital signature. In 1989, Chaum and van Antwerpen [5] introduced the notion of the
undeniable signature, which gives the signer complete control over his/her
signature. The verification of an undeniable signature requires the participation of the message
signer, which prevents undesirable verifiers from validating the
signature. The real problem of the undeniable signature is that the signer needs to
authenticate the verifier before helping the verifier validate the undeniable signature.
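The conventional X.509-style proof of possession described above can be sketched as a challenge-response with a standard signature API; the key size, algorithm names, and nonce length are illustrative, not mandated by the text:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

// Sketch of the existing approach: the user proves knowledge of the private
// key matching the public key certified in an X.509 certificate by signing
// a fresh challenge from the verifier.
public class PkiChallengeResponse {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair user = kpg.generateKeyPair(); // public half certified by the CA

        // The verifier sends a random nonce so recorded responses cannot
        // simply be replayed later.
        byte[] nonce = new byte[32];
        new SecureRandom().nextBytes(nonce);

        // The user signs the nonce with the private key...
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(user.getPrivate());
        s.update(nonce);
        byte[] response = s.sign();

        // ...and the verifier checks it against the certified public key.
        Signature v = Signature.getInstance("SHA256withRSA");
        v.initVerify(user.getPublic());
        v.update(nonce);
        System.out.println("user authenticated: " + v.verify(response));
    }
}
```

Note that this existing mechanism requires the user to hold a private/public key pair, which is precisely the key-management burden the GDC approach avoids.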

DISADVANTAGES IN EXISTING SYSTEM

The X.509 public-key digital certificate contains only public information; once it has
been revealed to a verifier, it can be recorded and played back, so the certificate itself
cannot be used to authenticate the user.

A traditional digital signature can be revealed by a malicious receiver to any third party
without the signer's consent, violating the signer's privacy.

An undeniable signature requires the signer's participation in every verification, and the
signer must first authenticate the verifier.

1.3 Proposed system


Our proposed scheme is closely related to ID-based cryptography. In an
ID-based cryptographic algorithm, each user needs to register at a private key generator
(PKG) and identify himself before joining the network. Once a user is accepted, the PKG
generates a private key for the user. The user's identity (e.g. the user's name or email
address) becomes the corresponding public key.

In this way, in order to verify a digital signature of a message or to send an
encrypted message to a receiver, a user only needs to know the identity of his
communication partner and the public key of the PKG, which is extremely useful in cases
like wireless communication where pre-distribution of authenticated public keys is
infeasible. However, in an ID-based cryptographic algorithm, it is assumed that each user
already knows the identity of his communication partner; based on this assumption, there
is no need, nor any feasible way, to authenticate the identity. This is the main advantage
of ID-based cryptography. Due to this assumption, however, ID-based cryptography is limited
to applications in which the communicating entities know each other prior to communication.
In our proposed GDC scheme, by contrast, the user does not need to know any
information about his/her communication partner. The public information of a GDC, such as
the user's identity, can be transmitted to and verified by each communicating entity.
Furthermore, this information is used for mutual authentication. In other words, our proposed schemes
support general PKI applications, such as Internet e-commerce, in which the communicating
entities do not need to know each other prior to the communication. Our proposed solution
is based on the combination of a conventional digital signature scheme and the well-known
(generalized) Diffie-Hellman assumption.
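A minimal sketch of this Diffie-Hellman-based idea follows; it is not the paper's full protocol. Here the secret exponent s stands in for the (hashed) GDC signature, and y = g^s mod p is assumed to be derivable by the verifier from the certificate's public verification equation; the parameter sizes are toy values for illustration:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch of DL-based authentication plus key establishment under the
// Diffie-Hellman assumption: only someone who knows s can compute the
// same key as the verifier, so key confirmation authenticates the owner.
public class GdcKeyAgreement {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd); // toy-sized prime
        BigInteger g = BigInteger.valueOf(2);

        // Owner's secret token s (playing the role of the GDC signature)
        // and the public value y the verifier can compute on its own.
        BigInteger s = new BigInteger(p.bitLength() - 2, rnd);
        BigInteger y = g.modPow(s, p);

        // Verifier's challenge: a fresh Diffie-Hellman value c = g^v mod p.
        BigInteger v = new BigInteger(p.bitLength() - 2, rnd);
        BigInteger c = g.modPow(v, p);

        // Owner's response: k = c^s = g^(vs) mod p.
        BigInteger ownerKey = c.modPow(s, p);
        // Verifier independently computes y^v = g^(sv) mod p.
        BigInteger verifierKey = y.modPow(v, p);

        System.out.println("keys match: " + ownerKey.equals(verifierKey));
    }
}
```

Both sides derive g^(vs) mod p, so a successful key confirmation simultaneously authenticates the owner and yields a shared session key, without the signature ever leaving the owner's hands.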

SOFTWARE REQUIREMENT SPECIFICATION

Introduction: A Software Requirements Specification (SRS) is a complete description of the
behaviour of the system to be developed. It includes a set of use cases that describe all the
interactions the users will have with the software. Use cases are also known as functional
requirements. In addition to use cases, the SRS also contains non-functional (or
supplementary) requirements: requirements which impose constraints on the design or
implementation (such as performance engineering requirements, quality standards, or design
constraints).

Functional Requirements: In software engineering, a functional requirement defines a
function of a software system or its component. A function is described as a set of inputs,
the behaviour, and outputs. Functional requirements may be calculations, technical details,
data manipulation and processing, and other specific functionality that define what a system
is supposed to accomplish. Behavioural requirements describing all the cases where the
system uses the functional requirements are captured in use cases. Functional requirements
are supported by non-functional requirements (also known as quality requirements), which
impose constraints on the design or implementation (such as performance requirements,
security, or reliability). How a system implements functional requirements is detailed in the
system design. In some cases a requirements analyst generates use cases after gathering and
validating a set of functional requirements. Each use case illustrates behavioural scenarios
through one or more functional requirements. Often, though, an analyst will begin by
eliciting a set of use cases, from which the analyst can derive the functional requirements
that must be implemented to allow a user to perform each use case.

Non-Functional Requirements: In systems engineering and requirements engineering, a
non-functional requirement is a requirement that specifies criteria that can be used to judge
the operation of a system, rather than specific behaviours. This should be contrasted with
functional requirements that define specific behaviour or functions. In general, functional
requirements define what a system is supposed to do, whereas non-functional requirements
define how a system is supposed to be. Non-functional requirements are often called the
qualities of a system; other terms for them are "constraints", "quality attributes", "quality
goals", "quality of service requirements", and "non-behavioural requirements". Qualities,
that is, non-functional requirements, can be divided into two main categories: 1. Execution
qualities, such as security and usability, which are observable at run time. 2. Evolution
qualities, such as testability, maintainability, extensibility and scalability, which are
embodied in the static structure of the software system.

SYSTEM ANALYSIS

Introduction: To be used efficiently, all computer software needs certain hardware
components or other software resources to be present on a computer. These prerequisites are
known as (computer) system requirements and are often used as a guideline rather than an
absolute rule. Most software defines two sets of system requirements: minimum and
recommended. With increasing demand for higher processing power and resources in newer
versions of software, system requirements tend to increase over time. Industry analysts
suggest that this trend plays a bigger part in driving upgrades to existing computer systems
than technological advancements.

Hardware Requirements: The most common set of requirements defined by any operating
system or software application is the set of physical computer resources, also known as
hardware. A hardware requirements list is often accompanied by a hardware compatibility
list (HCL), especially in the case of operating systems. An HCL lists tested, compatible, and
sometimes incompatible hardware devices for a particular operating system or application.
Hardware Requirements for the Present Project:
1. Input Devices: Keyboard and Mouse
2. RAM: 512 MB
3. Processor: P4 or above
4. Storage: Less than 100 MB of HDD space.
Software Requirements: Software requirements deal with defining the software resources
and prerequisites that need to be installed on a computer to provide optimal functioning of
an application. These requirements or prerequisites are generally not included in the
software installation package and need to be installed separately before the software is
installed. Software Requirements for the Present Project:
1. Operating System: Windows XP SP2 or above
2. Run-Time: Java Virtual Machine
3. Apache tomcat server
4. Oracle Database
5. MS-Access Database


ABOUT THE SOFTWARE


JAVA TECHNOLOGY:
Initially the language was called Oak, but it was renamed Java in 1995. The primary
motivation for this language was the need for a platform-independent (i.e., architecture-neutral)
language that could be used to create software to be embedded in various consumer
electronic devices.

Java is a programmer's language.

Java is cohesive and consistent.

Except for the constraints imposed by the Internet environment, Java gives the programmer
full control.

Finally, Java is to Internet programming what C was to systems programming.

JAVA VIRTUAL MACHINE (JVM):


Beyond the language, there is the Java Virtual Machine. The Java Virtual Machine is
an important element of Java technology. The virtual machine can be embedded within a
web browser or an operating system. Once a piece of Java code is loaded onto a machine, it
is verified. As part of the loading process, a class loader is invoked and performs byte code
verification, which makes sure that the code generated by the compiler will not
corrupt the machine it is loaded on. Byte code verification takes place at the end of the
compilation process to make sure that everything is accurate and correct. Thus byte code
verification is integral to the compiling and executing of Java code.

DEVELOPMENT PROCESS OF JAVA PROGRAM:


Java programming uses the JVM to produce byte codes and execute them. The Java
source code is located in a .java file that is processed by the Java compiler, javac. The
compiler produces a .class file, which contains the byte code. The class file is then loaded,
across the network or locally on your machine, into the execution environment, the Java
Virtual Machine, which interprets and executes the byte code.
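The source-to-bytecode pipeline described above can be illustrated end to end with a minimal class; the file and class names are arbitrary:

```java
// The pipeline described above:
//   1. HelloJvm.java  --javac-->  HelloJvm.class (byte code)
//   2. HelloJvm.class --java--->  interpreted/executed by the JVM
// Compile and run with:  javac HelloJvm.java && java HelloJvm
public class HelloJvm {
    public static void main(String[] args) {
        System.out.println("Running on the JVM");
    }
}
```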

SWINGS

Swing is a large set of components ranging from the very simple, such as labels, to the
very complex, such as tables, trees, and styled text documents. Almost all Swing components
are derived from a single parent called JComponent, which extends the AWT Container class.
For this reason, Swing is best described as a layer on top of AWT rather than a
replacement for it. The following figure shows a partial JComponent hierarchy. If you compare
this with the AWT Component hierarchy of figure 1.1, you will notice that each AWT
component has a Swing equivalent that begins with the prefix J. The only exception is
the AWT Canvas class, for which JComponent, JLabel, or JPanel can be used as a
replacement. You will also notice many Swing classes that don't have AWT counterparts.
The following figure represents only a small fraction of the Swing library, but this
fraction contains the classes you will be dealing with the most. The rest of Swing exists to
provide extensive support and customization capabilities for the components these classes
define.

SOME OF THE PACKAGES USED IN THE SYSTEM ARE:

javax.swing
Contains the most basic Swing components, default component models and interfaces. (Most of the classes shown in figure 1.2 are contained in this package.)

javax.swing.border
Contains the classes and interfaces used to define specific border styles. Note that
borders can be shared by any number of Swing components, as they are not components
themselves.

javax.swing.colorchooser
Contains classes and interfaces that support the JColorChooser component, which is
used for color selection. (This package also contains some interesting undocumented private
classes.)

javax.swing.event
Contains all Swing-specific event types and listeners. Swing components also support events and listeners defined in java.awt.event and java.beans.
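The packages above can be combined in a small, self-contained example; the panel contents and authentication-themed strings are purely illustrative:

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.border.TitledBorder;

// Demo combining javax.swing components, a javax.swing.border border, and
// an event listener (the listener interface comes from java.awt.event,
// which Swing components also support).
public class SwingDemo {
    static JPanel buildPanel() {
        JPanel panel = new JPanel();
        panel.setBorder(new TitledBorder("GDC Demo")); // javax.swing.border
        JLabel status = new JLabel("Not authenticated");
        JButton login = new JButton("Authenticate");
        // Clicking the button updates the label via an ActionListener.
        login.addActionListener(e -> status.setText("Authenticated"));
        panel.add(status);
        panel.add(login);
        return panel;
    }

    public static void main(String[] args) {
        JPanel panel = buildPanel();
        // Only open a window when a display is actually available.
        if (!GraphicsEnvironment.isHeadless()) {
            JFrame frame = new JFrame("Swing demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setContentPane(panel);
            frame.pack();
            frame.setVisible(true);
        }
    }
}
```

Because JComponent subclasses are lightweight, the panel can even be built and exercised without a display, which is convenient for testing.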

FEASIBILITY STUDY

Preliminary investigation examines project feasibility, the likelihood that the system will be
useful to the organization. The main objective of the feasibility study is to test the technical,
operational and economical feasibility of adding new modules and debugging the old running
system. Any system is feasible given unlimited resources and infinite time. There are three
aspects in the feasibility study portion of the preliminary investigation:

Technical Feasibility

Operational Feasibility

Economical Feasibility

TECHNICAL FEASIBILITY:
The technical issues usually raised during the feasibility stage of the investigation
include the following:

Does the necessary technology exist to do what is suggested?

Does the proposed equipment have the technical capacity to hold the data required to
use the new system?

Will the proposed system provide adequate response to inquiries, regardless of the
number or location of users?

Can the system be upgraded if developed?

Are there technical guarantees of accuracy, reliability, ease of access and data
security?

OPERATIONAL FEASIBILITY:
Proposed projects are beneficial only if they can be turned into information systems
that will meet the organization's operating requirements. Operational feasibility
aspects of the project are to be taken as an important part of the project implementation.
Some of the important issues raised to test the operational feasibility of a project include
the following:

Is there sufficient support for the project from management and from the users?

Will the system be used and work properly if it is being developed and implemented?

Will there be any resistance from the user that will undermine the possible application
benefits?
This system is targeted to be in accordance with the above-mentioned issues.

Beforehand, the management issues and user requirements have been taken into
consideration, so there is no question of resistance from the users that could undermine the
possible application benefits.
The well-planned design would ensure the optimal utilization of the computer
resources and would help in the improvement of performance status.

ECONOMICAL FEASIBILITY:
A system that can be developed technically, and that will be used if installed, must still
be a good investment for the organization. In the economical feasibility study, the
development cost of creating the system is evaluated against the ultimate benefit derived
from the new system. Financial benefits must equal or exceed the costs.

SYSTEM DESIGN

UNIFIED MODELING LANGUAGE DIAGRAMS:


The Unified Modelling Language allows the software engineer to express an analysis
model using a modelling notation that is governed by a set of syntactic, semantic, and
pragmatic rules.
A UML system is represented using five different views that describe the system from
distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

This view represents the system from the user's perspective.

The analysis representation describes a usage scenario from the end-user's perspective.

Structural Model View

In this model the data and functionality are viewed from inside the system.

This model view models the static structures.

Behavioural Model View

This view represents the dynamic (behavioural) aspects of the system, depicting the
interactions between the various structural elements described in the user model and
structural model views.
Implementation Model View

In this view the structural and behavioural parts of the system are represented as they
are to be built.

Environmental Model View

In this view the structural and behavioural aspects of the environment in which the system is to be
implemented are represented.

Use Case Diagram


A use case is a generalization of a scenario; that is, a use case specifies all possible
scenarios for a given piece of functionality. An actor initiates a use case. A use case represents
the complete flow of events through the system, in the sense that it describes a series of
related interactions that result from its initiation.
Use cases define the scope of the system; they also help in identifying the
boundary conditions of the system. They help in clarifying the roles of the actors in their
interaction with the system.

Sequence Diagram:
A sequence diagram in Unified Modelling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is a
construct of a Message Sequence Chart. Sequence diagrams are sometimes called event
diagrams, event scenarios, and timing diagrams.
A sequence diagram shows, as parallel vertical lines (lifelines), different processes
or objects that live simultaneously, and, as horizontal arrows, the messages exchanged
between them, in the order in which they occur. This allows the specification of simple
runtime scenarios in a graphical manner.

Activity Diagram:
Activity diagrams are graphical representations of workflows of stepwise activities and
actions with support for choice, iteration and concurrency. In the Unified Modelling
Language, activity diagrams can be used to describe the business and operational
step-by-step workflows of components in a system.

An activity diagram shows the overall flow of control. Activity diagrams are constructed
from a limited repertoire of shapes, connected with arrows. The most important shape types:


* rounded rectangles represent activities;
* diamonds represent decisions;
* bars represent the start (split) or end (join) of concurrent activities;
* a black circle represents the start (initial state) of the workflow;
* an encircled black circle represents the end (final state).

State Chart Diagram:


A state diagram is a type of diagram used in computer science and related fields to
describe the behaviour of systems. State diagrams require that the system described is
composed of a finite number of states; sometimes, this is indeed the case, while at other times
this is a reasonable abstraction.

Class Diagram:
In software engineering, a class diagram in the Unified Modelling Language
(UML) is a type of static structure diagram that describes the structure of a system by
showing the system's classes, their attributes, and the relationships between the classes.
The class diagram is the main building block in object-oriented modelling.
Class diagrams are used both for general conceptual modelling of the application,
and for detailed modelling, translating the models into programming code.

The classes in a class diagram represent both the main objects and interactions in
the application and the objects to be programmed. In the class diagram these classes are
represented as boxes containing three parts:
The upper part holds the name of the class.
The middle part contains the attributes of the class.
The bottom part gives the methods or operations the class can take or undertake.

SAMPLE CODE

Testing
Overview:

Testing is the process of executing a system with the intent of finding errors. It is defined
as the process in which defects are identified, isolated, subjected to rectification, and the
product is ensured to be defect free, in order to produce a quality product and hence
customer satisfaction.

Quality is defined as conformance to the requirements.
A defect is a deviation from the requirements; a defect is nothing but a bug.
Testing can demonstrate the presence of bugs, but not their absence.
Debugging and testing are not the same thing: testing is a systematic attempt to break a
program or the application under test (AUT), while debugging is the art or method of
uncovering why the script/program did not execute properly.

Testing Methodologies:

Black box testing: the testing process in which the tester performs testing on an
application without any knowledge of its internal structure.
Usually test engineers are involved in black box testing.
White box testing: the testing process in which the tester performs testing on an
application with knowledge of its internal structure.
Usually the developers are involved in white box testing.
Gray box testing: the process in which a combination of black box and white
box techniques is used.

Types of Testing
Unit testing
Unit testing refers to tests that verify the functionality of a specific section of code,
usually at the function level. In an object-oriented environment, this is usually at the class
level, and the minimal unit tests include the constructors and destructors. These types of
tests are usually written by developers as they work on code (white-box style), to ensure
that the specific function is working as expected.
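A unit test along these white-box lines can be sketched without any framework; the toy shift cipher under test and the check helper are hypothetical stand-ins (a real project would typically use JUnit):

```java
// Minimal unit-test sketch: the unit under test and its checks live together
// here only for illustration.
public class CaesarTest {
    // Hypothetical unit under test: a toy shift cipher over lowercase letters.
    static String encrypt(String s, int shift) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray())
            out.append((char) ('a' + ((c - 'a' + shift) % 26)));
        return out.toString();
    }

    public static void main(String[] args) {
        // Each check exercises one behaviour of the unit in isolation.
        check(encrypt("abc", 1).equals("bcd"), "shift by one");
        check(encrypt("xyz", 3).equals("abc"), "wrap-around");
        check(encrypt("abc", 0).equals("abc"), "identity shift");
        System.out.println("All unit tests passed.");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }
}
```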

Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces
between components against a software design. Software components may be integrated in
an iterative way or all together ("big bang"). Normally the former is considered a better
practice since it allows interface issues to be localised more quickly and fixed.

System testing
System testing tests a completely integrated system to verify that it meets its
requirements.

System integration testing


System integration testing verifies that a system is integrated with any external or
third-party systems defined in the system requirements.

Regression testing

Regression testing focuses on finding defects after a major code change has
occurred. Specifically, it seeks to uncover software regressions, or old bugs that have
come back. Such regressions occur whenever software functionality that was previously
working correctly stops working as intended.

Acceptance testing
Acceptance testing can mean one of two things:
1. A smoke test is used as an acceptance test prior to introducing a new build to the main
testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their
own hardware, is known as user acceptance testing (UAT). Acceptance testing may be
performed as part of the hand-off process between any two phases of development.

Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or
an independent test team at the developers' site. Alpha testing is often employed for
off-the-shelf software as a form of internal acceptance testing, before the software goes to
beta testing (van Veenendaal, Erik, "Standard glossary of terms used in Software Testing").

Beta testing
Beta testing comes after alpha testing. Versions of the software, known as beta
versions, are released to a limited audience outside of the programming team. The
software is released to groups of people so that further testing can ensure the product has
few faults or bugs. Sometimes, beta versions are made available to the open public to
increase the feedback field to a maximal number of future users.

Non-functional testing

Special methods exist to test non-functional aspects of software. In contrast to
functional testing, which establishes the correct operation of the software (correct in that it
matches the expected behaviour defined in the design requirements), non-functional testing
verifies that the software functions properly even when it receives invalid or unexpected
inputs. Software fault injection, in the form of fuzzing, is an example of non-functional
testing.

Test Cases:

Test 1 - Invalid login: by providing an invalid user name and password.
  Expected output: a dialog box is displayed saying "Invalid Login, Access Denied".
  Actual output: a dialog box is displayed saying "Invalid Login, Access Denied".
  Result: Passed.

Test 2 - Valid login: by providing a valid user name and password.
  Expected output: the text screen for accepting the text is shown.
  Actual output: the text screen for accepting the text is shown.
  Result: Passed.

Test 3 - Invalid input text: by providing an invalid format of data.
  Expected output: no cipher text is generated.
  Actual output: an error is shown.
  Result: Passed.

Test 4 - Valid input text: by providing a valid format of data (.txt file only).
  Expected output: the cipher text is calculated.
  Actual output: the cipher text is shown and converted back into plain text.
  Result: Passed.

Test 5 - Invalid port number: by providing an invalid port, the server and client will not
connect to each other.
  Expected output: an error is shown.
  Actual output: an error is shown.
  Result: Passed.

Conclusion

In this paper, we have proposed a novel design that uses a GDC for user
authentication and key establishment. In our design, a GDC does not contain
the user's public key. Since the user does not have a private and public key
pair, this type of digital certificate is much easier to manage than X.509
public-key digital certificates. Our approach can be applied to both DL-based
and IF-based public-key cryptosystems.

Future Scope

Bibliography

[1] Network Working Group, "Internet X.509 public key infrastructure certificate and CRL
profile," RFC 2459, Jan. 1999.
[2] C. Tang and D. Wu, "An efficient mobile authentication scheme for wireless networks,"
IEEE Trans. Wireless Commun., vol. 7, pp. 1408-1416, Apr. 2008.
[3] G. Yang, Q. Huang, D. Wong, and X. Deng, "An efficient mobile authentication scheme
for wireless networks," IEEE Trans. Wireless Commun., vol. 9, pp. 168-174, Jan. 2010.
[4] J. Chun, J. Hwang, and D. Lee, "A note on leakage-resilient authenticated key
exchange," IEEE Trans. Wireless Commun., vol. 8, pp. 2274-2279, May 2009.
[5] D. Chaum and H. van Antwerpen, "Undeniable signatures," Advances in Cryptology -
CRYPTO '89, Lecture Notes in Computer Science, vol. 435, pp. 212-217, 1989.
[6] M. Bohj and M. Kjeldsen, "Cryptography report: undeniable signature schemes," Tech.
Rep., Dec. 15, 2006.
[7] X. Huang, Y. Mu, W. Susilo, and W. Wu, "Provably secure pairing-based convertible
undeniable signature with short signature length," Pairing-Based Cryptography - Pairing
2007, Lecture Notes in Computer Science, vol. 4575, pp. 367-391, Springer Berlin /
Heidelberg, 2007.
[8] M. Jakobsson, K. Sako, and R. Impagliazzo, "Designated verifier proofs and their
applications," Advances in Cryptology - EUROCRYPT '96, Lecture Notes in Computer
Science, vol. 1070, pp. 143-154, 1996.
[9] D. Chaum, "Private signature and proof systems," 1996.
[10] R. Rivest, A. Shamir, and Y. Tauman, "How to leak a secret," Advances in Cryptology -
ASIACRYPT 2001, Lecture Notes in Computer Science, vol. 2248, Springer Berlin /
Heidelberg, 2001.