2.5 Overview
2.6 Existing System
2.6.1 Disadvantages
2.7 Proposed System
2.7.1 Advantages
3. Project Requirements
3.1 Functional Requirements
3.2 Non-functional Requirements
4. Project Analysis
4.1 Data Flow Diagrams (DFDs)
4.2 UML
5. Project Design
5.1 Architecture
5.1.1 Introduction
5.2 Data Model
5.2.1 Introduction
5.2.2 Data Dictionary
5.3 Application Design
5.3.1 Introduction
5.3.2 Modules
5.3.3 User Interfaces
6. Project Development
6.1 Software Algorithm
6.2 Software Flow Diagrams
7. Testing
8. Conclusion
1. ABSTRACT
In the present system, the network helps a particular organization to share data by using external devices; the external devices are used to carry the data. The existing system cannot provide security, which allows an unauthorized user to access the secret files. It also cannot share a single costly printer, and many interrupts may occur within the system. In this project, networking allows a company to share files or data without using external devices to carry the data; similarly, a company can share a single costly printer. Though this is advantageous, there are numerous disadvantages: somebody can write a program that makes the costly printer misprint the data, and an unauthorized user may gain access to the network and perform illegal functions, such as deleting sensitive information like employee salary details while it is in transit. Security is the term that comes into the picture when some important or sensitive information must be protected from unauthorized access. Hence there must be some way to protect the data, and even if an attacker obtains the information, he should not be able to understand the actual information in the file, which is the main intention of the project. The project is designed to protect sensitive information while it is in transit in the network. There are many chances that an unauthorized person can gain access to the network in some way and read this sensitive information. The project uses strong, secure algorithms that guarantee the security of the information in the network.
2. ABOUT PROJECT
2.1 INTRODUCTION:
Encryption and decryption are two powerful security technologies that are widely used to protect data from loss and from inadvertent or deliberate compromise. Businesses today are focused on the importance of securing customer and business data, and increasing regulatory requirements are driving the need for security of data. Cryptography is the technique used to secure data while it is in transit, and encryption and decryption are the two techniques used under cryptography. Data cryptography is the art of securing a resource that is shared among applications. In this project, networking allows a company to share files or data without using external devices. At the same time, unauthorized users may gain access to the network and perform illegal functions, such as deleting files while they are in transit; encryption and decryption techniques are used to secure the data against this. The project uses strong, secure algorithms that guarantee the security of information in the network.
2.2 PURPOSE:
The main purpose of the project is to secure the data while it is in transit in the network. The project Data Encryption and Decryption shows how sensitive information can be protected from unauthorized users by applying cryptographic techniques, the encryption and decryption algorithms that are intended to ensure the secrecy and authenticity of messages.
2.3 OBJECTIVE:
The main objective of this article is to bring out the key approaches involved in the encryption and decryption of data and in making an application secure by designing encryption and decryption algorithms. Encryption performs the locking of particular data, while decryption is the process that unlocks it.
2.4 SCOPE:
With the rapid development of multimedia data management technologies over the internet, there is a need to be concerned about the security and privacy of information. In multimedia documents, dissemination and sharing of data is becoming a common practice for internet-based applications and enterprises. As the internet is open to all users, security, and hence the transfer of information over the internet, becomes a critical issue. At present, cryptographic techniques are used for providing security; cryptography constitutes encryption and decryption.
2.5 PROJECT PERSPECTIVE:
The project Data Encryption Techniques is enhanced with features that let us experience a real-time environment. Today's world mostly employs the latest networking techniques instead of stand-alone PCs. Encryption, or information scrambling, technology is an important security tool. Properly applied, it can provide a secure communication channel even when the underlying system and network infrastructure is not secure. This is particularly important when data passes through shared systems or network segments where multiple people may have access to the information. In these situations, sensitive data, and especially passwords, should be encrypted in order to protect it from unintended disclosure or modification.
2.6 EXISTING SYSTEM:
In the present system, the network helps a particular organization to share data by using external devices, which are used to carry the data. The existing system cannot provide security, which allows an unauthorized user to access the secret files. It also cannot share a single costly printer, and many interrupts may occur within the system.
2.6.1 DISADVANTAGES:
The existing system cannot provide security. It allows an unauthorized user to access the secret files.
2.7 PROPOSED SYSTEM:
In this system, security comes into the picture when some important or sensitive information must be protected from unauthorized access. Hence there must be some way to protect the data from such users, and even if an attacker obtains the information, he or she should not be able to understand the actual contents of the file, which is the main intention of the project.
2.7.1 ADVANTAGES:
The proposed system provides security and does not allow unauthorized users to access the secret files. As per the ISO standards, the security parameters are:
Confidentiality
Authentication
Integrity
Key distribution
Access control
CONFIDENTIALITY:
Confidentiality is the protection of transmitted data from passive attacks. It can protect the data from unauthorized disclosure.
AUTHENTICATION:
A process used to verify the integrity of transmitted data, especially a message. It is the process of proving one's identity to someone else.
INTEGRITY:
The sender and the receiver want to ensure that the content of their communication is not altered during transmission.
KEY DISTRIBUTION:
Key distribution refers to the means of delivering a key to the communicating parties without allowing others to see the key.
ACCESS CONTROL:
The ability to limit and control access to host systems and applications via communication links.
3. PROJECT REQUIREMENTS

3.1 FUNCTIONAL REQUIREMENTS:
Security is the term that comes into the picture when some important or sensitive information must be protected from unauthorized users. Today, most of the world's population uses computers to access the information they require through some form of networked system; some access it through the world-famous Internet, and some through other networks like LANs and WANs. The two types of security are:
1. System Security
2. Network Security
System Security:
Protecting the data and files within the system.
Network Security:
Protecting sensitive information while it is in transit in the network. If there is no security, there are many chances that an unauthorized person can gain access to the network in some way and read this sensitive information. For example:

Sys1 -----> (third person) -----> Sys2

In the diagram above, Sys1 and Sys2 are exchanging data. A third person taps the link: Sys1's data reaches the third person intact, but the data the third person forwards to Sys2 may be wrong. Network security aims to protect the data so that it reaches the intended system correctly.

The requirements of information security within an organization have undergone two major changes in the last several decades. Before the widespread use of data processing equipment, the security of information felt to be valuable to an organization was provided largely by physical means, such as locked cabinets for storing sensitive documents, and by administrative means, such as the personnel screening procedures used during the hiring process. Security for this sensitive information is especially needed in a shared system, such as a time-sharing system, and the need is even more acute for systems that can be accessed over a public telephone network, data network, or the Internet. The collection of tools designed to protect data and to thwart hackers is known as computer security. The second change that affected security is the introduction of distributed systems and the use of networks and communication facilities for carrying data between terminal user and computer and between computer and computer. Network security measures are needed to protect data during their transmission.
Before we proceed, consider how information can be threatened by an unauthorized person, in what we call security threats. Some of them are shown below:
Figure: security threats. (a) Normal flow from information source to information destination; (b) interruption; (c) interception; (d) modification.
Interruption: An asset of the system is destroyed or becomes unavailable. This is an attack on availability. For example, the sender believes he has successfully sent his file to the receiver, but the receiver never gets the information and may think that the sender has not yet sent the file.

Interception: An unauthorized party gains access to an asset. This is an attack on confidentiality. The unauthorized party could be a person, a program, or a computer. Examples include wiretapping to capture data in a network and the illicit copying of files or programs.

Modification: An unauthorized party not only gains access to but tampers with an asset. This is an attack on integrity. Examples include changing values in a data file, altering a program so that it performs differently, and modifying the content of messages being transmitted in a network.

Fabrication: An unauthorized party inserts counterfeit objects into the system. This is an attack on authenticity. Examples include the insertion of spurious messages in a network or the addition of records to a file.
The assets mentioned above may be one of the following:
Hardware
Software
Data
Communication lines and networks
Note: Our project is limited to the assets Software and Data. We are not concerned with Hardware or Communication lines and networks.
Attacks:
An assault on system security that derives from an intelligent threat. Security attacks can be categorized into:
1. Passive attack:
A passive attack attempts to learn or make use of information but does not change the data. It can be classified into two types:
Release of message contents
Traffic analysis
Passive attacks are very difficult to detect because they do not involve any alteration of data. The emphasis in dealing with passive attacks is on prevention rather than detection.
2. Active attack:
In active attacks, data is not only copied but also altered. Active attacks can be classified into four categories:
Masquerade
Replay
Modification of message
Denial of service
ENCRYPTION is a procedure that applies a mathematical transformation to plaintext, turning it into scrambled gobbledygook called cipher text. The computational process (an algorithm) uses a key, in essence just a big number associated with a password or pass phrase, to convert plaintext into cipher text. The resulting encrypted text is decipherable only by the holder of the corresponding key; this deciphering process is called DECRYPTION. There are many different and incompatible encryption techniques available, and not all the software we need to use implements a common approach. One very important feature of a good encryption scheme is the ability to specify a key or password and have the encryption method alter itself so that each key or password produces a different encrypted output, which requires the unique key or password to decrypt:

Ek[M] = C

The key can either be a symmetric key (encryption and decryption use the same key) or an asymmetric key pair (encryption and decryption use different keys). In the asymmetric case, the encryption key (the public key) is significantly different from the decryption key (the private key), such that attempting to derive the private key from the public key requires many hours of computing time, making it impractical at best. Decryption is the module implemented at the receiver: when the encrypted data or file reaches the receiver, it has to be decrypted so that the information can be viewed by the client/user:

Dk[Ek[M]] = M
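As a toy illustration of the symmetric case, the following sketch shows Ek[M] = C and Dk[Ek[M]] = M using a repeating-key XOR. This is not a real cipher, only a demonstration that applying the same keyed transformation twice recovers the plaintext:

```java
import java.nio.charset.StandardCharsets;

public class XorCipherDemo {
    // Toy symmetric cipher: XOR each plaintext byte with a repeating key.
    // Because XOR is its own inverse, the same routine both encrypts and decrypts.
    static byte[] xorWithKey(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] key = "secret".getBytes(StandardCharsets.UTF_8);
        byte[] m = "salary details".getBytes(StandardCharsets.UTF_8);
        byte[] c = xorWithKey(m, key);      // encrypt: Ek[M] = C
        byte[] back = xorWithKey(c, key);   // decrypt: Dk[C] = M
        System.out.println(new String(back, StandardCharsets.UTF_8));
    }
}
```

A real system would use a vetted algorithm such as DES (discussed below) rather than plain XOR, which is trivially broken.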
There are two approaches to encrypt and decrypt the data:
1. Private or Symmetric Key Encryption
2. Public or Asymmetric Key Encryption
LIMITATIONS:
The organization has to maintain a separate key for each customer, so it must maintain a large number of keys as the volume of business transactions increases. The exchange of a secret key must be kept very confidential; otherwise hackers can misuse it.
The two keys are used for encryption and decryption of the data. The public key is freely available and is used for encryption; the private key is the master key used for decryption of the encrypted data. The private key is not exposed to the outside world and is kept secret. Key generation tools are used to generate this pair of keys. Public key encryption is critical for the development of secure, distributed applications. It provides an efficient mechanism to maintain the confidentiality of data and is a proven encryption approach that provides key distribution among the communicating parties.
Figure: public key encryption. A plaintext message m is encrypted with the recipient's public key KB+ to produce the cipher text KB+(m), which travels across the network and is decrypted with the corresponding private key KB- to recover the plaintext message m.
BENEFITS:
It guarantees the confidentiality of data transferred across untrusted networks. Public key encryption is best suited for an organization that has a huge number of customers and many applications deployed to support secure transactions. Public key cryptography is a proven approach for commercial applications and e-commerce sites that use credit card transactions over the internet.
LIMITATIONS:
Public key encryption is relatively expensive and is not suited to encrypting large volumes of data. Though Web services need the data exchanged among services to be encrypted, traditional public key cryptography is not an exact fit for the Web services model: often only a set of service parameters needs to be encrypted, and encrypting whole XML documents causes parsing problems when they are decrypted. So we have to apply encryption strategies that are designed for XML.
CRYPTOGRAPHIC COMPONENTS
Figure: cryptographic components. The plaintext p passes through the encryption algorithm, travels across the network (where an intruder, Trudy, may be listening), and passes through the decryption algorithm to recover the plaintext.
The S-DES encryption algorithm takes an 8-bit block of plaintext and a 10-bit key as input and produces an 8-bit block of cipher text as output. The S-DES decryption algorithm takes an 8-bit block of cipher text and the same 10-bit key used to produce that cipher text as input and produces the original 8-bit block of plaintext.

The encryption algorithm involves five functions: an initial permutation (IP); a complex function labeled fk, which involves both permutation and substitution operations and depends on a key input; a simple permutation function that switches (SW) the two halves of the data; the function fk again; and finally a permutation function that is the inverse of the initial permutation.

The function fk takes as input not only the data passing through the encryption algorithm, but also an 8-bit key. The algorithm could have been designed to work with a 16-bit key, consisting of two 8-bit sub keys, one used for each occurrence of fk. Alternatively, a single 8-bit key could have been used, with the same key used twice in the algorithm. A compromise is to use a 10-bit key from which two 8-bit sub keys are generated, as depicted in the figure. In this case, the key is first subjected to a permutation (P10). Then a shift operation is performed. The output of the shift operation passes through a permutation function that produces an 8-bit output (P8), the first sub key (K1). The output of the shift operation also feeds into another shift and another instance of P8 to produce the second sub key (K2).

We can concisely express the encryption algorithm as a composition of functions:

(IP)-1 * fk2 * SW * fk1 * IP

This can also be written as:

Cipher text = (IP)-1(fk2(SW(fk1(IP(plain text)))))

where

K1 = P8(shift(P10(key)))
K2 = P8(shift(shift(P10(key))))

Decryption is also shown in the figure and is essentially the reverse of encryption:

Plain text = (IP)-1(fk1(SW(fk2(IP(cipher text)))))
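The key-generation half of this description can be sketched directly in code. The sketch below assumes the standard textbook P10 and P8 tables and the usual example key 1010000010; the tables printed in the original report did not survive extraction:

```java
public class SdesKeySchedule {
    // Standard S-DES permutation tables (assumed from the usual textbook definition).
    static final int[] P10 = {3, 5, 2, 7, 4, 10, 1, 9, 8, 6};
    static final int[] P8  = {6, 3, 7, 4, 8, 5, 10, 9};

    // Apply a 1-indexed permutation table to a bit string.
    static String permute(String bits, int[] table) {
        StringBuilder sb = new StringBuilder();
        for (int pos : table) sb.append(bits.charAt(pos - 1));
        return sb.toString();
    }

    // Circular left shift of each 5-bit half by n positions.
    static String shiftHalves(String bits, int n) {
        String l = bits.substring(0, 5), r = bits.substring(5);
        return l.substring(n) + l.substring(0, n) + r.substring(n) + r.substring(0, n);
    }

    public static void main(String[] args) {
        String key = "1010000010";          // 10-bit master key
        String ls1 = shiftHalves(permute(key, P10), 1);
        String k1 = permute(ls1, P8);       // K1 = P8(shift(P10(key)))
        String k2 = permute(shiftHalves(ls1, 2), P8); // K2 = P8(shift(shift(P10(key))))
        System.out.println(k1 + " " + k2);
    }
}
```

For the key 1010000010 this produces the sub keys K1 = 10100100 and K2 = 01000011.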
The key is first subjected to the permutation P10:

P10: 3 5 2 7 4 10 1 9 8 6

This table is read from left to right; each position in the table gives the identity of the input bit that produces the output bit in that position. So the first output bit is bit 3 of the input, the second output bit is bit 5 of the input, and so on. Next we apply P8, which picks out and permutes 8 of the 10 bits according to the following rule:

P8: 6 3 7 4 8 5 10 9
The result is sub key 1 (K1). We then go back to the pair of 5-bit strings produced by the two LS-1 functions and perform a circular left shift of 2 bit positions on each string.
S-DES Encryption:
Encryption involves the sequential application of five functions. We examine each of these.
This retains all 8 bits of the plaintext but mixes them up. At the end of the algorithm, the inverse permutation is used. It is easy to show by example that the second permutation is indeed the reverse of the first; that is, (IP)-1(IP(X)) = X.
The Function fk
The most complex component of S-DES is the function fk, which consists of a combination of permutation and substitution functions. The functions can be expressed as follows. Let L and R be the leftmost 4 bits and rightmost 4 bits of the 8-bit input to fk, and let F be a mapping from 4-bit strings to 4-bit strings.
Then we let

fk(L, R) = (L XOR F(R, SK), R)

where SK is a sub key and XOR is the bit-by-bit exclusive-OR function. The mapping F first applies an expansion/permutation (E/P) to its 4-bit input:

E/P: 4 1 2 3 2 3 4 1

and then uses two so-called S-boxes, S0 and S1:

S0:        S1:
1 0 3 2    0 1 2 3
3 2 1 0    2 0 1 3
0 2 1 3    3 0 1 0
3 1 3 2    2 1 0 3

(For reference, the inverse of the initial permutation used at the end of the algorithm is IP-1: 4 1 3 5 7 2 8 6.)
The first 4 bits of the result are fed into the S-box S0 to produce a 2-bit output, and the remaining 4 bits are fed into S1 to produce another 2-bit output. The S-boxes operate as follows: the first and fourth input bits are treated as a 2-bit number that specifies a row of the S-box, and the second and third input bits specify a column of the S-box. The entry in that row and column, in base 2, is the 2-bit output. Next, the 4 bits produced by S0 and S1 undergo a further permutation:

P4: 2 4 3 1
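The row/column lookup rule just described can be sketched as follows, using the standard S-DES S0 table:

```java
public class SboxLookup {
    // Standard S-DES S0 box (assumed from the usual textbook definition).
    static final int[][] S0 = {{1,0,3,2},{3,2,1,0},{0,2,1,3},{3,1,3,2}};

    // First and fourth bits select the row, second and third select the column;
    // the entry, written in base 2, is the 2-bit output.
    static String lookup(int[][] sbox, String bits4) {
        int row = (bits4.charAt(0) - '0') * 2 + (bits4.charAt(3) - '0');
        int col = (bits4.charAt(1) - '0') * 2 + (bits4.charAt(2) - '0');
        int v = sbox[row][col];
        return "" + (v >> 1) + (v & 1);
    }

    public static void main(String[] args) {
        // Input 1011: row = 11b = 3, col = 01b = 1, S0[3][1] = 1, so output "01".
        System.out.println(lookup(S0, "1011"));
    }
}
```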
S-DES Decryption:
As with any Feistel cipher, decryption uses the same algorithm as encryption, except that the application of the sub keys is reversed.
The key length in IBM's original LUCIFER algorithm was 128 bits, but that of the proposed system was only 56 bits, an enormous reduction in key size of 72 bits. Critics feared that this key length was too short to withstand brute-force attacks. The second area of concern was that the design criteria for the internal structure of DES, the S-boxes, were classified. Thus users could not be sure that the internal structure of DES was free of any hidden weak points that would enable NSA to decipher messages without benefit of the key. Subsequent events, particularly the work on differential cryptanalysis, seem to indicate that DES has a very strong internal structure. Furthermore, according to IBM participants, the only changes that were made to the proposal were changes to the S-boxes, suggested by NSA, that removed vulnerabilities identified during the course of the evaluation process.
DES ENCRYPTION:
The overall scheme for DES encryption is illustrated in the figure below. As with any encryption scheme, there are two inputs to the encryption function: the plain text to be encrypted and the key. In this case, the plain text must be 64 bits in length and the key is 56 bits in length.

Looking at the left-hand side of the figure, we can see that the processing of the plain text proceeds in three phases. First, the 64-bit plain text passes through an initial permutation (IP) that rearranges the bits to produce the permuted input. This is followed by a phase consisting of sixteen rounds of the same function, which involves both permutation and substitution functions. The output of the last (sixteenth) round consists of 64 bits that are a function of the input plain text and the key. The left and right halves of the output are swapped to produce the pre-output. Finally, the pre-output is passed through a permutation (IP-1) that is the inverse of the initial permutation function, to produce the 64-bit cipher text. With the exception of the initial and final permutations, DES has the exact structure of a Feistel cipher.

The right-hand portion of the figure shows the way in which the 56-bit key is used. Initially, the key is passed through a permutation function. Then, for each of the 16 rounds, a sub key (Ki) is produced by the combination of a left circular shift and a permutation. The permutation function is the same for each round, but a different sub key is produced because of the repeated shifts of the key bits.
Initial Permutation:
The tables below define the initial permutation and its inverse. The tables are to be interpreted as follows. The input to a table consists of 64 bits numbered from 1 to 64. The 64 entries in the permutation table contain a permutation of the numbers from 1 to 64. Each entry in the permutation table indicates the position of a numbered input bit in the output, which also consists of 64 bits. To see that these two permutation functions are indeed the inverse of each other, consider the following 64-bit input M:

M1  M2  M3  M4  M5  M6  M7  M8
M9  M10 M11 M12 M13 M14 M15 M16
M17 M18 M19 M20 M21 M22 M23 M24
M25 M26 M27 M28 M29 M30 M31 M32
M33 M34 M35 M36 M37 M38 M39 M40
M41 M42 M43 M44 M45 M46 M47 M48
M49 M50 M51 M52 M53 M54 M55 M56
M57 M58 M59 M60 M61 M62 M63 M64

where Mi is a binary digit. Then the permutation X = IP(M) is as follows:

M58 M50 M42 M34 M26 M18 M10 M2
M60 M52 M44 M36 M28 M20 M12 M4
M62 M54 M46 M38 M30 M22 M14 M6
M64 M56 M48 M40 M32 M24 M16 M8
M57 M49 M41 M33 M25 M17 M9  M1
M59 M51 M43 M35 M27 M19 M11 M3
M61 M53 M45 M37 M29 M21 M13 M5
M63 M55 M47 M39 M31 M23 M15 M7
If we then take the inverse permutation Y= IP-1 (IP (M)), it can be seen that the original ordering of the bits is restored.
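The claim that IP and IP-1 undo each other can be checked mechanically: the inverse table is obtained by recording, for each input bit, where IP sends it. The sketch below uses the 8-bit S-DES IP table for brevity; the same code works unchanged for the 64-bit DES tables:

```java
import java.util.Arrays;

public class PermutationInverse {
    // Apply a 1-indexed permutation table: output bit i is input bit table[i].
    static int[] apply(int[] bits, int[] table) {
        int[] out = new int[table.length];
        for (int i = 0; i < table.length; i++) out[i] = bits[table[i] - 1];
        return out;
    }

    // Build the inverse table: if IP sends input bit j to position i,
    // then IP-1 must send position i back to j.
    static int[] invert(int[] table) {
        int[] inv = new int[table.length];
        for (int i = 0; i < table.length; i++) inv[table[i] - 1] = i + 1;
        return inv;
    }

    public static void main(String[] args) {
        int[] ip = {2, 6, 3, 1, 4, 8, 5, 7};   // S-DES initial permutation
        int[] ipInv = invert(ip);
        System.out.println(Arrays.toString(ipInv)); // the IP-1 table
        int[] x = {1, 0, 1, 1, 0, 1, 0, 0};
        System.out.println(Arrays.equals(apply(apply(x, ip), ipInv), x));
    }
}
```

The computed inverse is exactly the IP-1 table (4 1 3 5 7 2 8 6) given earlier, and the round trip restores the original bits.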
The figure shows the internal structure of a single round. Again, begin by focusing on the left-hand side of the diagram. The left and right halves of each 64-bit intermediate value are treated as separate 32-bit quantities, labeled L (left) and R (right). The overall processing at each round can be summarized in the following formulas:

Li = Ri-1
Ri = Li-1 XOR F(Ri-1, Ki)
The round key Ki is 48 bits. The R input is 32 bits. This R input is first expanded to 48 bits by using a table that defines a permutation plus an expansion that involves duplication of 16 of the R bits. The resulting 48 bits are XORed with Ki. This 48-bit result passes through a substitution function that produces a 32-bit output, which is permuted as defined by the table P.

The role of the S-boxes in the function F is illustrated in the figure. The substitution consists of a set of eight S-boxes, each of which accepts 6 bits as input and produces 4 bits as output. The first and last bits of the input to box Si form a 2-bit binary number that selects one of four substitutions defined by the four rows in the table for Si. The middle four bits select one of the sixteen columns. The decimal value in the cell selected by the row and column is then converted to its 4-bit representation to produce the output.
Figure: Calculation of F(R, K). The 32-bit R is expanded to 48 bits by E, XORed with the 48-bit sub key K, fed through the eight S-boxes S1 to S8 to produce 32 bits, and finally permuted by P.

Each row of an S-box defines a general reversible substitution. The accompanying figure may be useful in understanding the mapping; it shows the substitution for row 0 of box S1.
The operation of the S-boxes is worth further comment. Ignore for the moment the contribution of the key (Ki). If you examine the expansion table, you see that the 32 bits of input are split into groups of 4 bits, which then become groups of 6 bits by taking the outer bits from the two adjacent groups. For example, if part of the input word is

... efgh ijkl mnop ...

this becomes

... defghi hijklm lmnopq ...

The outer two bits of each group select one of four possible substitutions. Then a 4-bit output value is substituted for the particular 4-bit input. The 32-bit output from the eight S-boxes is then permuted, so that on the next round the output from each S-box immediately affects as many others as possible.
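The grouping rule above, where each 4-bit group borrows the outer bit of each neighbour with wrap-around, fully determines the standard DES expansion table E. As a sketch, the table can be generated from the rule rather than typed in:

```java
public class DesExpansion {
    // Build the DES E table from the rule in the text: each 4-bit group
    // becomes 6 bits by borrowing the neighbouring bit on each side (wrap-around).
    static int[] buildE() {
        int[] e = new int[48];
        int k = 0;
        for (int g = 0; g < 8; g++) {
            int start = g * 4;                     // first bit of the group (0-indexed)
            e[k++] = (start - 1 + 32) % 32 + 1;    // bit borrowed from the left neighbour
            for (int i = 0; i < 4; i++) e[k++] = start + i + 1;
            e[k++] = (start + 4) % 32 + 1;         // bit borrowed from the right neighbour
        }
        return e;
    }

    public static void main(String[] args) {
        int[] e = buildE();
        // First row of the standard E table should read: 32 1 2 3 4 5
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 6; i++) sb.append(e[i]).append(i < 5 ? " " : "");
        System.out.println(sb);
    }
}
```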
KEY GENERATION:
Returning to the figure, we see that a 64-bit key is used as input to the algorithm. The bits of the key are numbered from 1 through 64; every eighth bit is ignored, as indicated by the lack of shading in the table. The key is first subjected to a permutation governed by the table labeled Permuted Choice One. The resulting 56-bit key is then treated as two 28-bit quantities, labeled C0 and D0. At each round, Ci-1 and Di-1 are separately subjected to a circular shift, or rotation, of 1 or 2 bits, as governed by the table. These shifted values serve as input to the next round. They also serve as input to Permuted Choice Two, which produces a 48-bit output that serves as input to the function F(Ri-1, Ki).
DES DECRYPTION:

As with any Feistel cipher, decryption uses the same algorithm as encryption, except that the application of the sub keys is reversed.

RSA ALGORITHM

The RSA algorithm is named after Ron Rivest, Adi Shamir and Len Adleman, who invented it in 1977. The RSA algorithm can be used for both public key encryption and digital signatures. Its security is based on the difficulty of factoring large integers.
Contents
Key generation algorithm
Encryption
Decryption
Digital signing
Signature verification
Notes on practical application
Summary of RSA
Computational efficiency and the Chinese Remainder Theorem
Theory and proof of the RSA algorithm
A very simple example
Key generation algorithm

1. Generate two large random primes, p and q, of approximately equal size such that their product n = pq is of the required bit length, e.g. 1024 bits.
2. Compute n = pq and phi = (p-1)(q-1).
3. Choose an integer e, 1 < e < phi, such that gcd(e, phi) = 1.
4. Compute the secret exponent d, 1 < d < phi, such that ed = 1 (mod phi).
5. The public key is (n, e) and the private key is (n, d).

a. n is known as the modulus.
b. e is known as the public exponent or encryption exponent.
c. d is known as the secret exponent or decryption exponent.
Encryption
Sender A does the following:
1. Obtains the recipient B's public key (n, e).
2. Represents the plaintext message as a positive integer m.
3. Computes the cipher text c = m^e mod n.
4. Sends the cipher text c to B.
Decryption
Recipient B does the following: 1. Uses his private key (n, d) to compute m = c^d mod n.
A very simple example:

1. Select the primes p = 3 and q = 11.
2. Compute n = pq = 33.
3. Compute phi = (p-1)(q-1) = 2 x 10 = 20, and choose e = 3 (gcd(3, 20) = 1).
4. Compute d such that ed = 1 (mod phi), i.e. compute d = e^-1 mod phi = 3^-1 mod 20. That is, find a value for d such that phi divides (ed - 1), i.e. find d such that 20 divides 3d - 1. Simple testing (d = 1, 2, ...) gives d = 7. Check: ed - 1 = 3.7 - 1 = 20, which is divisible by phi.
5. Public key = (n, e) = (33, 3); private key = (n, d) = (33, 7).

This is actually the smallest possible value for the modulus n for which the RSA algorithm works.

Now say we want to encrypt the message m = 7:

c = m^e mod n = 7^3 mod 33 = 343 mod 33 = 13

Hence the cipher text c = 13. To check decryption we compute:

m' = c^d mod n = 13^7 mod 33 = 7

Note that we don't have to calculate the full value of 13 to the power 7 here. We can make use of the fact that

a.b mod n = ((a mod n).(b mod n)) mod n

so we can break a potentially large number into its components and combine the results of easier, smaller calculations to compute the final value. One way of calculating m' is as follows:

m' = 13^7 mod 33 = 13^(3+3+1) mod 33
   = (13^3 . 13^3 . 13) mod 33
   = ((13^3 mod 33).(13^3 mod 33).(13 mod 33)) mod 33
   = ((2197 mod 33).(2197 mod 33).(13 mod 33)) mod 33
   = (19.19.13) mod 33 = 4693 mod 33 = 7

Now if we calculate the cipher text c for all the possible values of m (0 to 32), we get:

m:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
c:  0  1  8 27 31 26 18 13 17  3 10 11 12 19  5  9  4

m: 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
c: 29 24 28 14 21 22 23 30 16 20 15  7  2  6 25 32

Note that all 33 values of m (0 to 32) map to a unique cipher text c in the same range, in a sort of random manner. In this case we have nine values of m that encrypt to themselves (c = m); these are known as unconcealed messages. m = 0 and m = 1 will always do this for any n, no matter how large, but in practice such values are not a problem when we use large values for n. If we wanted to use this system to keep secrets, we could let A = 2, B = 3, ..., Z = 27. (We specifically avoid 0 and 1 here for the reason given above.)
Thus the plaintext message "HELLOWORLD" would be represented by the set of integers m1, m2, ...:

{9, 6, 13, 13, 16, 24, 16, 19, 13, 5}

Using our table above, we obtain the cipher text integers c1, c2, ...:

{3, 18, 19, 19, 4, 30, 4, 28, 19, 26}

Note that this example is no more secure than a simple Caesar substitution cipher, but it serves to illustrate the mechanics of RSA encryption.
Remember that calculating m^e mod n is easy, but calculating the inverse (recovering m from c without knowing d) is very difficult, at least for large values of n. However, if we can factor n into its prime factors p and q, the solution becomes easy again, even for large n. Obviously, if we can get hold of the secret exponent d, the solution is easy too.
3.2 NON-FUNCTIONAL REQUIREMENTS:

Network devices: LAN

Software requirements:
Application logic : HTML, JavaScript
Database logic : Oracle 9i
Business logic :
Web server :
JSP Architecture:
JSPs are built on top of Sun's servlet technology. A JSP is essentially an HTML page with special JSP tags embedded, and these JSP tags can contain Java code. The JSP file extension is .jsp rather than .htm or .html. The JSP engine parses the .jsp file and creates a Java servlet source file. It then compiles the source file into a class file; this is done the first time the page is requested, which is why a JSP is probably slower the first time it is accessed. Any time after this, the compiled servlet is executed and therefore returns faster.
1. The user goes to a web site made using JSP, requesting a JSP page (ending with .jsp). The web browser makes the request via the Internet.
2. The JSP request gets sent to the web server.
3. The web server recognizes that the file required is special (.jsp) and therefore passes the JSP file to the JSP servlet engine.
4. If the JSP file is being called for the first time, the JSP file is parsed; otherwise go to step 7.
5. The next step is to generate a special servlet from the JSP file. All the HTML required is converted to println statements.
6. The servlet source code is compiled into a class.
7. The servlet is instantiated, calling the init and service methods.
8. HTML from the servlet output is sent via the Internet.
9. HTML results are displayed on the user's web browser.

JSP Tags:

There are five main tags:
1. Declaration tag
2. Expression tag
3. Directive tag
4. Scriptlet tag
5. Action tag
Declaration tag ( <%! %> )

This tag allows the developer to declare variables or methods. The declaration must open with <%! and close with %>, and code placed in this tag must end in a semicolon ( ; ). Declarations do not generate output, so they are used together with JSP expressions or scriptlets. For example:

<%! private int counter = 0;
    private String getAccount(int accountNo) { return "account" + accountNo; } %>

Expression tag ( <%= %> )

This tag allows the developer to embed any Java expression and is short for out.println(). A semicolon ( ; ) does not appear at the end of the code inside the tag. For example, to show the current date and time:

Date: <%= new java.util.Date() %>

Directive tag ( <%@ directive %> )

There are three main types of directives:
1) page: processing information for this page.
2) include: files to be included.
3) taglib: a tag library to be used in this page.

Directives do not produce any visible output when the page is requested but change the way the JSP engine processes the page.
Page directive
This directive has 11 optional attributes that provide the JSP Engine with special processing information.
Include directive
Allows a JSP developer to include the contents of one file inside another. Typically, include files are used for navigation, tables, headers and footers that are common to multiple pages. Two examples of using include files:
This includes the HTML from privacy.html, found in the include directory, in the current JSP page:
<%@ include file="include/privacy.html" %>
Or include a navigation menu (a JSP file) found in the current directory:
<%@ include file="navigation.jsp" %>
Include files are discussed in more detail in the later sections of this tutorial.
Comments ( <%-- --%> )
JSP comments start with <%-- and end with --%>, and are not included in the output web page.
<%-- some comments --%>
If you need to include comments in the source of the output page, use HTML comments, which start with <!-- and end with -->.
Actions
JSP actions provide runtime instructions to the JSP container. For example, a JSP action can include a file, forward a request to another page, or create an instance of a JavaBean. The standard actions are:
jsp:include
jsp:forward
jsp:useBean
jsp:plugin
jsp:param
Implicit Objects
There are 9 implicit objects in JSP: request, response, out, session, exception, page, pageContext, application and config.
request: This variable points at the HttpServletRequest object.
<% String dest = request.getParameter("t1"); %>
response: Use this variable to access the HttpServletResponse object.
<% response.setContentType("text/html");
   response.sendRedirect("nextjsp.jsp"); %>
out: This variable represents the JspWriter class, which has the same functionality as the PrintWriter class in servlets.
<% out.println("hai"); %>
session: This variable represents the instance of the HttpSession object.
<% session.setAttribute("sessionname", "sessionvalue");
   Object obj = session.getAttribute("sessionname"); %>
JDBC Drivers:
To connect with individual databases, JDBC requires a driver for each database. The driver establishes the connection to the database and implements the protocol for transferring queries and results between the client and the database.
Type 1 Driver: JDBC-ODBC Bridge
Advantages:
Almost any database for which an ODBC driver is installed can be accessed.
Disadvantages:
Performance overhead, since the calls have to go through the JDBC-ODBC bridge to the ODBC driver, and then to the native database connectivity interface. The ODBC driver needs to be installed on the client machine. Considering the client-side software needed, this might not be suitable for applets.
Type 2 Driver: Native-API Driver
This type of driver converts JDBC calls into calls to the client API for that database.
Client -> JDBC Driver -> Vendor Client DB Library -> Database
Advantage:
Better performance than Type 1, since no JDBC-to-ODBC translation is needed.
Disadvantages:
The vendor client library needs to be installed on the client machine. Cannot be used over the Internet because of the client-side software needed. Not all databases provide a client-side library.
Type 3 Driver: Network-Protocol Driver (Middleware)
Advantages:
Since the communication between the client and the middleware server is database-independent, there is no need for the vendor database library on the client machine. Also, the client-to-middleware protocol need not be changed for a new database. The middleware server (which can be a full-fledged J2EE application server) can provide typical middleware services like caching (of connections, query results, and so on), load balancing, logging and auditing (for example, the JDBC driver features included in WebLogic). Can be used over the Internet, since no client-side software is needed.
At the client side, a single driver can handle any database (provided the middleware supports that database).
Disadvantages:
Requires database-specific coding to be done in the middle tier. The extra layer added may result in a time bottleneck, but typically this is overcome by providing the efficient middleware services described above.
Interfaces: CallableStatement, Connection, PreparedStatement, ResultSet, ResultSetMetaData, Statement.
Classes: Date, DriverManager, DriverPropertyInfo, Time, Timestamp, Types.
These interfaces are DBMS-specific and are implemented, in the driver specific to a DBMS, by the driver vendor. These interfaces and classes define many methods, and they are so interconnected and interrelated that it is difficult to discuss one without referring to the others.
CallableStatement Interface: The CallableStatement interface extends the PreparedStatement interface. It can be used to execute stored SQL procedures in a database. These procedures may take some input parameters and give some output values.
Connection Interface: The Connection interface is used to establish a connection to the database we want to access. The main use of the Connection interface is to create Statement objects.
PreparedStatement Interface: The PreparedStatement interface extends the Statement interface and is used to pass parameters into SQL statements. Since it extends the Statement interface, all the methods available in the Statement interface are available to the PreparedStatement interface; in addition, several other methods are available in PreparedStatement.
ResultSet Interface: As a result of the execution of an SQL statement, one or more tables of data may be generated. The methods in the ResultSet interface can be used to retrieve these data. The ResultSet interface defines many more methods.
The next() method reads the current row of data and positions the result set pointer at the next row. Initially, the result set pointer is positioned before the first row, so next() must be called once before reading. The next() method returns false when there are no more rows to read. The getString(int index) method returns the string value stored in the column with index number index, from the row just read using next(). Columns are indexed 1, 2, 3, ...
getString(String colname) does the same operation as its counterpart getString(int index). The only difference is that instead of referencing the column by its index number, the column is referenced by its name.
Similar getXXX methods are available for all Java data types in the ResultSet interface.
Statement Interface: The Statement interface is used to send SQL queries to the database and retrieve a set of data. SQL statements can be queries, updates, insertions, deletions, etc. The Statement interface provides a number of methods.
DriverManager Class: The DriverManager class controls the loading of driver-specific classes. Drivers are implemented as a set of .class files. Drivers are registered with the DriverManager class either at initialization of DriverManager or when an instance of a driver is created, using the DriverManager method registerDriver. When the DriverManager class is loaded, a section of static code is run, and the driver classes listed in the Java system property jdbc.drivers are loaded. This system property can be used to define a colon-separated list of driver class names.
4. PROJECT ANALYSIS
4.1 DATA FLOW DIAGRAMS (DFDs):
A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool, and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; these are known as logical data flow diagrams. Physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon or Gane-Sarson.

Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number that is used for identification purposes. The development of DFDs is done in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD. The idea behind the explosion of a process into more processes is that understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in graphical form; this led to modular design. A DFD, also known as a bubble chart, has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design, so it is the starting point of the design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
TYPES OF DATA FLOW DIAGRAMS:
Current Physical
Current Logical
New Logical
New Physical
CURRENT PHYSICAL: In a current physical DFD, process labels include the names of people or their positions, or the names of computer systems, that might provide some of the overall system processing; the labels include an identification of the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms or computer tapes.
CURRENT LOGICAL: The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of actual physical form.
NEW LOGICAL: This is exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, though, the new logical model will differ from the current logical model in having additional functions, obsolete functions removed, and inefficient flows reorganized.
NEW PHYSICAL: The new physical model represents only the physical implementation of the new system.
SALIENT FEATURES OF DFDs:
The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
The DFD does not indicate the time factor involved in any process (whether the data flow takes place daily, weekly, monthly or yearly).
The sequence of events is not brought out on the DFD.
1. LOGIN DFD:
4. Decrypt DFD:
4.2 UNIFIED MODELLING LANGUAGE (UML)
UML is a language for specifying, visualizing and constructing the artifacts of a software system, as well as for business models. The UML notation is useful for graphically depicting object-oriented analysis and object-oriented design models. Each UML diagram lets developers and customers view a software system from a different perspective and at varying degrees of abstraction. UML diagrams commonly created in visual modeling tools include:
USE CASE DIAGRAM: Use case diagrams display the relationships among actors and use cases.
CLASS DIAGRAM: Class diagrams model class structure and contents using design elements such as classes, packages and objects. They also display relationships such as containment, inheritance, associations and others.
INTERACTION DIAGRAMS
SEQUENCE DIAGRAMS: Sequence diagrams display the time sequence of the objects participating in the interaction. They consist of a vertical dimension and a horizontal dimension.
COLLABORATION DIAGRAMS:
Collaboration diagrams display the interactions organized around the objects and their links to one another. Numbers are used to show the sequence of messages.
STATE DIAGRAM: State diagrams display the sequences of states that an object of an interaction goes through during its life, in response to received stimuli, together with its responses and actions.
ACTIVITY DIAGRAM: Activity diagrams display a special kind of state diagram in which most of the states are action states and most of the transitions are triggered by the completion of the actions in the source states. This diagram focuses on flows driven by internal processing.
PHYSICAL DIAGRAMS:
COMPONENT DIAGRAM:
Component diagrams display the high-level packaged structure of the code itself. Dependencies among components are shown, including source code components, binary code components and executable components. Some components exist at compile time, at link time, or at runtime, as well as at more than one time.
DEPLOYMENT DIAGRAM:
Deployment diagrams display the configuration of run-time processing elements and the software components, processes and objects that live on them. Software component instances represent run-time manifestations of code units.
USE CASE DIAGRAMS: A use case diagram is a set of scenarios describing an interaction between a user and a system. A use case diagram displays the relationships among actors and use cases. The two main components of use case diagrams are use cases and actors.
An actor represents a user or another system that will interact with the system you are modeling. A use case is an external view of the system that represents some action the user might perform in order to complete a task. Use case diagrams are helpful in exposing requirements and planning the project.
(Use case diagrams: LOGIN, SEND FILE and VIEW FILE, each with the actors User and Server.)
5. PROJECT DESIGN
5.1 ARCHITECTURE:
Architectural design is a creative process in which we try to establish a system organization that will satisfy the functional and non-functional requirements. Because it is a creative process, the activities within the process differ radically depending on the type of system being developed, the background and experience of the system architect, and the specific requirements of the system. The architecture of a software system may be based on a particular model or style. An architectural style is a pattern of system organization, such as a client-server organization or a layered architecture. The product of the architectural design process is an architectural design document. This may include a number of graphical representations of the system along with associated descriptive text. It should describe how the system is structured into subsystems, the approach adopted, and how each subsystem is structured into modules.
5.1.1 INTRODUCTION:
Data-intensive Internet applications can be understood in terms of three different functional components:
Data Management
Application Logic
Presentation
The component that handles data management usually utilizes a DBMS for data storage, but application logic and presentation involve much more than just the DBMS itself.
SINGLE-TIER ARCHITECTURE: Initially, data-intensive applications were combined into a single tier that included the DBMS, the application logic and the user interface.
Users expect graphical interfaces, which require much more computational power than dumb terminals. Centralized computation of the graphical displays of such interfaces requires much more computational power than a single server has available, and thus single-tier architectures do not scale to thousands of users. The commoditization of the PC and the availability of cheap client computers led to the development of the two-tier architecture.
TWO-TIER ARCHITECTURE: A two-tier application generally includes a Java client that connects directly to the database through TopLink. The two-tier architecture is most common in complex user interfaces with limited deployment. The database session provides TopLink support for two-tier applications.
Although the two-tier architecture is the simplest TopLink application pattern, it is also the most restrictive, because each client application requires its own session. As a result, two-tier applications do not scale as easily as other architectures. Two-tier applications are often implemented as user interfaces that directly access the database. They can also be non-interface processing engines. In either case, the two-tier model is not as common as the three-tier model. The following are key elements of an efficient two-tier (client-server) architecture with TopLink:
Minimal dedicated connections from the client to the database
An isolated object cache
Advantages and Disadvantages: The advantage of the two-tier design is its simplicity. The TopLink database session that builds the two-tier architecture provides all the TopLink features in a single session type, thereby making the two-tier architecture simple to build and use.
The most important limitation of the two-tier architecture is that it is not scalable, because each client requires its own database session. In modern two-tier architecture, the server holds both the application and the data; the application resides on the server rather than the client, because the server will usually have more processing power and disk space than the PC. Two-tier architecture, also referred to as client-server architecture, consists of a client computer and a server computer which interact through a well-defined protocol. In the traditional client-server architecture, the client implements just the graphical user interface, and the server implements both the business logic and the data management; such clients are called thin clients.
TWO-TIER ARCHITECTURE (Thin Clients):
Other divisions are possible, such as more powerful clients that implement both the user interface and the business logic, or clients that implement the user interface and part of the business logic, with the remaining part implemented at the server level; such clients are often called thick clients.
Compared to single-tier architectures, two-tier architectures physically separate the user interface from the data management layer. To implement two-tier architectures, we can no longer use dumb terminals on the client side. Over the last ten years, a large number of client-server development tools, such as Microsoft Visual Basic and Sybase PowerBuilder, have been developed. These tools permit rapid development of client-server software, contributing to the success of the client-server model, especially the thin-client version.
CLIENT-SERVER ARCHITECTURE: Client-server is a network architecture that separates a client (often an application that uses a graphical user interface) from a server. Each instance of the client software can send requests to a server. Specific types of servers include web servers, application servers, file servers, terminal servers, and mail servers.
Characteristics of a server:
Passive (slave)
Waits for requests
Servers can be stateless or stateful. A stateless server does not keep any information between requests. A stateful server can remember information between requests. The scope of this information can be global or session-specific. An HTTP server for static HTML pages is an example of a stateless server while Apache Tomcat is an example of a stateful server. The interaction between client and server is often described using sequence diagrams. Sequence diagrams are standardized in the UML. Another type of network architecture is known as a peer-to-peer architecture because each node or instance of the program is both a "client" and a "server" and each has equivalent responsibilities. Both architectures are in wide use.
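The stateless/stateful distinction can be sketched in plain Java. The classes and names below are hypothetical illustrations only (a real HTTP server would sit in front of such handlers):

```java
// Hypothetical illustration of stateless vs. stateful request handling.
// A stateless handler computes each response from the request alone;
// a stateful handler remembers information between requests.
public class ServerStateSketch {

    // Stateless: no fields, so identical requests get identical responses.
    static String statelessHandle(String page) {
        return "<html>" + page + "</html>";
    }

    // Stateful: the hit counter persists between requests (session-like state).
    static class StatefulHandler {
        private int hits = 0;

        String handle(String page) {
            hits++;
            return "<html>" + page + " (visit " + hits + ")</html>";
        }
    }

    public static void main(String[] args) {
        StatefulHandler handler = new StatefulHandler();
        System.out.println(statelessHandle("home")); // same every time
        System.out.println(handler.handle("home"));  // visit 1
        System.out.println(handler.handle("home"));  // visit 2
    }
}
```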
A generic client-server architecture has two types of nodes on the network: clients and servers. As a result, these generic architectures are sometimes referred to as "two-tier" architectures. Some networks consist of three different kinds of nodes: clients, application servers which process data for the clients, and database servers which store data for the application servers. This configuration is called a three-tier architecture.
THREE-TIER ARCHITECTURE: The thin-client two-tier architecture essentially separates presentation issues from the rest of the application. The three-tier architecture goes one step further and also separates application logic from data management.
PRESENTATION TIER: In the presentation tier, users require a natural interface to make requests, provide input, and see results.
MIDDLE TIER: The application logic executes here. An enterprise-class application reflects complex business processes and is coded in a general-purpose language such as C++ or Java.
DATA MANAGEMENT TIER: Data-intensive web applications involve a DBMS.
(Figure: three-tier architecture, with clients connected through the network to the middle tier and the DBMS.)
Different technologies have been developed to enable the distribution of the three tiers of an application across multiple hardware platforms and different physical sites.
TECHNOLOGIES FOR THREE-TIER ARCHITECTURE:
(HTTP, Servlets, JSP, XML, stored procedures.)
THIN CLIENTS: Clients only need enough computational power for the presentation layer; clients are web browsers.
Integrated data access: In many applications the data must be accessed from several sources. This can be handled transparently at the middle tier, where we can centrally manage connections to all the database systems involved.
Scalability to many clients: Each client is lightweight, and all access to the system is through the middle tier. The middle tier can share database connections across clients, and if the middle tier becomes the bottleneck, we can deploy several servers executing the middle-tier code; clients can connect to any one of these servers, if the logic is designed appropriately. The figure shows how the middle tier accesses multiple data sources.
(Figure: clients connecting to replicated application-logic servers, which in turn connect to multiple DBMSs.)
Software development benefits: By dividing the application cleanly into parts that address presentation, data access and business logic, we gain many advantages. The business logic is centralized and is therefore easy to maintain, debug and change. Interaction between tiers occurs through well-defined, standardized APIs; therefore, each application tier can be built out of reusable components that can be individually developed, debugged and tested. The advantage of an n-tier architecture compared
with a two-tier architecture (or of a three-tier compared with a two-tier) is that it separates out the processing that occurs, to better balance the load on the different servers; it is more scalable. The disadvantages of n-tier architectures are: it puts more load on the network, and it is much more difficult to program and test the software than in a two-tier architecture, because more devices have to communicate to complete a user's transaction.
ADVANTAGES: All the data are stored at the servers, which gives better security control: the server can control access to resources to make sure that only permitted users can access and change data. It is more flexible than the P2P paradigm: if a server in the C/S paradigm wants to update data or other resources, it can do so centrally. There are already many mature technologies designed for the C/S paradigm, which ensure security, the user-friendliness of the interface, and ease of use. Any element of a C/S network can be easily upgraded.
DISADVANTAGES: Traffic congestion has been a problem since the first day of the C/S paradigm: when a large number of clients send requests to the same server at the same time, it can cause a lot of trouble for the server, and the more clients there are, the more trouble there is. A P2P network's bandwidth, by contrast, is made up of every node in the network, so the more nodes there are, the more bandwidth it has. The C/S paradigm also does not have as good robustness as a P2P network: when the server is down, clients' requests cannot be fulfilled, whereas in most P2P networks resources are usually located on nodes all over the network, so even if one or a few nodes depart or abandon a download, the other nodes can still finish it by getting data from the rest of the nodes in the network. The software and hardware requirements of a server are usually very strict: a regular personal computer's hardware may not be able to serve more than a certain number of clients, and a Windows XP Home Edition does not even have IIS to work as a server. Specific software and hardware are needed to do the job, which of course increases the cost.
5.2 DATA MODEL
5.2.1 INTRODUCTION: A data model is an abstract model that describes how data is represented and used. The term "data model" has two generally accepted meanings:
A data model theory, i.e., a formal description of how data may be structured and used.
A data model instance, i.e., the result of applying a data model theory to create a practical data model for some particular application.
A data model theory has three main components:
Structural part: a collection of data structures which are used to create databases representing the entities or objects modeled by the database.
Integrity part: a collection of rules governing the constraints placed on these data structures to ensure structural integrity.
Manipulation part: a collection of operators which can be applied to the data structures to update and query the data contained in the database.
Data modeling is the process of creating a data model instance by applying a data model theory. Business requirements are normally captured by a semantic logical data model. This is transformed into a physical data model instance, from which a physical database is generated.
5.2.2 DATA DICTIONARY: A data dictionary is a set of metadata that contains definitions and representations of data elements. Within the context of a DBMS, a data dictionary is a read-only set of tables and views. The data dictionary holds information such as:
Precise definitions of data elements
User names, roles and privileges
Schema objects
Integrity constraints
Stored procedures and triggers
General database structure
Space allocations
When an organization builds an enterprise-wide data dictionary, it may include both semantic and representational definitions for data elements. The semantic components focus on creating precise meaning for data elements across the enterprise. Representational definitions include how data elements are stored in a computer structure, such as an integer, string or date format. A data dictionary is sometimes simply a collection of database tables and definitions of the meanings and types of the columns they contain. Data dictionaries are more precise than glossaries (terms and definitions) because they frequently have one or more representations of how data is structured. Data dictionaries are usually kept separate from data models, since data models usually include the complex relationships between data elements.
REGISTER:
Name            Data Type
User Name       Varchar2 (30)
Password        Varchar2 (10)

ENCRYPT:
Name            Data Type
Sender          Varchar2 (30)
Receiver        Varchar2 (30)
Encrypted Data  Varchar2 (50)
Decrypted Data  Varchar2 (50)

RSA:
Name            Data Type
Sender          Varchar2 (30)
Receiver        Varchar2 (30)
Encrypt Data    Varchar2 (50)
Decrypt Data    Varchar2 (50)
5.3 APPLICATION DESIGN
5.3.1 INTRODUCTION: The user interface is an essential part of the overall software design process. A poorly designed user interface means that users will probably be unable to access some of the system's features. People have limited short-term memory, and when systems go wrong and issue warning messages and alarms, this often puts more stress on users, increasing the chance that they will make operational errors.
The principle of user interface consistency means that system commands and menus should have the same format, parameters should be passed to all commands in the same way, and command punctuation should be similar.
5.3.2 MODULES: The system can be divided into 3 modules:
1. Login
2. Send File
3. Receive File
MODULE DESCRIPTION:
Login: In this module the user is requested to enter the user name and password; if he is a valid user, he enters the home page. The user ID given is checked against the database table. The user has two options in the home page: to view a file and to send a file to another user.
Send File: This module deals with sending a file by attaching it to a message to the specified user. Before the file is attached, it is encrypted using a randomly generated key. The major limitation of this module is that it will encrypt only plain-text format files.
Receive File: In this module the user can view the files that have been sent to him by other users. When the user selects a file from the list of files, the file is decrypted using the key used while encrypting. The decrypted file can be saved as an external file in secondary storage.
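The Send File / Receive File round trip (encrypt with a randomly generated key, later decrypt with the same key) can be sketched with the JDK's built-in DES support. This is a minimal sketch of the idea only; the project's actual key storage, database access and file handling are not shown, and the class and method names are illustrative:

```java
// Minimal sketch of the Send/Receive round trip: the sender encrypts a
// plain-text message with a randomly generated DES key, and the receiver
// decrypts it with the same key. Illustration only; the project's actual
// key storage and file handling are not shown.
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class DesRoundTrip {

    // Send File step: a fresh random key is generated for the message.
    static SecretKey newKey() {
        try {
            return KeyGenerator.getInstance("DES").generateKey();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static byte[] encrypt(SecretKey key, String plainText) {
        try {
            Cipher cipher = Cipher.getInstance("DES"); // DES/ECB/PKCS5Padding
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Receive File step: the same key decrypts the stored cipher text.
    static String decrypt(SecretKey key, byte[] cipherText) {
        try {
            Cipher cipher = Cipher.getInstance("DES");
            cipher.init(Cipher.DECRYPT_MODE, key);
            return new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SecretKey key = newKey();
        byte[] sent = encrypt(key, "employee salary details");
        System.out.println(decrypt(key, sent)); // prints the original message
    }
}
```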
6. PROJECT DEVELOPMENT
6.1 SOFTWARE ALGORITHM:
LOGIN MODULE:
Steps:
1. Log into the page.
2. Enter the username and password.
3. If the username and password match,
4. enter the Security Algorithm page;
5. else go to step 2.
6. Stop.
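The login steps reduce to a compare-and-branch. A toy sketch, with a hypothetical in-memory user table standing in for the project's database lookup (the class name and sample accounts are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the login check: a hypothetical in-memory user table
// stands in for the project's database lookup. A match admits the user;
// a mismatch sends the user back to step 2 (re-enter credentials).
public class LoginSketch {

    private static final Map<String, String> USERS = new HashMap<>();
    static {
        USERS.put("alice", "secret1"); // hypothetical sample accounts
        USERS.put("bob", "secret2");
    }

    static boolean isValid(String user, String password) {
        return password != null && password.equals(USERS.get(user));
    }

    public static void main(String[] args) {
        System.out.println(isValid("alice", "secret1")); // true  -> home page
        System.out.println(isValid("alice", "wrong"));   // false -> try again
    }
}
```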
SEND MODULE:
Steps:
1. Log into the application.
2. Enter the username and password.
3. If the username and password match,
4. enter the Write page.
5. Write the data.
6. Encrypt the data.
7. Send the data.
8. Stop.
VIEW MODULE:
Steps:
1. Log into the application.
2. Enter the username and password.
3. If the username and password match,
4. enter the Read page.
5. Decrypt the data.
6. Read the data.
7. Stop.
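The RSA variant used by the project can likewise be sketched with the JDK's built-in RSA support: encrypt a short message with the public key, decrypt with the private key. A sketch of the idea only; the project's actual key exchange and storage are not shown, and the class name is illustrative:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;

// Minimal sketch of the RSA round trip: a short message is encrypted with
// the receiver's public key and decrypted with the matching private key.
// Illustration only; the project's key exchange is not shown.
public class RsaRoundTrip {

    static KeyPair newKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            return gen.generateKeyPair();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static byte[] encrypt(KeyPair pair, String plainText) {
        try {
            Cipher cipher = Cipher.getInstance("RSA"); // RSA/ECB/PKCS1Padding
            cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
            return cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static String decrypt(KeyPair pair, byte[] cipherText) {
        try {
            Cipher cipher = Cipher.getInstance("RSA");
            cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());
            return new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair pair = newKeyPair();
        byte[] sent = encrypt(pair, "sensitive message");
        System.out.println(decrypt(pair, sent)); // prints the original message
    }
}
```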
6.2 SOFTWARE FLOW DIAGRAMS:
LOGIN MODULE:
(Flow diagram: Start → valid? → Yes → Stop.)
SEND MODULE:
(Flow diagram: Start → valid? → No: Registration; Yes → DES encrypts message → Send → Stop.)
RECEIVE MODULE:
(Flow diagram: Start → Decrypts message → Reply → Stop.)
LOGIN MODULE:
(Flow diagram: Start → valid? → Yes → RSA → Stop.)
SEND MODULE:
(Flow diagram: Start → valid? → No: Registration; Yes → RSA encrypts message → Send → Stop.)
RECEIVE MODULE:
(Flow diagram: Start → valid? → Yes → RSA decrypts message → Reply → Stop.)
7. TESTING
Testing is the process of detecting errors. Testing plays a very critical role in quality assurance and in ensuring the reliability of software. The results of testing are also used later on, during maintenance.
PSYCHOLOGY OF TESTING:
The aim of testing is often taken to be to demonstrate that a program works by showing that it has no errors. However, the basic purpose of the testing phase is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should be to show that a program doesn't work. Testing is the process of executing a program with the intent of finding errors.
TESTING OBJECTIVES:
The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally: testing is a process of executing a program with the intent of finding an error; a successful test is one that uncovers an as-yet-undiscovered error; and a good test case is one that has a high probability of finding an error, if it exists. Even so, tests cannot guarantee the detection of every error that may be present; passing tests shows only that the software conforms, more or less, to quality and reliability standards.
LINK TESTING:
Link testing does not test the software itself but rather the integration of each module into the system. The primary concern is the compatibility of the modules. The programmer tests the places where modules are designed with different parameters, lengths, types, etc.
INTEGRATION TESTING:
After unit testing, we have to perform integration testing. The goal here is to see if the modules can be integrated properly, the emphasis being on testing the interfaces between modules. This testing activity can be considered as testing the design, hence the emphasis on testing module interactions. In this project, integrating all the modules forms the main system. When integrating all the modules, I checked whether the integration affects the working of any of the services by giving different combinations of inputs with which the two services ran perfectly before integration.
SYSTEM TESTING:
The philosophy behind testing is to find errors. Test cases are devised with this in mind. A strategy employed for system testing is code testing.
CODE TESTING:
This strategy examines the logic of the program. To follow this method, we developed test data that resulted in executing every instruction in the program and module, i.e., every path is tested. Systems are not designed as entire systems, nor are they tested as single systems. To ensure that the coding is perfect, both types of testing are performed on all systems.
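As a toy illustration of this code-testing strategy, the method below (hypothetical, for illustration only) has two paths, and the test data is chosen so that every instruction is executed:

```java
// Toy illustration of code (path) testing: a method with two paths, and
// test data chosen so that every instruction in the method is executed.
public class PathCoverageSketch {

    // Two paths: one for each branch of the if statement.
    static String classify(int passwordLength) {
        if (passwordLength >= 8) {
            return "strong";   // path 1
        } else {
            return "weak";     // path 2
        }
    }

    public static void main(String[] args) {
        // One test case per path exercises every instruction.
        System.out.println(classify(12)); // strong
        System.out.println(classify(4));  // weak
    }
}
```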
8. CONCLUSION
Finally, in this project we conclude that by using the RSA and DES algorithms we provide security to the software, and this security can be extended in future by using other algorithms. We also use the MAC-ID to check whether a person is authorized; only if he is authorized is he permitted to use the system.