
Venkat Alagarsamy

venkat.alagarsamy@gmail.com

https://www.linkedin.com/in/VenkatAlagarsamy
https://www.scribd.com/VenkatAlagarsamy
https://www.facebook.com/Venkatachalapathi.Alagarsamy

Last Updated: 7th March 2009

Testing is a process used to help identify the correctness, completeness and quality of developed computer software. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Testing helps in verifying and validating that the software is working as it is intended to work.

Aims/Objectives of Testing

- To uncover the maximum number of bugs or errors.
- To increase the quality of the software product.
- To ensure user-friendliness.

Testing Start Process

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. If we divide the lifecycle of software development into Requirements Analysis, Design, Programming/Construction, and Operation and Maintenance, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed.

Testing Activities in Each Phase

- Requirements Analysis: determine correctness; generate functional test data.
- Design: determine correctness and consistency; generate structural and functional test data.
- Programming/Construction: determine correctness and consistency; generate structural and functional test data; apply test data; refine test data.
- Operation and Maintenance: retest.

Testing Stop Process

Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:

- Deadlines (release deadlines, testing deadlines)
- Test cases completed with a certain percentage passed
- Coverage of code/functionality/requirements reaches a specified point
- The rate at which bugs are found drops below a threshold
- The beta or alpha testing period ends
- The risk in the project is under an acceptable limit

Risk Analysis

A risk is the potential for loss or damage to an organization from materialized threats. A threat is a possible damaging event; it may exploit a vulnerability in the security of a computer-based system. Risk analysis attempts to identify all the risks and then quantify their severity.

Test Metrics

Test metrics help analyze the current level of maturity in testing and give a projection of how to go about testing activities, by allowing us to set goals and predict future trends.

Types of Metrics
1. Process Metrics
2. Product Metrics
3. Project Metrics

Product metrics describe characteristics of the product, such as its size, complexity, features, and performance. Several common product metrics are mean time to failure, defect density, customer problems, and customer satisfaction metrics. Many organizations take measurements or metrics because they have the capability to measure, rather than determining why they need the information. Unfortunately, measurement for the sake of a number or statistic rarely makes a process better, faster, or cheaper. The first application of project metrics on most software projects occurs during estimation. Metrics collected from past projects are used as a basis from which effort and time estimates are made for current software work. As a project proceeds, measures of effort and calendar time expended are compared to original estimates. The project manager uses these data to monitor and control progress.
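As an illustration of how one of these product metrics is computed, here is a minimal sketch of defect density, usually expressed as defects per thousand lines of code (KLOC); the function name and the figures are hypothetical:

```python
# Defect density: a common product metric, expressed here as
# defects found per thousand lines of code (KLOC).

def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per KLOC (thousand lines of code)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000)

# Example: 45 defects found in a 30,000-line module.
print(defect_density(45, 30_000))  # 1.5 defects per KLOC
```

Tracking this number across releases is one way such a metric feeds the goal-setting and trend prediction described above.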

Static Testing

It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used. From the black box testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation.
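As a minimal sketch of the syntax-checking side of static testing, the snippet below parses Python source text without ever executing it; the helper name is hypothetical:

```python
# Static syntax check: parse source text without running it.
import ast

def syntax_errors(source: str) -> list[str]:
    """Return a list of syntax problems found by parsing (empty if none)."""
    try:
        ast.parse(source)
        return []
    except SyntaxError as exc:
        return [f"line {exc.lineno}: {exc.msg}"]

print(syntax_errors("x = 1\ny = 2"))          # []
print(syntax_errors("def broken(:\n    pass"))  # one error reported
```

Because nothing is executed, this catches only form-level problems; logic errors need the dynamic testing described later.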

Code Review

It is a systematic examination (often as peer review) of computer source code, intended to find and fix mistakes overlooked in the initial development phase. The purpose of code review is to improve both the overall quality of the software and the developers' skills.

Inspection

Inspection, in software engineering, refers to peer review of any work product by trained individuals who look for defects using a well-defined process. In an inspection, a work product is selected for review and a team is gathered for an inspection meeting to review it. A moderator is chosen to moderate the meeting. Each inspector prepares for the meeting by reading the work product and noting each defect. The primary goal of the inspection is to identify defects, and for all of the inspectors to reach consensus on the work product and approve it for use in the project.

Walkthrough

A walkthrough (or walk-through) is a form of software peer review in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about possible errors, violations of development standards, and other problems.

Dynamic testing

Dynamic testing involves working with the software to validate it by giving input values and checking whether the output is as expected. These are the validation activities. Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. It is the examination of the system's response to variables that are not constant and change with time. The software must actually be compiled and run; this is in contrast to static testing.

Dynamic testing methodologies:

- Unit Testing
- Integration Testing
- System Testing
- Acceptance Testing

Black box testing

Black box testing treats the software as a black box, without any knowledge of its internal implementation. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output. This method of test design is applicable to all levels of software testing: unit, integration, system, and acceptance.
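Selecting valid and invalid inputs with their expected outputs can be sketched as follows; the function under test, `validate_age`, is a hypothetical stand-in, and the cases are chosen purely from its specification, not its implementation:

```python
# Black-box test design: (input, expected output) pairs chosen by the
# test designer with no reference to how the code works internally.

def validate_age(age):
    """Hypothetical function under test: ages 0..120 are valid."""
    return isinstance(age, int) and 0 <= age <= 120

cases = [
    (30, True),     # typical valid value
    (0, True),      # lower boundary
    (120, True),    # upper boundary
    (-1, False),    # just below the valid range
    (121, False),   # just above the valid range
    ("30", False),  # wrong type
]

for given, expected in cases:
    assert validate_age(given) == expected, (given, expected)
print("all black-box cases passed")
```

Note that the boundary values (0, 120) and their neighbors are tested explicitly; boundaries are where black box tests most often find errors.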

Black Box Testing Strategy

In order to implement a black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action. Functional testing covers how well the system executes the functions it is supposed to execute, including user commands, data manipulation, searches and business processes, user screens, and integrations. These testing types are divided into two groups: testing in which the user plays the role of tester, and testing in which the user is not required.

Testing Method where User is NOT Required

Functional Testing
In this type of testing, the software is tested against its functional requirements. The tests are written to check whether the application behaves as expected.

Stress Testing
The application is tested against heavy load, such as complex numerical values, a large number of inputs, or a large number of queries, to check how much stress/load the application can withstand.

Load Testing
The application is tested against heavy loads or inputs, such as testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades.
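A toy load-test sketch, under the assumption that we can drive the operation directly: call it with growing input sizes and record how long each takes, to see where performance degrades. `handle_request` is a hypothetical stand-in for the real operation:

```python
# Minimal load-test sketch: measure elapsed time at increasing loads.
import time

def handle_request(n: int) -> int:
    """Hypothetical stand-in workload for the operation under load."""
    return sum(i * i for i in range(n))

for load in (1_000, 10_000, 100_000):
    start = time.perf_counter()
    handle_request(load)
    elapsed = time.perf_counter() - start
    print(f"load={load:>7}: {elapsed:.6f}s")
```

Real load tests use dedicated tools that generate concurrent traffic, but the principle is the same: increase load until the response time curve breaks.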

Ad-hoc Testing
This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other testing efforts, and it also helps testers learn the application prior to starting any other testing.

Exploratory Testing
This testing is similar to ad-hoc testing and is done in order to learn/explore the application.

Usability Testing
This testing is also called testing for user-friendliness. It is done when the user interface of the application is an important consideration and needs to suit a specific type of user.

Smoke Testing
This type of testing is also called sanity testing and is done in order to check whether the application is ready for further major testing and is working properly, at least to the minimum expected level, without failing.

Recovery Testing
Recovery testing is done in order to check how quickly and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.

Volume Testing
Volume testing checks the efficiency of the application. A huge amount of data is processed through the application under test in order to check its extreme limits.
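A smoke test of the kind described above can be sketched as a handful of fast, shallow checks run before deeper testing; the application object and its checks here are hypothetical stand-ins:

```python
# Smoke-test sketch: a few quick checks that gate further testing.

class App:
    """Hypothetical application under test."""
    def start(self):
        self.running = True
        return True
    def home_page(self):
        return "<html>welcome</html>"

def smoke_test(app) -> bool:
    checks = [
        app.start(),                   # does it start at all?
        "welcome" in app.home_page(),  # does the main page render?
    ]
    return all(checks)

print(smoke_test(App()))  # True means ready for major testing
```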

Testing where User plays a Role - User Required

User Acceptance Testing
In this type of testing, the software is handed over to the user in order to find out whether it meets the user's expectations and works as expected.

Alpha Testing
In this type of testing, the users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.

Beta Testing
In this type of testing, the software is distributed as a beta version to the users, who test the application at their own sites. As the users explore the software, any exception or defect that occurs is reported to the developers.

Advantages of Black Box Testing


- More effective on larger units of code.
- The tester needs no knowledge of implementation, including specific programming languages.
- Tester and programmer are independent of each other.
- Tests are done from a user's point of view.
- Helps expose any ambiguities or inconsistencies in the specifications.
- Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing

- Only a small number of possible inputs can actually be tested; testing every possible input stream would take nearly forever.
- Without clear and concise specifications, test cases are hard to design.
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
- May leave many program paths untested.
- Cannot be directed toward specific segments of code which may be very complex (and therefore more error-prone).

Characteristics of Black Box Testing

People: Who does the testing? Some people know how software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called black box testing, while developer testing is called white box testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested? The two most commonly used coverage criteria are requirements-based and code-based; both are supported by extensive literature and commercial tools. Requirements-based testing could be called black box because it makes sure that all the customer requirements have been verified. Code-based testing is often called white box because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing? Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as white box.

Activities: How do you test? A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on the external functional definition, it is often called black box, while structural testing, based on the code internals, is called white box. Indeed, this is probably the most commonly cited definition of black box and white box testing.

Evaluation: How do you know if you've found a bug? There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. They contrast with black box techniques that simply look at the official outputs of a program.

White Box Testing

White box testing is a security testing method that can be used to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities. In contrast to black box testing, white box testing is performed with access to the internal data structures and algorithms. The purpose of any security testing method is to ensure the robustness of a system in the face of malicious attacks or regular software failures. White box testing is performed based on knowledge of how the system is implemented. It includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test both intended and unintended software behavior.

White box testing requires access to the source code. Though white box testing can be performed any time in the life cycle after the code is developed, it is a good practice to perform white box testing during the unit testing phase. White box testing requires knowing what makes software secure or insecure. The first step in white box testing is to comprehend and analyze source code, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. The three requirements do not work in isolation, but together.

Types of White Box Testing

- Code coverage: creating tests to satisfy some criterion of code coverage. For example, the test designer can create tests to cause all statements in the program to be executed at least once.

- Mutation testing: a kind of testing in which small, deliberate modifications (mutants) are introduced into the code and the test suite is run against them; mutants that survive undetected point to gaps in the tests and to coding strategies that need stronger coverage.
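Statement coverage can be sketched with a hand-rolled line tracer (real projects would use a coverage tool instead); the function under test and the line counts are illustrative:

```python
# Statement-coverage sketch: record which lines of a function actually
# executed during a test, using a trace hook.
import sys

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # record line numbers executed inside classify() only
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                 # exercises only one branch
after_one = len(executed)
classify(-5)                # now the other branch runs too
sys.settrace(None)

print(f"lines hit by first test: {after_one}")
print(f"lines hit after both tests: {len(executed)}")
```

The first call leaves one return statement unexecuted; adding the second test raises the count, which is exactly the criterion ("all statements executed at least once") described above.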

- Fault injection: a technique for improving the coverage of a test by introducing faults in order to exercise code paths, in particular error-handling code paths, that might otherwise rarely be followed.
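Fault injection can be sketched by temporarily swapping a dependency for one that fails, forcing the error-handling path to run; `fetch_config`, `load_settings`, and the fallback values are hypothetical stand-ins:

```python
# Fault-injection sketch: substitute a failing dependency to force
# the rarely-followed error-handling path to execute.

def fetch_config():
    """Hypothetical dependency that normally succeeds."""
    return {"retries": 3}

def load_settings(fetcher):
    try:
        return fetcher()
    except IOError:
        return {"retries": 1}   # documented fallback path

def failing_fetch():
    raise IOError("injected fault: config store unreachable")

print(load_settings(fetch_config))   # normal path
print(load_settings(failing_fetch))  # fallback path, reached only via injection
```

Without the injected fault, the `except` branch might never run in testing, which is precisely the gap this technique closes.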

Results to Expect

Any security testing method aims to ensure that the software under test meets the security goals of the system and is robust and resistant to malicious attacks. Security testing involves taking two diverse approaches: one, testing security mechanisms to ensure that their functionality is properly implemented; and two, performing risk-based security testing motivated by understanding and simulating the attacker's approach. Some examples of errors uncovered include:

- data inputs compromising security
- sensitive data being exposed to unauthorized users
- improper control flows compromising security
- incorrect implementations of security functionality

Benefits of White Box Testing

Analyzing source code and developing tests based on the implementation details enables testers to find programming errors quickly. Validating design decisions and assumptions quickly through white box testing increases effectiveness. The design specification may outline a secure design, but the implementation may not exactly capture the design intent. Finding unintended features can be quicker during white box testing. Security testing is not just about finding vulnerabilities in the intended functionality of the software but also about examining unintended functionality introduced during implementation. Having access to the source code improves understanding and uncovering the additional unintended behavior of the software.

Unit Testing

In computer programming, unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class.

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits. Unit tests find problems early in the development cycle. Unit testing is typically done by developers, not by software testers or end users.
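A minimal unit test with Python's standard `unittest` module might look like this; the function under test, `word_count`, is a hypothetical example:

```python
# Unit-test sketch: one small unit (a function) tested in isolation.
import unittest

def word_count(text: str) -> int:
    """Hypothetical unit under test."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit tests find problems early"), 5)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    # exit=False keeps the run embeddable in larger scripts
    unittest.main(argv=["unit-test-demo"], exit=False, verbosity=0)
```

Each test method exercises one behavior of the unit, which is the "strict, written contract" described above.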

Limitations of Unit Testing

Testing, in general, cannot be expected to catch every error in the program. The same is true for unit testing. By definition, it only tests the functionality of the units themselves. Therefore, it may not catch integration errors, performance problems, or other system-wide issues. To obtain the intended benefits from unit testing, a rigorous sense of discipline is needed throughout the software development process. It is essential to keep careful records, not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and addressed immediately. If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.

Integration Testing

Integration testing is the phase of software testing in which individual software modules are combined and tested as a group.

Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing. The purpose of integration testing is to verify functional, performance and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested and individual subsystems are exercised through their input interface. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations, and this is done after testing individual modules, i.e. unit testing.
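As a sketch of the idea, two already unit-tested pieces (a parser and a formatter, both hypothetical) are combined and exercised through their shared interface:

```python
# Integration-test sketch: units are combined into an aggregate and
# tested through their interfaces, as an integration test plan prescribes.

def parse_record(line: str) -> dict:
    """Unit 1: parse 'name, score' into a record."""
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

def format_report(record: dict) -> str:
    """Unit 2: render a record for output."""
    return f"{record['name']}: {record['score']} points"

def pipeline(line: str) -> str:
    # the aggregate under test: parser output feeds the formatter
    return format_report(parse_record(line))

assert pipeline("Ada, 95") == "Ada: 95 points"
print("integration test passed")
```

Even when each unit passes its own tests, the integration test can still fail, for example if the parser's output keys do not match what the formatter expects; that interface mismatch is exactly what this level of testing targets.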

System Testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. As a rule, system testing takes as its input all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). Whereas integration testing aims to detect inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware, system testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

Types of System Testing


Error Handling Testing - Error handling refers to the anticipation, detection, and resolution of programming, application, and communications errors. Specialized programs, called error handlers, are available for some applications. The best programs of this type forestall errors if possible, recover from them when they occur without terminating the application, or (if all else fails) gracefully terminate an affected application and save the error information to a log file.
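An error handler of the kind described, one that anticipates a failure, recovers without terminating, and saves the error information to a log, can be sketched as follows; the function and its fallback value are hypothetical:

```python
# Error-handler sketch: recover from an anticipated error without
# terminating, and log the details for diagnosis.
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def safe_divide(a: float, b: float) -> float:
    try:
        return a / b
    except ZeroDivisionError:
        log.error("division by zero: a=%s b=%s", a, b)  # save to log
        return float("inf")  # recover with a documented sentinel value

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # recovers and logs instead of crashing
```

Error handling testing then consists of deliberately triggering each anticipated failure and checking that the recovery and logging behavior match the specification.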

Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application for a certain data volume. This volume can in generic terms be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will explode your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (could be any file such as .dat, .xml); this interaction could be reading and/or writing on to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file to check performance.
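The interface-file example above can be sketched as follows: generate a sample file of the chosen size, then exercise the application's read path against it. The file path, row count, and reader function are all hypothetical:

```python
# Volume-test sketch: create a large sample interface file, then test
# the application's behavior when processing it.
import csv
import os
import tempfile

def write_sample_file(path: str, rows: int) -> None:
    """Generate an interface file of the desired volume."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(rows):
            writer.writerow([i, f"record-{i}"])

def count_records(path: str) -> int:
    """Stand-in for the application's file-reading functionality."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f))

path = os.path.join(tempfile.gettempdir(), "volume_sample.csv")
write_sample_file(path, 100_000)   # scale this up for a real volume test
print(count_records(path))         # 100000
os.remove(path)
```

A real volume test would push the row count toward the stated requirement limit and watch for failures or performance degradation, per the description above.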

Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Performance testing can refer to the assessment of the performance of a human examinee. For example, a behind-the-wheel driving test is a performance test of whether a person is able to perform the functions of a competent driver of an automobile.
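A quantitative measurement of the kind described, average response time of one operation, can be sketched with the standard `timeit` module; the operation itself is a hypothetical stand-in:

```python
# Performance-test sketch: measure average response time of an operation.
import timeit

def operation() -> list:
    """Hypothetical operation whose speed we want to measure."""
    return sorted(range(1000, 0, -1))

# average seconds per call over 1,000 calls
per_call = timeit.timeit(operation, number=1000) / 1000
print(f"average response time: {per_call * 1e6:.1f} microseconds")
```

Repeating the measurement under controlled conditions and comparing against a stated requirement (e.g. "under 1 ms per call") turns this from a benchmark into a performance test.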

User Acceptance Testing

User Acceptance Testing (UAT) is a process to obtain confirmation from a Subject Matter Expert (SME), preferably the owner or client of the object under test, through trial or review, that the modification or addition meets mutually agreed-upon requirements. In software development, UAT is one of the final stages of a project and often occurs before a client or customer accepts the new system. Users of the system perform these tests, which developers derive from the client's contract or the user requirements specification. These tests are not usually focused on identifying simple problems such as spelling errors and cosmetic issues, nor on show-stopper bugs such as software crashes; testers and developers previously identify and fix these issues during the earlier unit testing, integration testing, and system testing phases.

Regression Testing

Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired, stops working or no longer works in the same way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes. Regression testing can be used not only for testing the correctness of a program, but it is also often used to track the quality of its output. For instance in the design of a compiler, regression testing should track the code size, simulation time and compilation time of the test suite cases.
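A regression test can be sketched as pinning previously correct outputs so that a later change which alters them is caught; the function under test, `slugify`, and its recorded outputs are hypothetical:

```python
# Regression-test sketch: compare current behavior against outputs
# recorded when the behavior was known to be correct.

def slugify(title: str) -> str:
    """Hypothetical function whose behavior we want to protect."""
    return title.strip().lower().replace(" ", "-")

# expected outputs recorded from a known-good version
GOLDEN = {
    "Hello World": "hello-world",
    "  Regression Testing  ": "regression-testing",
}

for given, expected in GOLDEN.items():
    assert slugify(given) == expected, f"regression for input {given!r}"
print("no regressions detected")
```

If a later "improvement" to `slugify` changes any recorded output, the assertion fails, which is precisely the regression bug the test exists to uncover.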

Grey Box Testing


Gray box testing is a software testing technique that uses a combination of black box testing and white box testing. Gray box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the gray box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.

Test Cases

In software engineering, the most common definition of a test case is a set of conditions or variables under which a tester will determine whether a requirement or use case of an application is partially or fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements. What characterizes a formal, written test case is that there is a known input and an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
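A formal test case as just described — a known input, a precondition, and an expected output worked out before execution — can be sketched as a data record plus a check; the function under test and the case contents are hypothetical:

```python
# Formal-test-case sketch: known input and expected output are fixed
# before the test runs, then compared against actual behavior.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

test_case = {
    "id": "TC-001",
    "precondition": "price is a positive amount",
    "input": {"price": 200.0, "percent": 15},
    "expected_output": 170.0,   # worked out before execution
}

actual = apply_discount(**test_case["input"])
assert actual == test_case["expected_output"]
print(f"{test_case['id']} passed: {actual}")
```

Writing the expected output down first keeps the tester from rationalizing whatever the program happens to produce.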
