By
M.Saravanan
(Software Testing Engineer)
TESTING CONCEPTS
FOREWORD
Beginners: For those of you who wish to mould your theoretical software
engineering knowledge into a practical approach to working in the real world,
and for those who wish to take up Software Testing as a profession.
Already a tester? You can refresh all your testing basics and techniques and gear
up for certification in Software Testing.
Beginners Guide To Software Testing is our sincere effort to educate and create
awareness among people of the growing importance of software quality. With the
advent of globalization and the increase in market demand for software with good
quality, we see the need for all software engineers to know more about Software
Testing.
We believe that this article helps serve our motto – “Adding Values to Lives Around
Us”.
As a software test professional, you must know a brief history of software
engineering, because software testing comes into the picture in every phase of
software engineering.
The software industry has evolved through four eras: the 1950s-60s, the mid 1960s
to late 1970s, the mid 1970s to mid 1980s, and the mid 1980s to the present. Each
era has its own distinctive characteristics, but over the years software has
increased in size and complexity. Several problems are common to almost all of the
eras and are discussed below.
The Software Crisis dates back to the 1960s, when the primary reason for
this situation was less than acceptable software engineering practice. In the early
stages of software there was a lot of interest in computers and a lot of code
written, but no established standards. Then in the early 1970s a lot of computer
programs started failing and people lost confidence, and thus an industry crisis was
declared. Various reasons leading to the crisis included:
- Hardware advances outpacing the ability to build software for this hardware.
- The inability to build software in pace with the demands.
- Increasing dependency on software.
- Struggle to build reliable and high quality software.
- Poor design and inadequate resources.
This crisis, though identified in the early years, exists to date, and we have
examples of software failures around the world. Software is basically considered a
failure if the project is terminated because of cost or schedule overruns, if the
project has experienced overruns in excess of 50% of the original estimate, or if the
software results in client lawsuits. Some examples include failures of air traffic
control systems, medical software, and telecommunication software. The primary
reason for these failures, other than those mentioned above, is the bad software
engineering practices adopted.
To avoid these failures, and thus improve the record, what is needed is a
better understanding of the process and better estimation techniques for cost, time
and quality measures. But the question is: what is a process? A process transforms
inputs to outputs, i.e. a product. A software process is a set of activities, methods
and practices involving transformation that people use to develop and maintain
software.
The process that has been defined and adopted needs to be managed well
and thus process management comes into play. Process management is concerned
with the knowledge and management of the software process, its technical aspects
and also ensures that the processes are being followed as expected and
improvements are shown.
From this we conclude that a set of defined processes can possibly save us
from software project failures. But it is nonetheless important to note that the
process alone cannot help us avoid all the problems, because with varying
circumstances the needs vary, and the process has to be adaptive to these varying
needs. Importance needs to be given to the human aspect of software development,
since that alone can have a lot of impact on the results, and effective cost and time
estimations may go totally to waste if the human resources are not planned and
managed effectively. Secondly, the reasons mentioned related to the software
engineering principles may be resolved when the needs are correctly identified.
Correct identification would then make it easier to identify the best practices that can
be applied, because a process that might be suitable for one organization may not
be the most suitable for another.
There may be many definitions of software testing, and many appeal to us
from time to time, but it is best to start by defining testing and then move on
depending on the requirements or needs.
Content:
1. TESTING
2. TESTING TYPES
   2.1 Static Testing
   2.2 Dynamic Testing
3. TESTING TECHNIQUES
   3.1 White Box Testing
        White Box Techniques
        3.1.1 Method Coverage
        3.1.2 Statement Coverage
        3.1.3 Branch Coverage
        3.1.4 Condition Coverage
        Types of White Box Testing
        Advantages
        Disadvantages
   3.2 Black Box Testing
        Black Box Techniques
        3.2.1 Boundary Value Analysis
        3.2.2 Equivalence Partitioning
        3.2.3 Decision Table Testing
   3.3 Gray Box Testing
4. TESTING LEVELS
   a. Unit Testing
   b. Integration Testing
        Types of Integration Testing
        1. Top-down Integration Testing
        2. Bottom-up Integration Testing
   c. System Testing
   d. User Acceptance Testing
5. TESTING METHODOLOGY
6. STLC
7. TESTING DOCUMENTS
   Test Plan
   Software Requirement Specification
   Test Case
8. BUG TRACKING
   Severity
   Priority
Web Testing
Configuration Management
Test Deliverables
GUI Testing
   Application
   Text Boxes
   Check Boxes
   Command Buttons
   Combo Boxes
   List Boxes
   Aesthetic Conditions
   Validation Conditions
   Navigation Conditions
   Usability Conditions
   General Conditions
WEB TESTING
   Categories of risks
   How to get all your bugs resolved without any 'Invalid bug' label
   Website cookie testing, test cases for testing web application cookies
   Drawbacks of cookies
TESTING:
Testing is the process of executing a program with the intent of finding errors.
TESTING TYPES
Dynamic Testing
Testing the functionality by executing the software is called dynamic testing.
For example, spreadsheet programs are, by their very nature, tested to a large
extent "on the fly" during the build process, as the result of each calculation or text
manipulation is shown interactively immediately after each formula is entered.
TESTING TECHNIQUES:
White box testing (also called clear box testing, glass box testing, transparent
box testing, translucent box testing or structural testing)
The process of checking the program code or source code of the
application is called white box testing.
1. Method Coverage:
Method coverage is a measure of the percentage of methods that have been
executed by test cases. Undoubtedly, your tests should call 100% of your methods.
It seems irresponsible to deliver methods in your product when your testing never
used these methods. As a result, you need to ensure you have 100% method
coverage.
2. Statement Coverage:
Statement coverage is a measure of the percentage of statements that have
been executed by test cases. Your objective should be to achieve 100% statement
coverage through your testing. Identifying your cyclomatic number and executing
this minimum set of test cases will make this statement coverage achievable.
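To make this concrete, here is a minimal sketch (the function and test below are hypothetical, invented for illustration, not taken from this text): a single test case executes every statement of a straight-line function, which is exactly 100% statement coverage.

    # Hypothetical function: three statements, no branches.
    def apply_discount(price, rate):
        discount = price * rate      # statement 1
        total = price - discount     # statement 2
        return round(total, 2)       # statement 3

    # One test case executes all three statements: 100% statement coverage.
    assert apply_discount(100.0, 0.1) == 90.0

In practice a coverage tool (for example coverage.py, run as "coverage run -m pytest" followed by "coverage report") measures this percentage for you.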
3. Branch Coverage:
Branch coverage is a measure of the percentage of decision outcomes (the true
and false branches of each decision point) that have been exercised by test cases.
4. Condition Coverage:
Condition coverage measures whether each Boolean sub-condition within a
decision has been evaluated to both true and false by the test cases.
1. Code coverage:
Creating tests to satisfy some criteria of code coverage. For example, the test
designer can create tests to cause all statements in the program to be executed at
least once.
Code coverage techniques were amongst the first techniques invented for
systematic software testing. The first published reference was by Miller and Maloney
in Communications of the ACM in 1963.
Static testing is a form of software testing where the software isn't actually
executed. This is in contrast to dynamic testing. It is generally not detailed testing,
but checks mainly for the sanity of the code, algorithm, or document. It is primarily
syntax checking of the code, or manual reading of the code or document, to find
errors. This type of testing can be used by the developer who wrote the code, in
isolation. Code reviews, inspections and walkthroughs are also used.
From the black box testing point of view, static testing involves a review of
requirements or specifications. This is done with an eye toward completeness or
appropriateness for the task at hand. This is the verification portion of Verification
and Validation.
Even static testing can be automated. A static testing test suite consists of
programs to be analyzed by an interpreter or a compiler that asserts the program's
syntactic validity.
Bugs discovered at this stage of development are less expensive to fix than
later in the development cycle.
Advantages:
● As knowledge of the internal coding structure is a prerequisite, it becomes very
easy to find out which type of input/data can help in testing the application
effectively.
● Another advantage of white box testing is that it helps in optimizing the code.
● It helps in removing the extra lines of code, which can bring in hidden defects.
● It forces the test developer to reason carefully about the implementation.
Disadvantages:
c. Statement coverage
In this type of testing the code is executed in such a manner that every
statement of the application is executed at least once. It helps in assuring that all
the statements execute without any side effect.
d. Branch coverage
No software application can be written in a continuous mode of coding; at
some point we need to branch out the code in order to perform a particular
functionality. Branch coverage testing helps in validating all the branches in the
code and making sure that no branching leads to abnormal behavior of the
application. A small sketch follows.
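As a hedged illustration (the function and tests are invented for this sketch), statement coverage alone can miss a branch: the first test below executes every statement, yet only the second test exercises the false outcome of the decision.

    # Hypothetical function with one decision point (two branches).
    def classify(amount):
        label = "normal"
        if amount > 1000:            # decision: true branch and false branch
            label = "large"
        return label

    assert classify(2000) == "large"   # true branch; also 100% statement coverage
    assert classify(10) == "normal"    # false branch; needed for full branch coverage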
e. Security Testing
Security testing is carried out to find out how well the system can protect
itself from unauthorized access, hacking, cracking, any code damage, etc. that deals
with the code of the application. This type of testing needs sophisticated testing
techniques.
f. Mutation Testing
A kind of testing in which the application code is deliberately modified
(mutated) and the existing tests are re-run to check whether they detect the
change. It also helps in finding out which code and which strategy of coding can
help in developing the functionality effectively.
Gray box testing
In gray box testing, the tester applies a limited number of test cases to the
internal workings of the software under test. For the remaining part of the gray box
testing, one takes a black box approach, applying inputs to the software under test
and observing the outputs.
Gray box testing is a powerful idea. The concept is simple: if one knows
something about how the product works on the inside, one can test it better, even
from the outside. Gray box testing is not to be confused with white box testing, i.e. a
testing approach that attempts to cover the internals of the product in detail. Gray
box testing is a test strategy based partly on internals. The testing approach is
known as gray box testing when one has some knowledge, but not full knowledge,
of the internals of the product one is testing.
In gray box testing, just as in black box testing, you test from the outside of
a product, but you make better-informed testing choices because you know how the
underlying software components operate and interact.
TESTING LEVELS
There are four testing levels:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
Unit Testing:
The testing done on a unit, the smallest piece of software, to verify that it
satisfies its functional specification or its intended design structure; a short
example follows.
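A minimal unit test sketch in Python's standard unittest framework (the unit under test, add, is hypothetical and chosen only for illustration):

    import unittest

    # Hypothetical unit under test: the smallest piece of software we verify.
    def add(a, b):
        return a + b

    class AddTest(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-1, -1), -2)

    if __name__ == "__main__":
        unittest.main()

Each test verifies the unit against its functional specification in isolation from the rest of the system.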
Integration Testing:
Testing which takes place as sub-elements are combined (i.e., integrated) to
form higher-level elements.
2. Beta testing
Beta testing is also part of user acceptance testing. Before releasing the
application, testers and customers together check the application at the customer
site.
Gamma Testing:
Gamma testing is testing of software that has all the required features,
but did not go through all the in-house quality checks.
Some of the other kinds of testing are mentioned below:
Smoke testing
Smoke testing is a shallow and wide set of test cases that we execute
whenever we get a new build. Smoke testing verifies whether the build is testable
or not, based on the smoke test cases; if it fails, we can reject the build. It covers
the main functionality of the application broadly rather than in depth; a sketch
follows.
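A minimal sketch of the idea, assuming a hypothetical application (the checks here are placeholders; a real smoke suite would exercise the application's true entry points):

    # Hypothetical smoke suite: shallow checks run on every new build.
    def build_is_testable():
        checks = [
            ("application starts", lambda: True),   # e.g. the process launches
            ("login page loads",   lambda: True),   # e.g. HTTP 200 from /login
            ("database reachable", lambda: True),   # e.g. a trivial query succeeds
        ]
        for name, check in checks:
            if not check():
                print("SMOKE FAILED:", name, "- reject the build")
                return False
        print("Smoke passed - build accepted for detailed testing")
        return True

    build_is_testable()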
Sanity testing
Sanity testing tests the critical and major functionality of the application. It is
a quick, one-time check of the application; for example, cursor navigation.
Performance testing
Putting the application under heavy load and checking the multi-tasking
performance of the application.
There are two types of performance testing:
Load testing
Stress testing
a. Load testing
Putting the application under heavy load and checking its behavior within the
specified limit of the application; for load testing, success within the limit is the
criterion.
b. Stress testing
Putting the application under heavy load and checking its behavior beyond the
specified limit of the application; for stress testing, finding the point of failure is the
criterion. A small sketch of both follows.
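A small sketch of both ideas, using a hypothetical operation in place of the real application (the concurrency numbers are arbitrary):

    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical operation standing in for the application under test.
    def handle_request():
        time.sleep(0.01)             # simulate a unit of work
        return "ok"

    def run_load(concurrent_users, requests_per_user):
        """Drive the operation with many concurrent callers and time it."""
        start = time.time()
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            futures = [pool.submit(handle_request)
                       for _ in range(concurrent_users * requests_per_user)]
            results = [f.result() for f in futures]
        elapsed = time.time() - start
        print(len(results), "requests with", concurrent_users,
              "users in", round(elapsed, 2), "s")

    run_load(concurrent_users=10, requests_per_user=5)    # load: within the expected limit
    run_load(concurrent_users=200, requests_per_user=5)   # stress: beyond the expected limit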
Functional Testing:
Testing whether the application is functioning as per the requirements is
called functional testing.
Exploratory testing
Checking the application without coding knowledge; testing is done by giving
random input while exploring the application.
Ad-hoc testing
It is a part of exploratory testing. It is random testing, meaning testing an
application without proper test plans. It is carried out at the end of the project,
after all test cases are executed.
Or: testing without a formal test plan, or outside of a test plan.
Regression testing
Testing the application to find whether a change in the code affects anything
elsewhere in the application; re-executing tests after a modification to verify that
the change has not affected the old build.
Retesting:
We check the particular bug and its dependencies after it is said to be fixed.
Or: testing that runs test cases that failed the last time they were run, in order to
verify the success of corrective actions.
Usability testing - A user-friendliness check. The application flow is tested: can a
new user understand the application easily, and is proper help documented
wherever a user may get stuck? Basically, system navigation is checked in this
testing.
Recovery testing - Testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.
TESTING METHODOLOGY
Testing methodology means the kind of model to be selected that suits
our application, such as the V model (verification and validation model), the
waterfall model, etc.
5.1 V – model
The V-model is a software development model which can be considered an
extension of the waterfall model. Instead of moving down in a linear way, the
process steps are bent upwards after the coding phase, to form the typical V shape.
The V-model demonstrates the relationships between each phase of the development
life cycle and its associated phase of testing.
[V-model diagram: verification phases (requirement, design, coding) on the left arm
paired with validation phases (unit testing, integration testing, system testing) on
the right arm.]
5.2 Spiral model
The spiral model, also known as the spiral lifecycle model, is a systems
development method (SDM) used in information technology (IT). This model of
development combines the features of the prototyping model and the waterfall
model. The spiral model is intended for large, expensive, and complicated projects.
1. The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external or
internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first
prototype in terms of its strengths, weaknesses, and risks; (2) defining the
requirements of the second prototype; (3) planning and designing the second
prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed
too great. Risk factors might involve development cost overruns, operating-cost
miscalculation, or any other factor that could, in the customer's judgment, result in a
less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous
prototype, and, if necessary, another prototype is developed from it according to the
fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined
prototype represents the final product desired.
Applications
For a typical shrink-wrap application, the spiral model might mean that you
have a rough-cut of user elements (without the polished/pretty graphics) as an
operable application, add features in phases, and, at some point, add the final
graphics. The spiral model is used most often in large projects. For smaller projects,
the concept of agile software development is becoming a viable alternative. The US
military has adopted the spiral model for its Future Combat Systems program.
Advantages
1. Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses,
because important issues are discovered earlier.
2. It is more able to cope with the (nearly inevitable) changes that software
development generally entails.
3. Software engineers (who can get restless with protracted design processes) can
get their hands in and start working on a project earlier.
Disadvantages
Iterative model
An iterative lifecycle model does not attempt to start with a full specification
of requirements. Instead, development begins by specifying and implementing just
part of the software, which can then be reviewed in order to identify further
requirements. This process is then repeated, producing a new version of the software
for each cycle of the model. Consider an iterative lifecycle model which consists of
repeating the following four phases in sequence:
As the software evolves through successive cycles, tests have to
be repeated and extended to verify each version of the software.
STLC:
The Software Test Life Cycle (STLC) includes stages such as:
Requirement analysis
Test planning
Defect tracking
TESTING DOCUMENTS
TEST PLAN
A test plan is a document describing the set of testing activities for the application:
when to test, who will test, what to test and how to test. A test plan involves the
following details.
Objective
The aim of the project; an introduction and overview of the project.
Scope
1. Features to be tested
2. Features not to be tested
Approach
1. Write high level scenarios.
2. Write a flow graph.
Testing functionality
The functionality of the application that we will test.
Assumptions
Risk analysis
Analyzing the risks.
Backup plan
Effort estimation
Estimating the effort, e.g. time and cost.
Roles and responsibilities
The roles and responsibilities of the testers.
Entry and exit criteria
The criteria for when testing starts and stops.
Templates
Test automation
Which automation tools will be used.
Environment
Software and hardware requirements.
Defect tracking
Deliverables
Approvals
BUG TRACKING:
Many bug-tracking systems, such as those used by most open source software
projects, allow users to enter bug reports directly. Other systems are used only
internally in a company or organization doing software development. Typically bug
tracking systems are integrated with other software project management
applications.
Bug reporting:
The design of a bug report typically includes the following fields:
S. no
Link
Bug id
Description
Priority
Severity
Status
A hypothetical example record follows.
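As a hedged illustration, a single report with these fields might be represented as follows (all values are invented):

    # Hypothetical bug report record using the fields listed above.
    bug_report = {
        "s_no":        1,
        "link":        "screenshots/login-error.png",   # illustrative attachment path
        "bug_id":      "BUG-1024",
        "description": "Login fails when the password contains a '#' character",
        "priority":    "High",
        "severity":    "Major",
        "status":      "NEW",
    }
    print(bug_report["bug_id"], "-", bug_report["status"])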
1. New: When a bug is posted for the first time, its state is "NEW". This means the
bug is not yet approved.
2. Open: After a tester has posted a bug, the lead of the tester approves that the
bug is genuine and changes the state to "OPEN".
3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the
corresponding developer or developer team. The state of the bug is now changed to
"ASSIGN".
4. Test: Once the developer fixes the bug, he has to assign the bug to the testing
team for next round of testing. Before he releases the software with bug fixed, he
changes the state of bug to “TEST”. It specifies that the bug has been fixed and is
released to testing team.
5. Deferred: A bug changed to the deferred state is expected to be fixed in a future
release. The reasons for changing a bug to this state are many: the priority of the
bug may be low, there may be a lack of time for the release, or the bug may not
have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug.
Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then
one bug's status is changed to "DUPLICATE".
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester
tests the bug. If the bug is not present in the software, he approves that the bug is
fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the
tester changes the status to “REOPENED”.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that
the bug no longer exists in the software, he changes the status of the bug to
“CLOSED”.
Bug status
New
Fixed
Reopen
Closed
Priority and severity
Priority will be set by the team lead or the project lead, based on the severity of
the bug and the time constraint that the module has. It contains:
High
Medium
Low
Client/server testing:
This type of testing is usually done for 2-tier applications (usually developed for a
LAN). Here we will be having a front-end and a back-end. The application launched
on the front-end will have forms and reports which will be monitoring and
manipulating data.
Web testing:
Configuration management:
Test deliverables:
The documents we deliver during testing are the test deliverables, such as:
the test plan, test cases, bug reports, RTM (requirements traceability matrix), SRS,
and the test summary report.
Automation testing:
In automation testing, user actions are simulated using a testing tool. The actions a
manual tester performs are recorded, and then played back to execute the same test
case.
The benefits of automating software testing are many:
● Reducing the elapsed time for testing, getting your product to market faster.
GUI Testing
The following is a set of guidelines to ensure effective GUI Testing and can be used
even as a checklist while testing a product / application.
Application
Start the application by double-clicking on its icon. The loading message should
show the application name, version number, and a bigger pictorial representation of
the icon. No login is necessary. The main window of the application should have the
same caption as the caption of the icon in Program Manager. Closing the application
should result in an "Are you sure?" message box. Attempt to start the application
twice; this should not be allowed - you should be returned to the main window. Try
to start the application twice as it is loading. On each window, if the application is
busy, then the hourglass should be displayed; if there is no hourglass, then some
"enquiry in progress" message should be displayed. All screens should have a Help
button, i.e. the F1 key should work the same.
If a window has a minimize button, click it. The window should be reduced to an
icon at the bottom of the screen, and this icon should correspond to the original icon
under Program Manager. Double-click the icon to return the window to its original
size. The window caption for every application should have the name of the
application and the window name - especially the error messages. These should be
checked for
spelling, English and clarity, especially at the top of the screen. Check whether the
title of the window makes sense. If the screen has a control menu, then use all
un-grayed options.
Never-updateable fields should be displayed with black text on a gray background
with a black label. All text should be left-justified, followed by a colon tight to it. In a
field that may or may not be updateable, the label text and contents change from
black to gray depending on the current status. List boxes always have a white
background with black text, whether they are disabled or not. All others are gray.
Text Boxes
Move the mouse cursor over all enterable text boxes; the cursor should change
from an arrow to an insert bar. If it doesn't, then the text in the box should be gray
or non-updateable (refer to the previous page). Enter text into the box. Try to
overflow the text by typing too many characters - this should be stopped. Check the
field width with capital Ws. Enter invalid characters - letters in amount fields; try
strange characters like +, -, * etc. in all fields. SHIFT and arrow should select
characters, and selection should also be possible with the mouse. A double click
should select all text in the box. Left and right arrows should move the selection, as
should up and down. Select with the mouse by clicking. (An automation sketch
follows.)
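A hedged sketch of automating two of these checks with Selenium WebDriver (the URL, the field id "amount", and the 30-character limit are assumptions made up for this example, not part of the checklist):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("http://localhost:8080/form")          # hypothetical page under test

    field = driver.find_element(By.ID, "amount")      # hypothetical text box

    # Overflow check: typing too many characters should be stopped.
    field.send_keys("W" * 100)
    assert len(field.get_attribute("value")) <= 30    # assumed maximum field length

    # Invalid-character check: letters and symbols in an amount field.
    field.clear()
    field.send_keys("abc+-*")
    assert field.get_attribute("value") == ""         # expected if input is filtered

    driver.quit()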
Check Boxes
Clicking with the mouse on the box, or on the text should SET/UNSET the
box. SPACE should do the same.
Command Buttons
If a command button leads to another screen, and if the user can enter or
change details on the other screen, then the text on the button should be followed
by three dots. All buttons except for OK and Cancel should have a letter access to
them, indicated by a letter underlined in the button text; pressing ALT+letter should
activate the button. Make sure there is no duplication. Click each button once with
the mouse - this should activate it. Tab to each button and press SPACE - this
should activate it. Tab to each button and press RETURN - this should activate it.
The above are VERY IMPORTANT, and should be done for EVERY command button.
Tab to another type of control (not a command button). One button on the screen
should be the default (indicated by a thick black border); pressing Return in any
non-command-button control should activate it.
If there is a Cancel Button on the screen, then pressing <Esc> should activate it. If
pressing the Command button results in uncorrectable data e.g. closing an action
step, there should be a message phrased positively with Yes/No answers where Yes
results in the completion of the action.
Drop-Down List Boxes
Pressing the arrow should give the list of options. This list may be scrollable. You
should not be able to type text in the box. Pressing a letter should bring you to the
first item in the list that starts with that letter. Pressing 'Ctrl - F4' should open/drop
down the list box. Spacing should be compatible with the existing Windows spacing
(Word etc.). Items should be in alphabetical order, with the exception of blank/none,
which is at the top or the bottom of the list box. A drop-down with an item selected
should display the list with the selected item on the top. Make sure only one space
appears; there shouldn't be a blank line at the bottom.
Combo Boxes
Should allow text to be entered. Clicking the arrow should allow the user to choose
from the list.
List Boxes
Aesthetic Conditions:
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.
Validation Conditions:
Navigation Conditions:
Usability Conditions:
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is
the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated and
should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left to
bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the micro help text box by clicking on the text
box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the
mouse?
11. Is the cursor positioned in the first input field or control when the screen is
opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error
when the user cancels it?
15. When the user Alt+Tab's to another application does this have any impact on
the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by
their length? E.g. a 30 character field should be a lot longer.
1. Is the data saved when the window is closed by double clicking on the close
box?
2. Check the maximum field lengths to ensure that there are no truncated
characters?
3. Where the database requires a value (other than null) then this should be
defaulted into fields. The user must either enter an alternative valid value or
leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the
database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C
then what happens if a blank value is retrieved from the database? (In some
situations rows can be created on the database by other functions, which are
not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets
saved fully to the database. (i.e.) Beware of truncation (of strings) and
rounding of numeric values.
1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-
only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.
General Conditions:
5. In drop down list boxes, ensure that the names are not abbreviations / cut
short
6. In drop down list boxes, assure that the list and each entry in the list can be
accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes
that have been made) and generates a caution message "Changes will be lost
- Continue yes/no"
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates, as a Close button when changes have
been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or
in a particular dialog box, are present - i.e. make sure they don't work on the
screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure
that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other
command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are
names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font
& font size.
17. Assure that each command button can be accessed via a hot key
combination.
18. Assure that command buttons in the same window/dialog box do not have
duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value
(command button, or other object) which is invoked when the Enter key is
pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button, which makes sense according to
the function of the window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not
abbreviations.
22. Assure that option button names are not technical labels, but rather are
names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys
do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically
grouped together in clearly demarcated areas "Group Box"
26. Assure that the Tab key sequence, which traverses the screens, does so in a
logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many
individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general
color and highlighting (the application should not dictate the desktop
background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within
tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If there are 8 or fewer options in a list box, display all options on open of the
list box - there should be no need to scroll.
38. Errors on continue will cause the user to be returned to the tab, and the focus
should be on the field causing the error (i.e. the tab is opened, highlighting
the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all
fields filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or
previous screen (as appropriate), generating "changes will be lost" message if
necessary.
43. Micro help text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open
Date Fields
1. Assure that leap years are validated correctly & do not cause
errors/miscalculations.
2. Assure that month code 00 and 13 are validated correctly & do not cause
errors/miscalculations.
3. Assure that 00 and 13 are reported as errors.
4. Assure that day values 00 and 32 are validated correctly & do not cause
errors/miscalculations.
5. Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/
miscalculations.
6. Assure that Feb. 30 is reported as an error.
7. Assure that century change is validated correctly & does not cause errors/
miscalculations.
8. Assure that out of cycle dates are validated correctly & do not cause
errors/miscalculations.
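These date rules translate directly into automated checks. A small sketch, assuming the application accepts dates as DD/MM/YYYY strings (the format is an assumption for illustration):

    from datetime import datetime

    def is_valid_date(text):
        try:
            datetime.strptime(text, "%d/%m/%Y")
            return True
        except ValueError:
            return False

    assert not is_valid_date("30/02/2005")   # Feb. 30 must be reported as an error
    assert is_valid_date("29/02/2004")       # leap year: Feb. 29 is valid in 2004
    assert not is_valid_date("29/02/2005")   # ...but not in a non-leap year
    assert not is_valid_date("01/13/2005")   # month 13 rejected
    assert not is_valid_date("00/01/2005")   # day 00 rejected
    assert not is_valid_date("32/01/2005")   # day 32 rejected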
Numeric Fields
Interview Questions
What is Ad Hoc Testing?
A testing phase where the tester tries to break the software by randomly trying its
functionality.
What is Compatibility Testing?
In compatibility testing we test whether the software is compatible with the other
elements of the system.
What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same
application code, module or database records. It identifies and measures the level
of locking, deadlocking and use of single-threaded code and locking semaphores.
What is Context-Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that
advocates continuous and creative evaluation of testing opportunities in light of the
potential information revealed and the value of that information to the organization
right now.
What is Data-Driven Testing?
Testing in which the action of a test case is parameterized by externally defined data
values, maintained as a file or spreadsheet. A common technique in Automated
Testing; a small sketch follows.
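A minimal sketch of the technique (the function under test and the data rows are invented; in practice the rows would live in an external CSV file or spreadsheet):

    import csv, io

    # Hypothetical unit under test.
    def add(a, b):
        return a + b

    # Externally defined data values; an in-memory file stands in for a real
    # CSV file or spreadsheet.
    TEST_DATA = io.StringIO("a,b,expected\n1,2,3\n-1,1,0\n10,5,15\n")

    for row in csv.DictReader(TEST_DATA):
        result = add(int(row["a"]), int(row["b"]))
        assert result == int(row["expected"]), row
    print("all data-driven cases passed")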
What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for
use in replacement systems.
What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged
execution.
What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions for an
element of the software under test.
What is Failover/Recovery Testing?
Confirms that the application under test recovers from expected or unexpected
events without loss of data or functionality. Events can include shortage of disk
space, unexpected loss of communication, or power-out conditions.
What is Localization Testing?
This term refers to making software specifically designed for a specific locality.
What is Mutation Testing?
Mutation testing is a method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting with
the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources. A tiny illustration follows.
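For instance (the function, the mutant, and the test below are invented for this sketch):

    # Original code.
    def is_adult(age):
        return age >= 18

    # A mutant: '>=' deliberately changed to '>' to simulate a bug.
    def is_adult_mutant(age):
        return age > 18

    # A boundary-value test case "kills" the mutant: it passes against the
    # original but would fail against the mutant, so the test data is useful.
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False   # the mutation is detected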
What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there to
ensure the system or application does not crash.
What is Positive Testing?
Testing aimed at showing the software works. Also known as "test to pass". See also
Negative Testing.
What is Negative Testing?
Testing aimed at showing the software does not work. Also known as "test to fail".
See also Positive Testing.
What is Path Testing?
Testing in which all paths in the program source code are tested at least once.
What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without
loss of data or functionality. Events can include shortage of disk space, unexpected
loss of communication, or power-out conditions.
What is Regression Testing?
Checking that changes in the code have not affected the working functionality.
What is Security Testing?
Testing which confirms that the program can restrict access to authorized personnel
and that the authorized personnel can access the functions available to their security
level.
What is Stress Testing?
Stress testing is a form of testing that is used to determine the stability of a given
system or entity. It involves testing beyond normal operational capacity, often to a
breaking point, in order to observe the results.
What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running
several times more transactions in an entire day (or night) than would be expected
in a busy day, to identify any performance problems that appear after a large
number of transactions have been executed.
What is Volume Testing?
We can perform volume testing, where the system is subjected to a large volume
of data.
Give an example of high severity and low priority:
When the application has a critical problem but it has to be solved only after a
month, then we can say it is high severity and low priority.
Give an example of low severity and high priority:
When the application has a trivial problem (less affected) but it has to be solved
within a day, then we can say it is low severity with high priority.
What is a software bug?
A software bug is an error, flaw or fault in a computer program. Other terms, e.g.
software defect and software failure, are more specific. While there are many who
believe the term 'bug' is a reference to insects that caused malfunctions in early
electromechanical computers (1950-1970), the term 'bug' had been a part of
engineering jargon for many decades before the 1950s; even the great inventor,
Thomas Edison (1847-1931), wrote about a 'bug' in one of his letters.
"Smart monkeys" are valuable for load and stress testing, and will find a significant
number of bugs, but they're also very expensive to develop. "Dumb monkeys", on
the other hand, are inexpensive to develop and are able to do some basic testing,
but they will find few bugs. However, the bugs "dumb monkeys" do find will be
hangs and crashes, i.e. the bugs you least want to have in your software product.
"Monkey testing" can be valuable, but it should not be your only testing.
What is PDR?
PDR is an acronym. In the world of software QA/testing, it stands for "peer
design review", or "peer review".
Your end client requires a PDR, because they work on a product, and want to come
up with the very best possible design and documentation. Your end client requires
you to have a PDR, because when you organize a PDR, you invite and assemble the
end client's best experts and encourage them to voice their concerns as to what
should or should not go into the design and documentation, and why. When you're a
developer, designer, author, or writer, it's also to your advantage to come up with
the best possible design and documentation. Therefore you want to embrace the idea
of the PDR, because holding a PDR gives you a significant opportunity to invite and
assemble the end client's best experts and make them work for you for one hour, for
your own benefit. To come up with the best possible design and documentation, you
want to encourage your end client's experts to speak up and voice their concerns as
to what should or should not go into your design and documentation, and why.
Remember, PDRs are not about you, but about design and documentation. Please
don't be negative; please do not assume your company is finding fault with your
work, or distrusting you in any way. There is a 90+ percent probability your
company wants you, likes you and trusts you, because you're a specialist, and
because your company hired you after a long and careful selection process.
Your company requires a PDR because PDRs are useful and constructive. Just about
everyone - even corporate chief executive officers (CEOs) - attends PDRs from time
to time. When a corporate CEO attends a PDR, he has to listen to "feedback" from
shareholders. When a CEO attends a PDR, the meeting is called the "annual
shareholders' meeting".
or other attendees of the PDR, during or near the conclusion of the PDR. By having a
checklist, and by going through a checklist, the facilitator can...
1. Verify that the attendees have inspected all the relevant documents and reports,
and
2. Verify that all suggestions and recommendations for each issue have been
recorded, and
3. Verify that all relevant facts of the meeting have been recorded. The facilitator's
checklist includes the following questions:
1. "Have we inspected all the relevant documents, code blocks, or products?"
2. "Have we completed all the required checklists?"
3. "Have I recorded all the facts relevant to this peer review?"
4. "Does anyone have any additional suggestions, recommendations, or comments?"
5. "What is the outcome of this peer review?" At the end of the peer review, the
facilitator asks the attendees of the peer review to make a decision as to the
outcome of the peer review, i.e., "What is our consensus? Are we accepting the
design (or document or code)?" Or, "Are we accepting it with minor modifications?"
Or, "Are we accepting it, after it is modified, and approved through e-mails to the
participants?" Or, "Do we want another peer review?" This is a phase during which
the attendees of the PDR work as a committee, and the committee's decision is
final.
2. Have all the attendees received all the relevant documents and reports?
3. Are all the attendees well prepared for this peer review?
4. Have all the preceding life cycle activities been concluded?
5. Are there any changes to the baseline?
attendees can be the most valuable to you and your company. But, in your own best
interest, in order to expedite things, before every peer review it is a good idea to get
together with the additional reviewer and additional attendee, and talk with them
about issues, because if you don't, they will be the ones with the largest number
of questions and usually negative feedback.
When a PDR is done right, it is useful, beneficial, pleasant, and friendly.
Generally speaking, the fewer people who show up at the PDR, the easier it tends to
be, and the earlier it can be adjourned. When you're an author, developer, or task
lead, many times you can relax, because during your peer review your facilitator
and test lead are unlikely to ask you any tough questions. Why? Because the
facilitator is too busy taking notes, and the test lead is kind of bored (because he
has already asked his toughest questions before the PDR).
When you're a facilitator, every PDR tends to be a pleasant experience. In my
experience, one of the easiest review meetings is a PDR where you're the facilitator
(whose only job is to call the shots and take notes).
you have MANY transferable skills!
• Number two, make a plan! Develop a belief that getting a job in QA is easy!
HR professionals cannot tell the difference between quality control and quality
assurance! HR professionals tend to respond to keywords (i.e. QC and QA),
without knowing the exact meaning of those keywords!
• Number three, make it a reality! Invest your time! Get some hands-on
experience! Do some QA work! Do any QA work, even if, for a few months,
you get paid a little less than usual! Your goals, beliefs, enthusiasm, and
action will make a huge difference in your life!
• Number four, I suggest you read all you can, and that includes reading
product pamphlets, manuals, books, information on the Internet, and
whatever information you can lay your hands on! If there is a will, there is a
way! You CAN do it, if you put your mind to it! You CAN learn to do QA work,
with little or no outside help!
What is CMM?
A: CMM is an acronym that stands for Capability Maturity Model. The idea of CMM is,
as to future efforts in developing and testing software, concepts and experiences do
not always point us in the right direction, therefore we should develop processes,
and then refine those processes.
There are five CMM levels, of which Level 5 is the highest...
CMM Level 1 is called "Initial".
CMM Level 2 is called "Repeatable".
CMM Level 3 is called "Defined".
CMM Level 4 is called "Managed".
CMM Level 5 is called "Optimized".
There are not many Level 5 companies; most hardly need to be. Within the United
States, fewer than 8% of software companies are rated CMM Level 4 or higher. The
U.S. government requires all companies with federal government contracts to
maintain a minimum of a CMM Level 3 assessment.
CMM assessments take two weeks. They're conducted by a nine-member team led
by an SEI-certified lead assessor.
5. Verify that the default values are saved in the database, if the user input is not
specified.
6. Verify compatibility with old data, old hardware, versions of operating systems,
and interfaces with other software.
• Difference number two: Data validity errors are more common, while data
integrity errors are less common.
• Difference number three: Errors in data validity are caused by HUMANS -
usually data entry personnel - who enter, for example, 13/25/2005, by
mistake, while errors in data integrity are caused by BUGS in computer
programs that, for example, cause the overwriting of some of the data in the
database, when one attempts to retrieve a blank value from the database.
• Difference number 9: Dynamic testing finds fewer bugs than static testing.
• Difference number 10: Static testing can be done before compilation, while
dynamic testing can take place only after compilation and linking.
• Difference number 11: Static testing can find all of the following that
dynamic testing cannot find: syntax errors, code that is hard to
maintain, code that is hard to test, code that does not conform to coding
standards, and ANSI violations.
Static testing takes roughly as long as compilation and checks every statement you
have written.
In other words, top down design starts the design process with the main
module or system, and then progresses down to lower level modules and
subsystems. To put it differently, top down design looks at the whole system, and
then explodes it into subsystems, or smaller parts. A systems engineer or systems
analyst determines what the top level objectives are, and how they can be met. He
then divides the system into subsystems, i.e. breaks the whole system into logical,
manageable-size modules, and deals with them individually.
When you're doing black box testing of e-commerce web sites, you're most
efficient and effective when you're testing the sites' Visual Appeal, Contents, and
Home Pages. When you want to be effective and efficient, you need to verify that the
site is well planned. Verify that the site is customer-friendly. Verify that the choices
of colors are attractive. Verify that the choices of fonts are attractive. Verify that the
site's audio is customer friendly. Verify that the site's video is attractive. Verify that
the choice of graphics is attractive. Verify that every page of the site is displayed
properly on all the popular browsers. Verify the authenticity of facts. Ensure the site
provides reliable and consistent information. Test the site for appearance. Test the
site for grammatical and spelling errors. Test the site for visual appeal, choice of
browsers, consistency of font size, download time, broken links, missing links,
incorrect links, and browser compatibility. Test each toolbar, each menu item, every
window, every field prompt, every pop-up text, and every error message. Test every
page of the site for left and right justifications, every shortcut key, each control,
each push button, every radio button, and each item on every drop-down menu. Test
each list box, and each help menu item. Also check whether the command buttons
are grayed out when they're not in use.
Smoke testing is done in addition to formal testing. If smoke testing is carried out
by a skilled tester, it can
often find problems that are not caught during regular testing. Sometimes, if testing
occurs very early or very late in the software development cycle, this can be the only
kind of testing that can be performed. Smoke tests are, by definition, not exhaustive,
but, over time, you can increase your coverage of smoke testing. A common practice
at Microsoft, and some other software companies, is the daily build and smoke test
process. This means, every file is compiled, linked, and combined into an executable
file every single day, and then the software is smoke tested. Smoke testing
minimizes integration risk, reduces the risk of low quality, supports easier defect
diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but
should expose any major problems. Smoke testing should be thorough enough that,
if it passes, the tester can assume the product is stable enough to be tested more
thoroughly. Without smoke testing, the daily build is just a time wasting exercise.
Smoke testing is the sentry that guards against any errors in development and
future problems during integration. At first, smoke testing might be the testing of
something that is easy to test. Then, as the system grows, smoke testing should
expand and grow, from a few seconds to 30 minutes or more.
• Difference number 4: "Smart monkeys" are valuable for load and stress
testing, but not very valuable for smoke testing, because they are too
expensive for smoke testing.
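To make the idea concrete, here is a minimal, hedged sketch of a fail-fast smoke test runner in TypeScript; the individual checks are hypothetical stand-ins for whatever major functions your build exposes:

// A smoke test exercises only the major functions of the build,
// in minutes rather than hours, and fails fast on the first problem.
const smokeChecks: Array<[string, () => boolean]> = [
  ["application starts", () => true],   // stand-in for a real launch check
  ["login page loads",   () => true],   // stand-in for a real HTTP check
  ["database reachable", () => true],   // stand-in for a real DB ping
];

let stable = true;
for (const [name, check] of smokeChecks) {
  if (!check()) {
    console.log(`SMOKE FAIL: ${name} - reject today's build`);
    stable = false;
    break;
  }
}
if (stable) console.log("Smoke test passed - build is testable");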
• Reason number 2: Having a test strategy does satisfy one important step in
the software testing process.
• Reason number 3: The test strategy document tells us how the software
product will be tested.
• Reason number 7: The test strategy is decided first, before lower level
decisions are made on the test plan, test design, and other testing issues.
• Reason number 2: We create a test plan because it can and will help people
outside the test group to understand the why and how of product validation.
• Reason number 7: We create a test plan because one of the outputs of creating a test strategy is an approved and signed-off test plan document.
• Reason number 10: We create a test plan document because test plans
should be documented, so that they are repeatable.
greatly simplify the test steps to be executed. But writing such test cases is time consuming, and project deadlines often prevent us from going that route.
Often the lack of enough time for testing is the reason bugs appear in the field. However, even with ample time to catch the "most important bugs", bugs still surface with amazing spontaneity. The challenge is that developers do not seem to know how to avoid providing the many opportunities for bugs to hide, and testers do not seem to know where the bugs are hiding.
What is a parameter?
In software or software testing, a parameter is a named value that is passed to a function, procedure, or program. Parameters let the same piece of code operate on different inputs: the caller supplies the actual values (the arguments), and the code refers to them through the parameter names.
What is a constant?
In software or software testing, a constant is a meaningful name that
represents a number, or string, that does not change. Constants are variables whose value remains the same, i.e. constant, throughout the execution of a program. Why
do developers use constants? Because if we have code that contains constant values
that keep reappearing, or, if we have code that depends on certain numbers that are
difficult to remember, we can improve both the readability and maintainability of our
code, by using constants. To give you an example, let's suppose we declare a
constant and we call it Pi. We set it to 3.14159265 and use it throughout our code.
Constants, such as Pi, as the name implies, store values that remain constant
throughout the execution of our program. Keep in mind that, unlike variables which
can be read from and written to, constants are read-only variables. Although
constants resemble variables, we cannot modify or assign new values to them, as we
can to variables. But we can make constants public, or private. We can also specify
what data type they are.
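As an illustrative sketch in TypeScript (the circleArea helper is hypothetical), the Pi example above might look like this:

// PI is a read-only constant: it can be read anywhere in the module,
// but assigning a new value to it is a compile-time error.
const PI: number = 3.14159265;

// Hypothetical helper that reuses the constant for readability.
function circleArea(radius: number): number {
  return PI * radius * radius;
}

// PI = 3.14;  // error: cannot assign to 'PI' because it is a constant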
The requirements test matrix is a table, where requirement descriptions are put in
the rows of the table, and the descriptions of testing efforts are put in the column
headers of the same table. The requirements test matrix is similar to the
requirements traceability matrix, which is a representation of user requirements
aligned against system functionality. The requirements traceability matrix ensures
that all user requirements are addressed by the system integration team and
implemented in the system integration effort. The requirements test matrix is a
representation of user requirements aligned against system testing. Like the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
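As a minimal illustration (the requirements and test case names below are hypothetical), a requirements test matrix might look like this:

Requirement                        | TC-01 | TC-02 | TC-03
R1: User can log in                |   X   |       |   X
R2: User can reset password        |       |   X   |
R3: Session expires after timeout  |       |   X   |   X

An empty row signals a requirement with no test coverage; an empty column signals a test case not traced to any requirement.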
Reliability testing gathers raw failure time data for product life analysis. The purpose of reliability testing is to
determine product reliability, and to determine whether the software meets the
customer's reliability requirements. In the system test phase, or after the software is
fully developed, one reliability testing technique we use is a test/analyze/fix
technique, where we couple reliability testing with the removal of faults. When we
identify a failure, we send the software back to the developers, for repair. The
developers build a new version of the software, and then we do another test
iteration. We track failure intensity (e.g. failures per transaction, or failures per hour)
in order to guide our test process, and to determine the feasibility of the software
release, and to determine whether the software meets the customer's reliability
requirements.
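A minimal sketch of the failure intensity tracking described above (TypeScript; the numbers are hypothetical):

// Failure intensity = observed failures / execution time. Comparing it
// across test iterations shows whether reliability is improving after
// each test/analyze/fix cycle.
function failureIntensity(failures: number, hours: number): number {
  return failures / hours;
}

const iteration1 = failureIntensity(12, 40); // 0.30 failures per hour
const iteration2 = failureIntensity(5, 40);  // 0.125 failures per hour
console.log(iteration2 < iteration1 ? "reliability improving" : "investigate");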
What is verification?
Verification ensures that the product is built according to its requirements and design specifications. It typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications, and it can be done with checklists, issues lists, walk-throughs, and inspection meetings.
What is validation?
Validation ensures that functionality, as defined in requirements, is the
intended behavior of the product; validation typically involves actual testing and
takes place after verifications are completed.
What is a walk-through?
A walk-through is an informal meeting for evaluation or informational
purposes. A walk-through is also a process at an abstract level. It's the process of
inspecting software code by following paths through the code (as determined by
input conditions and choices made along the way). The purpose of code walk-
through is to ensure the code fits its purpose. Walk-throughs also offer opportunities
to assess an individual's or team's competency.
What is an inspection?
An inspection is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including a moderator, a reader (who may be the author of whatever is being reviewed), and a recorder who makes notes on the document. The
subject of the inspection is typically a document, such as a requirements document
or a test plan. The purpose of an inspection is to find problems and see what is
missing, not to fix anything. The result of the meeting should be documented in a
written report. Attendees should prepare for this type of meeting by reading through
the document, before the meeting starts; most problems are found during this
preparation. Preparation for inspections is difficult, but is one of the most cost
effective methods of ensuring quality, since bug prevention is more cost effective
than bug detection.
What is quality?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. Quality is, however, a subjective term: it depends on who the customer is and their overall influence in the scheme of things.
Why does software have bugs? Among the common reasons:
• Changing requirements: the customer may not understand the effects of changes, or may understand them but request them anyway. Changes require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.
• Bug tracking: the complexity of keeping track of changes can itself result in errors.
• Time pressures: scheduling of software projects is not easy and often requires a lot of guesswork, and when deadlines loom and the crunch comes, mistakes will be made.
• Poorly documented code: code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code; sometimes developers get kudos for quickly turning out code, feel they cannot have job security if everyone can understand the code they write, or believe that if the code was hard to write, it should be hard to read.
• Software development tools: visual tools, class libraries, compilers, and scripting tools can introduce their own bugs, and poorly documented tools can create additional bugs.
Suppose a mythical web designer decides that fun with JavaScript and Flash is more important than backward compatible design, or that he doesn't have the resources to maintain multiple styles of backward compatible web design. This decision of his will inconvenience some users, because
some of the earlier versions of Internet Explorer and Netscape will not display his
web pages properly, as there are some serious improvements in the newer versions
of Internet Explorer and Netscape that make the older versions of these browsers
incompatible with, for example, DHTML. This is when we say, "This design doesn't
continue to work with earlier versions of browser software. Therefore our mythical
designer's web design is not backward compatible". On the other hand, if the same
mythical web designer decides that backward compatibility is more important than
fun, or, if he decides that he has the resources to maintain multiple styles of
backward compatible code, then no user will be inconvenienced. No one will be
inconvenienced, even when Microsoft and Netscape make some serious
improvements in their web browsers. This is when we can say, "Our mythical web
designer's design is backward compatible".
Introduction:
This article attempts to take a close look at the process and techniques in Regression
Testing.
If a piece of software is modified for any reason, testing needs to be done to ensure that it works as specified and that it has not negatively impacted any functionality it offered previously. This is known as Regression Testing.
Regression Testing plays an important role in any scenario where a change has been made to previously tested software code. Regression Testing is hence an important aspect of various software methodologies where software changes and enhancements occur frequently.
Any Software Development Project is invariably faced with requests for changing
Design, code, features or all of them.
Each change implies that more Regression Testing needs to be done to ensure that the system meets the project goals.
All this affects the quality and reliability of the system. Hence Regression Testing,
since it aims to verify all this, is very important.
Every time a change occurs one or more of the following scenarios may occur:
● More Functionality may be added to the system
● More complexity may be added to the system
● New bugs may be introduced
After the change the new functionality may have to be tested along with all the
original functionality.
With each change Regression Testing could become more and more costly.
To make the Regression Testing Cost Effective and yet ensure good coverage one or
more of the following techniques may be applied:
● Test Automation: If the test cases are automated, they may be executed using scripts after each change is introduced in the system. Executing test cases this way helps eliminate oversight and human error, and may also result in faster and cheaper execution of test cases. However, there is a cost involved in building the scripts.
● Selective Testing: Some teams choose to execute test cases selectively. They do not execute all the test cases during regression testing; they test only what they decide is relevant. This helps reduce the testing time and effort (a small sketch of such selection follows below).
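A hedged sketch of such selection in TypeScript (the module-to-test-case mapping is hypothetical):

// Map each module to the regression test cases that cover it; when a
// change lands, run only the test cases mapped to the changed modules.
const coverage: Record<string, string[]> = {
  login:   ["TC-01", "TC-02"],
  reports: ["TC-03", "TC-04", "TC-05"],
  billing: ["TC-06"],
};

function selectTests(changedModules: string[]): string[] {
  const selected = new Set<string>();
  for (const m of changedModules) {
    for (const tc of coverage[m] ?? []) selected.add(tc);
  }
  return [...selected];
}

console.log(selectTests(["login", "billing"])); // ["TC-01", "TC-02", "TC-06"]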
Since Regression Testing tends to verify the software application after a change has
been made everything that may be impacted by the change should be tested during
Regression Testing. Generally the following areas are covered during Regression
Testing:
● Create a Regression Test Plan: the test plan identifies focus areas, strategy, and test entry and exit criteria. It can also outline testing prerequisites, responsibilities, etc.
● Create Test Cases: Test Cases that cover all the necessary areas are important.
They describe what to Test, Steps needed to test, Inputs and Expected Outputs. Test
Cases used for Regression Testing should specifically cover the functionality
addressed by the change and all components affected by the change. The Regression
Test case may also include the testing of the performance of the components and the
application after the change(s) were done.
● Defect Tracking: As in all other testing levels and types, it is important that defects are tracked systematically; otherwise it undermines the testing effort.
Summary:
In this article we studied the importance of ‘Regression Testing’, its role and how it is
done.
What is client-server and web based testing and how to test these
applications
What is the difference between client-server testing and web based testing, and what are the things that we need to test in such applications?
Ans:
Projects are broadly divided into two types:
• 2 tier applications
• 3 tier applications
CLIENT-SERVER TESTING
This is done for 2 tier applications. The application launched on the front end has forms and reports which monitor and manipulate data.
E.g.: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder etc.
The back end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase.
WEB TESTING
This is done for 3 tier applications (developed for Internet / intranet / extranet). Here we have a browser, a web server, and a DB server.
The applications for the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, Cold Fusion, PHP etc. (all the manipulations are done on the web server with the help of these programs).
The DB server would have Oracle, SQL Server, Sybase, MySQL etc. (all data is stored in the database available on the DB server).
The types of tests which can be applied to this type of application are:
1. User interface testing for validation & user friendliness
2. Functionality testing to validate behaviors, inputs, error handling, outputs, manipulations, service levels, order of functionality, links, content of web pages & back-end coverage
3. Security testing
4. Browser compatibility
5. Load / stress testing
6. Interoperability testing
7. Storage & data volume testing
Some more points to clarify the difference between client-server, web, and desktop applications:
Desktop application:
1. Application runs in single memory (Front end and Back end in one place)
2. Single user only
Client/Server application:
1. Application runs on two or more machines
2. Application is menu-driven
3. Connected mode (connection exists until logout)
4. Limited number of users
5. Fewer network issues when compared to a web app
Web application:
1. Application runs on two or more machines
2. URL-driven
3. Disconnected mode (state less)
4. Unlimited number of users
5. Many issues like hardware compatibility, browser compatibility, version
compatibility, security issues, performance issues etc.
The main difference between the two kinds of application lies in how resources are accessed. In client-server, once a connection is made it stays connected until logout, whereas in web testing the HTTP protocol is stateless; this is where the logic of cookies comes in, which client-server applications do not have.
For a client-server application the users are well known, whereas for a web application any user can log in and access the content, and will use it as per his own intentions. So, for web applications there are always issues of security and compatibility.
I have covered what White box Testing is in a previous article. Here I will concentrate on Black box testing: its advantages, its disadvantages, and how Black box testing is performed, i.e. the black box testing techniques.
Black box testing treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal working of the "black box", or application.
Each testing method has its own advantages and disadvantages; there are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method. We need to cover the majority of test cases so that most of the bugs get discovered by black box testing.
Black box testing occurs throughout the software development and Testing life cycle
i.e. in Unit, Integration, System, Acceptance and regression testing stages.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases, based on experience, for the application paths where errors are likely to hide.
BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis:
1. Robustness testing - boundary value analysis plus values that go beyond the limits
2. Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1
3. Forces attention to exception handling
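The 4n + 1 count can be made concrete with a small sketch (TypeScript; the variable ranges are hypothetical):

interface VarRange { name: string; min: number; max: number; }

// Classic boundary value analysis: hold every variable at its nominal
// value, then let one variable at a time take min, min+1, max-1, max.
// For n variables this yields 4n + 1 test cases (the all-nominal case
// plus four boundary cases per variable).
function bvaCases(vars: VarRange[]): Record<string, number>[] {
  const base: Record<string, number> = {};
  for (const v of vars) base[v.name] = Math.round((v.min + v.max) / 2);
  const cases = [{ ...base }]; // the single all-nominal case
  for (const v of vars) {
    for (const value of [v.min, v.min + 1, v.max - 1, v.max]) {
      cases.push({ ...base, [v.name]: value });
    }
  }
  return cases;
}

// Two variables: 4 * 2 + 1 = 9 test cases.
console.log(bvaCases([
  { name: "day",   min: 1, max: 31 },
  { name: "month", min: 1, max: 12 },
]).length); // 9

For robustness testing, adding v.min - 1 and v.max + 1 to the inner list extends this to 6n + 1 cases that exercise exception handling.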
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain
of a program into classes of data from which test cases can be derived.
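As a hedged sketch, suppose an input field accepts ages 18 to 60 (the range is hypothetical); equivalence partitioning picks one representative value per class:

// Three equivalence classes for an "age" input with valid range 18-60:
// below the range (invalid), inside it (valid), above it (invalid).
// One representative value per class stands in for the whole class.
const acceptsAge = (age: number): boolean => age >= 18 && age <= 60;

const partitions = [
  { cls: "invalid: below range", representative: 10, expectAccepted: false },
  { cls: "valid: in range",      representative: 35, expectAccepted: true  },
  { cls: "invalid: above range", representative: 75, expectAccepted: false },
];

for (const p of partitions) {
  const ok = acceptsAge(p.representative) === p.expectAccepted;
  console.log(`${p.cls}: ${ok ? "PASS" : "FAIL"}`);
}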
Comparison Testing:
In this method, different independently developed versions of the same software are compared against each other.
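A minimal sketch of this back-to-back comparison, assuming two independently written implementations of the same hypothetical rounding specification:

// Two independent versions of the same "round half up" specification.
const roundHalfUpV1 = (x: number): number => Math.floor(x + 0.5);
const roundHalfUpV2 = (x: number): number => {
  const whole = Math.trunc(x);
  return x - whole >= 0.5 ? whole + 1 : whole;
};

// Feed both versions the same inputs; any disagreement flags a defect
// in at least one of them.
for (const input of [0.4, 0.5, 1.5, 2.49, 7.5]) {
  if (roundHalfUpV1(input) !== roundHalfUpV2(input)) {
    console.log(`Mismatch for input ${input}`);
  }
}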
What is BVT?
A Build Verification Test is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases are core functionality test cases that ensure the application is stable and can be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix.
BVT is also known as:
• Build validation
• Build acceptance
• BVTs are typically run on daily builds, and if the BVT fails, the build is rejected and a new build is released after the fixes are done.
• The advantage of BVT is that it saves the effort of a test team setting up and testing a build when major functionality is broken.
• Design BVTs carefully enough to cover basic functionality.
• Typically a BVT should not run for more than 30 minutes.
• BVT is a type of regression testing, done on each and every new build.
BVT primarily checks project integrity, i.e. whether all the modules are integrated properly. Module integration testing is very important when different teams develop project modules. I have heard of many cases of application failure due to improper module integration; in the worst cases, a complete project gets scrapped due to failure in module integration.
File 'check-in' means including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. to check that all new and modified files are included in the release, all file formats are correct, and every file version, language, and flag associated with each file is right.
These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.
Deciding which test cases to automate in the BVT task is tricky. Keep in mind that the success of a BVT depends on which test cases you include in it.
Here are some simple tips to include test cases in your BVT automation
suite:
Also, do not include modules in the BVT that are not yet stable. For features under development you can't predict expected behavior, as these modules are unstable, and you might already know of some failures in these incomplete modules before testing. There is no point using such modules or test cases in a BVT.
You can make this critical-functionality test case inclusion task simple by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.
Example: Test cases to be included in BVT for Text editor application (Some
sample tests only):
1) Test case for creating text file.
2) Test cases for writing something into text editor
3) Test case for copy, cut, paste functionality of text editor
4) Test case for opening, saving, deleting text file.
These are some sample test cases which can be marked as 'critical'; for every minor or major change in the application, these basic critical test cases should be executed. This task can easily be accomplished by BVT automation (a small sketch follows below).
BVT automation suites need to be maintained and modified from time to time, e.g. include test cases in the BVT when new stable project modules become available.
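A hedged sketch of what a tiny automated BVT suite for the text editor example above might look like in TypeScript; the editor object here is a hypothetical stub standing in for the real application under test:

// Each critical test case is a named check; the suite fails fast,
// because a single failure is enough to reject the build.
const editor = {                       // hypothetical application stub
  createFile: (name: string) => name.endsWith(".txt"),
  write:      (text: string) => text.length > 0,
  copyPaste:  (text: string) => text,
};

const bvtSuite: Array<{ name: string; run: () => boolean }> = [
  { name: "create text file",  run: () => editor.createFile("bvt.txt") },
  { name: "write into editor", run: () => editor.write("hello") },
  { name: "copy/cut/paste",    run: () => editor.copyPaste("abc") === "abc" },
];

let buildOk = true;
for (const check of bvtSuite) {
  if (!check.run()) {
    console.log(`BVT FAILED at: ${check.name} - reject the build`);
    buildOk = false;
    break;
  }
}
if (buildOk) console.log("BVT passed - release the build to the test team");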
Are you developing any Test plan or test strategy for your project? Have
you addressed all risks properly in your test plan or test strategy?
As testing is the last part of the project, it's always under pressure and time constraints. To save time and money you should be able to prioritize your testing work. How will you prioritize testing work? For this you should be able to judge which testing work is more important and which is less. How will you decide? Here comes the need for risk-based testing.
What is Risk?
"Risks are future uncertain events with a probability of occurrence and a potential for loss." Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.
In this article I will cover the types of risks. In the next articles I will try to focus on risk identification, risk management, and mitigation.
Risks are identified, classified, and managed before actual execution of the program. These risks are classified into different categories.
Categories of risks:
Schedule Risk:
Project schedules slip when project tasks and schedule release risks are not addressed properly. Schedule risks mainly affect the project, and finally the company economy, and may lead to project failure.
Schedules often slip due to the following reasons:
Budget Risk:
Operational Risks:
Risks of loss due to improper process implementation, failed systems, or some external events.
Causes of Operational risks:
Technical risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:
Programmatic Risks:
These are external risks beyond the operational limits; these uncertain risks are outside the control of the program.
These external events can be:
"Looking at the current scenario in the industry, it is commonly seen that testers are expected to have technical testing skills, and also either to come from the domain background or to have gathered domain knowledge, mainly for BFSI.
I would like to know why and when is this domain knowledge imparted to the tester
during the testing cycle?”
First of all I would like to introduce the three-dimensional testing career described by Danny R. Faught. There are three categories of skill that need to be judged before hiring any software tester. What are those three skill categories?
1) Testing skill
2) Domain knowledge
3) Technical expertise.
No doubt any tester should have the basic testing skills like manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. But would you say that this much testing is sufficient? Would you release the product on the basis of this much testing? Certainly not. You will certainly want a domain expert to look at the product before it goes to market.
While testing any application you should think like an end user. But every human being has limitations, and one can't be an expert in all three of the dimensions mentioned above. (If you are expert in all of the above skills, then please let me know ;-)) So you can't be sure that you think 100% like the end user who is going to use your application. A user who is going to use your application may have a good understanding of the domain he is working in. You need to balance all these skill activities so that all product aspects get addressed.
Nowadays you can see that the professionals being hired in different companies are more often domain experts than technical experts. The current software industry is also seeing
a good trend that many professional developers and domain experts are moving into
software testing.
We can observe one more reason why domain experts are most wanted! When you hire fresh engineers who are just out of college, you cannot expect them to compete with experienced professionals. Why? Because an experienced professional certainly has the advantage of domain and testing experience, has a better understanding of different issues, and can deliver the application better and faster.
Here are some of the examples where you can see the distinct edge of
domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing
How will you test such applications without knowledge of the specific domain? Are you going to test BFSI applications (Banking, Financial Services and Insurance) just for UI or functionality or security or load or stress? You should know the user requirements in banking, the working procedures, the commerce background, exposure to brokerage etc., and should test the application accordingly; only then can you say that your testing is enough. This is where subject-matter experts come in.
When I know the functional domain better, I can write and execute more test cases and can effectively simulate end user actions, which is distinctly a big advantage.
• Testing skill
• Bug hunting skill
• Technical skill
• Domain knowledge
• Communication skill
• Automation skill
• Some programming skill
• Quick grasping
• Ability to Work under pressure …
That is going to be a huge list. So you will certainly ask: do I need to have all these skills? It depends on you. You can stick to one skill, or be expert in one skill and have a good understanding of the others, or take a balanced approach to all the skills. This is a competitive market and you should definitely take advantage of it. Make sure to be expert in at least one domain before making any move.
There is no specific stage where you need this domain knowledge; you need to apply it in each and every phase of the software testing life cycle.
How to get all your bugs resolved without any 'Invalid bug' label?
I hate the "Invalid bug" label from developers for bugs reported by me; do you? I think every tester should try to get 100% of his/her bugs resolved. This requires bug reporting skill. See my previous post on "How to write a good bug report? Tips and Tricks" to report bugs professionally and without ambiguity.
Yes, 50% of bugs get marked as "invalid bugs" only due to the tester's incomplete testing setup. Let's say you found an ambiguity in the application under test, and you are now preparing the steps to report this ambiguity as a bug. But wait! Have you done enough troubleshooting before reporting this bug? Have you confirmed that it is really a bug?
Troubleshooting of:
An answer to the first question, "what's not working?", is sufficient for you to report the bug steps in the bug tracking system. Then why answer the remaining three questions? Think beyond your responsibilities. Act smart; don't be a person who only follows his routine steps and doesn't think outside of them. You should be able to suggest all possible solutions to resolve the bug, and the efficiency as well as the drawbacks of each solution. This will increase your respect in your team, and will also reduce the possibility of getting your bugs rejected - not due to this respect, but due to your troubleshooting skill.
Before reporting any bug, make sure it isn't your own mistake while testing: you may have missed an important flag to set, or you might not have configured your test setup properly.
Reasons of failure:
1) If you are using any configuration file for testing your application, then make sure this file is up to date as per the application requirements. Many times a global configuration file is used to pick or set some application flags; failing to maintain this file as per your software requirements will lead to malfunctioning of the application under test, and you can't report that as a bug.
2) Check whether your database is proper: a missing table is a common reason for an application not working properly.
I have a classic example of this: one of my projects queried many monthly user database tables to show user reports. The existence of each table was first checked in a master table (which maintained only the monthly table names), and then data was queried from the individual monthly tables. Many testers selected a big date range to see the user reports, but this often crashed the application with a SQL query error, because those tables were not present in the database of the test machine server. They reported it as a bug, and it subsequently got marked as invalid by the developers.
3) If you are working on an automation testing project, then debug your script twice before concluding that the application failure is a bug.
4) Check that you are not using invalid access credentials for authentication.
6) Check whether there is any other hardware issue that is not related to your application.
7) Make sure your application hardware and software prerequisites are correct.
8) Check whether all software components are installed properly on your test machine, and whether the registry entries are valid.
9) For any failure, look into the 'system event viewer' for details; you can trace many failure reasons from the system event log file.
10) Before starting to test, make sure you have uploaded all the latest version files to your test environment.
2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the
steps to reproduce the bug. Do not assume or skip any reproduction step. A bug described step by step is easy to reproduce and fix.
3) Be Specific:
Do not write an essay about the problem; be specific and to the point. Try to summarize the problem in minimum words yet in an effective way. Do not combine multiple problems, even if they seem similar; write a different report for each problem.
Platform: Mention the hardware platform where you found this bug, e.g. 'PC', 'Mac', 'HP', 'Sun' etc.
Operating system: Mention all operating systems where you found the bug.
Operating systems like Windows, Linux, Unix, SunOS, Mac OS. Mention the different
OS versions also if applicable like Windows NT, Windows 2000, Windows XP etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5: P1 as "fix the bug with highest priority" and P5 as "fix when time permits".
Severity:
This describes the impact of the bug.
Types of Severity:
Status:
When you are logging the bug in any bug tracking system then by default the bug
status is ‘New’.
Later on, the bug goes through various stages like Fixed, Verified, Reopen, Won't Fix etc. (see the detailed bug life cycle for more).
Assign To:
If you know which developer is responsible for the particular module in which the bug occurred, you can specify that developer's email address; otherwise keep it blank, and the bug will be assigned to the module owner, or the manager will assign it to a developer. Possibly add the manager's email address to the CC list.
URL:
The page URL on which the bug occurred.
Summary:
A brief summary of the bug, mostly in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields for the description.
These are the important fields in a bug report. You can also add "Report type" as one more field, describing the bug type.
1) Report the problem immediately: If you find a bug while testing, do not wait to write a detailed bug report later. Write the bug report immediately; this will ensure a good and reproducible bug report. If you decide to write the report later, chances are high that you will miss important steps.
2) Reproduce the bug three times before writing the bug report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If your bug is not reproducible every time, you can still file it, mentioning the periodic nature of the bug.
Conclusion:
No doubt your bug report should be a high quality document. Focus on writing good bug reports and spend some time on this task, because it is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is a primary responsibility of any tester. Your effort towards writing good bug reports will not only save company resources but also create a good relationship between you and the developers.
Writing an effective status report is as important as the actual work you did! How do you write an effective status report on your weekly work at the end of each week?
Here I am going to give some tips. The weekly report is important for tracking important project issues, project accomplishments, pending work, and milestone analysis. You can even use these reports to track team performance to some extent. From this report, prepare future actionable items according to priority, and make the list of next week's actionables.
Possible solution:
Issue resolution date:
You can mark these issues in red color; these are the issues that require management's help to resolve.
Next are the issues that do not hold the QA team back from delivering on time, but that management should be aware of. Mark these issues in yellow color. You can use the same template above to report them.
Project accomplishments:
Mark them in Green colour. Use below template.
Project:
Accomplishment:
Accomplishment date:
1) Pending deliverables (mark them in blue color): These are the previous week's deliverables, which should be released as soon as possible this week.
Project:
Work update:
Scheduled date:
Reason for extending:
2) New tasks:
List all of next week's new tasks here. You can use black color for this.
Project:
Scheduled Task:
Date of release:
C) Defect status:
Active defects:
List all active defects here with Reporter, Module, Severity, priority, assigned to.
Closed Defects:
List all closed defects with Reporter, Module, Severity, priority, assigned to.
Test cases:
List the total number of test cases written, test cases passed, test cases failed, and test cases yet to be executed.
This template should give you an overall idea of the status report. Don't ignore status reports: even if your managers are not forcing you to write them, they are most important for your future work assessment.
OK, I am asking too many questions without giving an answer to any of them. Well, each question mentioned above would require a separate post to address fairly. Here we will address, in short: how to hire the right candidates for software testing positions?
Companies or interviewers who are not serious about hiring the right candidates often end up hiring poor performers. Whatever the reason, the organization definitely loses, in terms of both revenue and growth.
If you need answers to these questions, here is an informative video from Pradeep Soundararajan, consulting tester of Satisfice Inc. in India. He explains the current situation of the software testing interview process in India and how interviewers go wrong in selecting the questions they ask candidates. A nice start to spreading awareness of the importance of software testing interviews.
Website Cookie Testing, Test cases for testing web application cookies?
We will first focus on what exactly cookies are and how they work. It will be easier to understand the test cases for testing cookies when you have a clear understanding of how cookies work, how cookies are stored on the hard drive, and how we can edit cookie settings.
What is a Cookie?
A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.
What if you want the previous history of the user's communication with the web server? You need to maintain the user state and the interaction between web browser and web server somewhere. This is where cookies come into the picture: they serve the purpose of maintaining user interactions with the web server.
The HTTP protocol used to exchange information files on the web is used to maintain the cookies. There are two types of HTTP protocol: stateless HTTP and stateful HTTP. The stateless HTTP protocol does not keep any record of previously accessed web page history, while the stateful HTTP protocol does keep some history of previous web browser and web server interactions; this is what cookies use to maintain the user interactions.
Whenever a user visits a site or page that is using a cookie, a small piece of code inside that HTML page (generally a call to some language script to write the cookie, e.g. cookies in JavaScript, PHP, or Perl) writes a text file on the user's machine, called a cookie.
Here is one example of the kind of code that is used to write a cookie and can be placed inside any HTML page:
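(The original snippet is not reproduced here; the following is a minimal sketch of the idea. The cookie name, value, and lifetime are hypothetical; document.cookie is the standard browser API for writing cookies, and the code is valid in both JavaScript and TypeScript.)

// Writes a cookie named "userId" that expires in 30 days.
const expires = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000);
document.cookie = "userId=12345; expires=" + expires.toUTCString() + "; path=/";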
When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the second visit of the same user to that domain. The expiration time is set while writing the cookie; this time is decided by the application that is going to use the cookie.
1) Session cookies: This cookie is active as long as the browser that invoked it is open. When we close the browser, the session cookie gets deleted. Sometimes a session timeout of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies: Cookies that are written permanently on the user's machine and last for months or years.
2) Personalized sites:
When a user visits certain pages, they are asked which pages they don't want to visit or display. The user's options get stored in a cookie, and as long as the user is online, those pages are not shown to him.
3) User tracking:
To track the number of unique visitors online at a particular time.
4) Marketing:
Some companies use cookies to display advertisements on user machines, and cookies control these advertisements. When and which advertisement should be shown? What are the interests of the user? Which keywords does he search for on the site? All these things can be maintained using cookies.
5) User sessions:
Cookies can track user sessions to particular domain using user ID and password.
Drawbacks of cookies:
1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to warn before writing any cookie, or has disabled cookies completely, then a site that depends on cookies will be completely disabled and cannot perform any operation, resulting in loss of site traffic.
3) Security issues:
Sometimes users' personal information is stored in cookies, and if someone hacks the cookie, the hacker can get access to that personal information. Even corrupted cookies can be read by different domains and lead to security issues.
4) Sensitive information:
Some sites may write and store your sensitive information in cookies, which should
not be allowed due to privacy concerns.
This should be enough to know what cookies are. If you want more cookie info, see the Cookie Central page.
The first obvious test case is to test whether your application is writing cookies properly on disk. You can also use a cookie tester application if you don't have any web application to test but want to understand the cookie concept for testing.
Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is stored in encrypted format.
3) Make sure there is no overuse of cookies on your site under test. Overuse of cookies will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
4) Disable cookies in your browser settings: If you are using cookies on your site, the site's major functionality will not work when cookies are disabled. Then try to access the web site under test and navigate through the site. See whether appropriate messages are displayed to the user, such as "For smooth functioning of this site, make sure that cookies are enabled in your browser". There should not be any page crash due to disabling the cookies. (Make sure that you close all browsers and delete all previously written cookies before performing this test.)
5) Accept/reject some cookies: The best way to check web site functionality is to not accept all cookies. If your web application writes 10 cookies, then randomly accept some and reject others, say accept 5 and reject 5. To execute this test case, set your browser options to prompt whenever a cookie is being written to disk; on this prompt window you can either accept or reject each cookie. Then try to access the major functionality of the web site and see whether pages crash or data gets corrupted.
6) Delete cookies: Allow the site to write the cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check the behavior of the pages.
7) Corrupt the cookies: Corrupting a cookie is easy. You know where cookies are stored, so manually edit the cookie in Notepad and change its parameters to some vague values: alter the cookie content, the name of the cookie, or the expiry date of the cookie, and see the site functionality. In some cases corrupted cookies allow other domains to read the data inside them; this should not happen in the case of your web
site's cookies. Note that cookies written by one domain, say rediff.com, can't be accessed by another domain, say yahoo.com, unless the cookies are corrupted and someone is trying to hack the cookie data.
8) Checking the deletion of cookies from your web application page: Sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case if you are testing an 'action tracking' web portal: an action tracking or purchase tracking pixel is placed on the action web page, and when any action or purchase occurs, the cookie written on disk gets deleted to avoid logging multiple actions from the same cookie. Check whether reaching your action or purchase page deletes the cookie properly and no more invalid actions or purchases get logged from the same user.
10) If your web application is using cookies to maintain the logged-in state of any user, then log in to your web application using some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, e.g. if the previous user ID is 100, make it 101, and press Enter. A proper access message should be displayed, and the user should not be able to see other users' accounts.
Have you performed software installation testing? How was the experience?
Well, installation testing (implementation testing) is quite an interesting part of the software testing life cycle.
Installation testing is like introducing a guest into your home: the new guest should be properly introduced to all the family members in order to make him feel comfortable. Installation of new software is quite like the above example.
If installation fails, our program will not work on that system; not only this, but it can leave the user's system badly damaged, and the user might have to reinstall the full operating system.
In that case, will you make a good impression on the user? Definitely not! Your first impression, and your chance to make a loyal customer, is ruined by incomplete installation testing.
What do you need to do for a good first impression? Test the installer appropriately, with a combination of both manual and automated processes, on different machines with different configurations. The major concern of installation testing is time: it requires a lot of time even to execute a single test case. If you are going to test a big application installer, then think about the time required to perform so many test cases on different configurations.
We will see different methods to perform manual installer testing and some basic guidelines for automating the installation process.
To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format this HDD with the most common or default file system, install the most common operating system (Windows) on it, and install some basic required components. Each time, create an image of this base HDD, and create the other configurations on top of this base drive. Make one set of each configuration, such as operating system and file format, to be used for further testing.
How can we use automation in this process? Make some systems dedicated to creating base images (use software like Norton Ghost to create exact images of an operating system quickly). This will save you tremendous time on each test case. For example, if the time to install one OS with the basic configuration is, say, 1 hour, then each test case on a fresh OS will require an hour or more; but restoring an image of the OS hardly requires 5 to 10 minutes, saving you approximately 50 minutes per test case.
You can use one operating system for multiple installation attempts, each time uninstalling the application and preparing the base state for the next test
case. Be careful here: your uninstallation program should already have been tested and should be working fine.
installation, then use the same flow diagram in reverse order to test the uninstallation of all the installed files on disk.
4) Use flow diagrams to automate the testing effort; it is very easy to convert the diagrams into automated scripts.
5) Test the installer scripts used for checking the required disk space. If the installer reports a required disk space of 1 MB, then make sure exactly 1 MB is used, and flag it as an error if more disk space is utilized during installation.
6) Test the disk space requirement on different file system formats: FAT16 will require more space than the more efficient NTFS or FAT32 file systems.
7) If possible set a dedicated system for only creating disk images. As said above
this will save your testing time.
9) Try to automate the routine of checking the number of files to be written on disk. You can maintain the list of files to be written in an Excel sheet and give this list as input to an automated script that checks each and every path to verify correct installation.
11) Forcefully break the installation process in between. Observe the behavior of the system and whether it recovers to its original state without any issues. You can test this "break of installation" at every installation step.
12) Disk space checking: This is a crucial check in the installation testing scenario. You can choose different manual and automated methods to do this check (a small sketch follows after this list). In the manual method, check the free disk space available on the drive before
installation and the disk space reported by the installer script, to check whether the installer is calculating and reporting disk space accurately. Check the disk space after the installation to verify accurate usage of installation disk space. Run various combinations of disk space availability, using tools that automatically fill the disk during installation, and check system behavior in low disk space conditions while installing.
13) As you check installation, you can test uninstallation too. Before each new iteration of installation, make sure that all the files written to disk are removed after uninstallation. Sometimes the uninstallation routine removes files only from the last upgraded installation, keeping the old version's files untouched. Also check the reboot option after uninstallation, both manually and by forcing no reboot.
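The disk space check from points 5 and 12 can be sketched as a small TypeScript helper; the measurements are assumed to come from whatever disk tool you use:

// Compare the installer's reported space requirement against the space
// actually consumed (free space before install minus free space after).
function verifyDiskUsage(reportedMB: number, freeBeforeMB: number,
                         freeAfterMB: number, toleranceMB = 0): boolean {
  const usedMB = freeBeforeMB - freeAfterMB;
  if (usedMB > reportedMB + toleranceMB) {
    console.log(`FLAG: installer reported ${reportedMB} MB but used ${usedMB} MB`);
    return false;
  }
  return true;
}

verifyDiskUsage(1, 5000, 4998); // flags: reported 1 MB, actually used 2 MB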
How does a product developer define quality? The product meets the customer requirements.
How does a customer define quality? The required functionality is provided in a user-friendly manner.
These are some quality definitions from different perspectives. Now let's see how one can measure some quality attributes of a product or application.
The following factors are used to measure software development quality. Each attribute can be used to measure the product's performance. These attributes can be used for quality assurance as well as quality control: quality assurance activities are oriented towards preventing the introduction of defects, while quality control activities are aimed at detecting defects in products and services.
Reliability
Measures whether the product is reliable enough to sustain any condition and gives consistently correct results. Product reliability is measured in terms of the working of the project under different working environments and different conditions.
Maintainability
Different versions of the product should be easy to maintain. For development, it should be easy to add code to the existing system and easy to upgrade for new features and new technologies from time to time. Maintenance should be cost effective and easy: the system should be easy to maintain while correcting defects or making changes in the software.
Usability
This can be measured in terms of ease of use. The application should be user friendly, easy to learn, and simple to navigate.
The system must be:
Portability
This can be measured in terms of Costing issues related to porting, Technical issues
related to porting, Behavioral issues related to porting.
Correctness
The application should be correct in terms of its functionality, the calculations used internally,
and the navigation. This means the application should adhere to the functional requirements.
Efficiency
A major system quality attribute, measured in terms of the time required to complete any task given to the system. For example, the system should utilize processor capacity, disk space, and memory efficiently. If a system is consuming all the available resources, the user will get degraded performance, failing the system on efficiency; and if a system is not efficient, it cannot be used in real time applications.
Integrity or security
Integrity comes with security. System integrity or security should be sufficient to prevent unauthorized access to system functions, prevent information loss, ensure that the software is protected from virus infection, and protect the privacy of data entered into the system.
Testability
The system should be easy to test and find defects in. If required, it should be easy to divide into different modules for testing.
Flexibility
The system should be flexible enough to modify, adaptable to other products with which it needs to interact, and easy to interface with other standard third-party components.
Reusability
Software reuse is a cost-efficient and time-saving way to develop. Different code libraries and classes should be generic enough to be used easily in different application modules. Dividing the application into different modules allows the modules to be reused across the application.
Interoperability
Interoperability means it should be easy for the product to exchange data or services with other systems. Different system modules should work on different operating system platforms, different databases, and different protocol conditions.
By applying the above quality attribute standards, we can determine whether a system meets the requirements of quality or not. As specified above, all these attributes can be used for both quality assurance and quality control.
This can be a big debate. Developers testing their own code - what will the testing output be? All happy endings! Yes, the person who develops the code generally sees only the happy paths of the product and doesn't want to go into much detail.
Optimistic developers: yes, I wrote the code and I am confident it's working properly. No need to test this path, no need to test that path, as I know it's working properly. And right here developers skip the bugs.
Developer vs. tester: A developer always wants to see his code working properly, so he will test it to check that it works correctly. But do you know why a tester tests the application? To make it fail in any way, and the tester will surely test how the application does not work correctly. This is the main difference between developer testing and tester testing.
This all applies to a developer who is a good tester! But most developers consider testing a painful job; even though they know the system well, due to negligence they tend to skip many testing paths, as it's a very painful experience for them. If developers find errors in their code in unit testing, it is comparatively easier to fix, as the code is fresh to them, rather than getting the bug from testers two or three days later. But this is only possible if the developer is interested in doing that much testing.
It's the tester's responsibility to make sure each and every path is tested. Testers should ideally give importance to every small detail, to verify that the application is not breaking anywhere.
Developers, please don't review your own code. You will generally overlook the issues in your code, so give it to others for review.
Conclusion
So, in short, there is no problem if developers do the basic unit testing and basic verification testing. Developers can test the few exceptional conditions they know are critical and should not be missed. But there are some great testers out there: throw the build to the test team and don't waste their time or yours. For the success of any project there should be an independent testing team validating your applications. After all, it's our (testers') responsibility to make the 'baby' smarter!!
I will extract only the points related to software testing. As a software tester, keep these simple points in mind:
Share everything:
If you are an experienced tester on any project, then help the new developers on your project. Some testers have a habit of keeping known bugs hidden until they get
implemented in code, and then writing a big defect report on them. Don't try to just pump up your bug count; share everything with the developers.
Build trust:
Let the developers know about any bug you find in the design phase. Do not log the bug repeatedly with small variations just to pump up the bug count. Build trust in the developer-tester relationship.
Remember to flush:
Like toilets, all software needs a flush at some point. While doing performance testing, remember to flush the system cache.
It's every tester's question: how to be a good tester? Apart from technical knowledge and testing skills, a tester should have some personal-level skills which will help build a good rapport in the testing team.
What are these abilities and skills which make a tester a good tester? Well, I was reading Dave Whalen's article "Ugly Baby Syndrome!" and found it very interesting. Dave compares software developers to parents who deliver a baby (the software) with countless efforts. Naturally the product managers, architects, and developers spend countless hours developing the application for the customer. Then they show it to us (testers) and ask: "How is the baby (application)?" And testers often have to tell them that they have an ugly baby (an application with bugs!).
Testers don't want to tell them that they have an ugly baby, but unfortunately it's our job. So the tester must convey the message to the developers effectively, without hurting them. How can this be done? Well, that is the skill of a good tester!
Here are the tips stated by Dave for handling such a delicate situation:
Be prepared:
A good message in the end: be prepared for everything! The worst things might not have happened till now, but they can happen at any moment in your career, so be ready to face them.
Some years ago many companies preferred not to have separate test engineers on the project team, but I have not seen this in the past 2-3 years of my career, as now all companies have a clear idea of the need for QA and test engineers. The QA and tester roles are now concrete and there is no confusion.
If managers and management remove this inferiority thinking from their minds, they can hire these gifted testers in their organization. Such testers can do complex jobs well, can find complex bugs, and furthermore can add procedures to the way routine jobs are done, in order to make them more structured.
In this tutorial you will learn about effective software testing: how we measure the 'effectiveness' of software testing, the steps to effective software testing, and coverage, test planning, and process.
A 1994 study in the US revealed that only about “9% of software projects were successful”.
A large number of projects upon completion do not have all the promised features or
they do not meet all the requirements that were defined when the project was kicked
off.
Whether you are part of a team that is building a bookkeeping application or software that runs a power plant, you cannot afford to have less-than-reliable software.
Unreliable software can severely hurt businesses and endanger lives, depending on the criticality of the application. Even the simplest application, poorly written, can deteriorate the performance of your environment, such as the servers and the network, and thereby cause an unwanted mess.
Software testing plays a crucial role in ensuring software application reliability and project success.
Everything can and should be tested.
The effectiveness of testing can be measured by the degree of success in achieving the goals below.
A) Coverage:
• All the scenarios that can occur when using the software application
• Each business requirement that was defined for the project
• Specific levels of testing should cover every line of code written for the
application
There are various levels of testing which focus on different aspects of the software
application. The often-quoted V model best explains this:
• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing
The goal of each testing level is slightly different, thereby ensuring overall project reliability.
• Unit testing should ensure each and every line of code is tested.
There are various types of System Testing possible which test the various aspects of
the software application.
Having followed the above steps for the various levels of testing, the product is rolled out. It is not uncommon to see various bugs/defects even after the product is released to production. An effective testing strategy and process helps to minimize or eliminate these defects. The extent to which it eliminates these post-production defects (design defects, coding defects, etc.) is a good measure of the effectiveness of the testing strategy and process.
As the saying goes - 'the proof of the pudding is in the eating'
Summary:
The success of the project and the reliability of the software application depend a lot
on the effectiveness of the testing effort. This article discusses “What is effective
Software Testing?”
Reference: http://www.standishgroup.com/sample_research/chaos_1994_1.php
In this tutorial you will learn about unit testing: the various levels of testing, the various types of testing based upon the intent of testing, how unit testing fits into the Software Development Life Cycle, unit testing tasks and steps, what a unit test plan is, what a test case is (with a sample), and the steps to effective unit testing.
• Unit Testing
• Integration Testing
• System Testing
There are various types of testing based upon the intent of testing such as:
• Acceptance Testing
• Performance Testing
• Load Testing
• Regression Testing
How does Unit Testing fit into the Software Development Life Cycle?
This is the first and the most important level of testing. As soon as the programmer develops a unit of code, the unit is tested for various scenarios. It is much more economical to find and eliminate bugs early on, as the application is being built; hence unit testing is the most important of all the testing levels. As the software project progresses, it becomes more and more costly to find and fix the bugs.
What is a Unit Test Plan?
This document describes the test plan, in other words, how the tests will be carried out.
It will typically include the list of things to be tested, roles and responsibilities, prerequisites to begin testing, the test environment, assumptions, what to do after a test is successfully carried out, what to do if a test fails, a glossary, and so on.
What is a Test Case?
Simply put, a Test Case describes exactly how the test should be carried out.
For example the test case may describe a test as follows:
Step 1: Type 10 characters in the Name Field
Step 2: Click on Submit
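A test case like this can also be expressed as an automated check. Below is a minimal sketch in Python’s unittest; the validate_name() function is a hypothetical stand-in for the Name field logic, assumed to accept names of up to 10 characters:

    import unittest

    # Hypothetical validation logic behind the Name field.
    def validate_name(name: str) -> bool:
        """Accept names of 1 to 10 characters (assumed specification)."""
        return 1 <= len(name) <= 10

    class NameFieldTest(unittest.TestCase):
        def test_ten_characters_accepted(self):
            # Step 1: type 10 characters in the Name field (simulated here).
            # Step 2: submit; a 10-character name should be accepted.
            self.assertTrue(validate_name("a" * 10))

        def test_eleven_characters_rejected(self):
            # Just above the limit: an 11-character name should be rejected.
            self.assertFalse(validate_name("a" * 11))

    if __name__ == "__main__":
        unittest.main()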
1) Documentation: Early on, document all the test cases needed to test your code. A lot of times this task is not given due importance. Document the test cases, the actual results of executing them, and the response time of the code for each test case. There are several important advantages if the test cases and their actual execution are well documented.
2) What should be tested when unit testing: A lot depends on the type of program or unit that is being created. It could be a screen, a component, or a web service. Broadly, the following aspects should be considered:
a. For a UI screen, include test cases to verify all the screen elements that need to appear on the screen
b. For a UI screen, include test cases to verify the spelling/font/size of all the “labels” or text that appears on the screen
c. Create test cases such that every line of code in the unit is tested at least once in a test cycle
d. Create test cases such that every condition, in the case of “conditional statements”, is tested once
e. Create test cases to test the minimum/maximum range of data that can be entered; for example, what is the maximum “amount” that can be entered, and what is the minimum? (See the sketch after this list.)
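As a sketch of points c, d, and e above, the tests below exercise every branch of a small, hypothetical amount-validation routine and probe its minimum/maximum boundaries:

    import unittest

    # Hypothetical routine with one conditional branch per business rule.
    def check_amount(amount: int) -> str:
        if amount < 1:            # condition 1: below the minimum
            return "too small"
        elif amount > 10000:      # condition 2: above the maximum
            return "too large"
        else:                     # default branch
            return "ok"

    class AmountBoundaryTest(unittest.TestCase):
        def test_every_condition_once(self):
            # One input per branch covers every line and condition (points c, d).
            self.assertEqual(check_amount(0), "too small")
            self.assertEqual(check_amount(10001), "too large")
            self.assertEqual(check_amount(500), "ok")

        def test_min_max_boundaries(self):
            # Exact boundary values for the min/max range (point e).
            self.assertEqual(check_amount(1), "ok")
            self.assertEqual(check_amount(10000), "ok")

    if __name__ == "__main__":
        unittest.main()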
3) Automate where necessary: Time pressure and pressure to get the job done may result in developers cutting corners in unit testing. Sometimes it helps to write scripts that automate a part of the unit testing. This can help ensure that the necessary tests are done and can save the time required to perform them.
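For instance, a small table-driven script can run the test inputs against a unit and record the actual results and response times automatically. This is only a sketch; check_amount() is the hypothetical routine from the previous example, assumed here to live in a module named amount_checks:

    import time
    from amount_checks import check_amount   # hypothetical unit under test

    # Table-driven unit tests: (input, expected result) pairs.
    CASES = [(0, "too small"), (1, "ok"), (10000, "ok"), (10001, "too large")]

    def run_cases() -> None:
        for value, expected in CASES:
            start = time.perf_counter()
            actual = check_amount(value)              # execute the unit
            elapsed = time.perf_counter() - start
            status = "PASS" if actual == expected else "FAIL"
            # Log the documented fields: input, expected, actual, response time.
            print(f"{status}: input={value} expected={expected} "
                  f"actual={actual} time={elapsed:.6f}s")

    if __name__ == "__main__":
        run_cases()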
Summary:
“Unit Testing” is the first level of testing and the most important one. Detecting and fixing bugs early in the software life cycle helps reduce costly fixes later on. An effective unit testing process can and should be developed to increase software reliability and the credibility of the developer. The above article explains how unit testing should be done and the important points to consider when doing it.
Many new developers take the unit testing tasks lightly and realize the importance of
Unit Testing further down the road if they are still part of the project. This article
serves as a starting point for laying out an effective (Unit) Testing Strategy.
Introduction:
As we covered in various articles in the Testing series there are various levels of
testing:
How does Integration Testing fit into the Software Development Life Cycle?
Once unit-tested components are delivered, we integrate them together. These “integrated” components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.
Before we begin Integration Testing it is important that all the components have
been successfully unit tested.
As you may have read in the other articles in the series, this document typically
describes one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control
from one component to the other.
So the Integration Test cases should typically focus on scenarios where one
component is being called from another. Also the overall application functionality
should be tested to make sure the app works when the different components are
brought together.
The various integration test cases clubbed together form an integration test suite. Each suite may have a particular focus; in other words, different test suites may be created to focus on different areas of the application.
There are various factors that affect Software Integration and hence Integration
Testing:
2) Automate the build process where necessary: A lot of errors occur because the wrong version of a component was sent for the build or because components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
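As an illustrative sketch (the component names, versions, and paths are hypothetical), a deployment script can verify that the expected version of every component is present before staging a build:

    import shutil
    import sys
    from pathlib import Path

    # Hypothetical manifest: component file name -> expected version.
    MANIFEST = {"billing.jar": "2.3.1", "reports.jar": "2.3.1", "ui.war": "2.3.0"}
    SOURCE, TARGET = Path("components"), Path("build")

    def deploy() -> None:
        TARGET.mkdir(exist_ok=True)
        for name, version in MANIFEST.items():
            artifact = SOURCE / version / name
            if not artifact.exists():            # missing or wrong-version component
                sys.exit(f"Build aborted: {name} v{version} not found")
            shutil.copy(artifact, TARGET / name) # stage the verified component
        print("All components verified and staged for integration testing")

    if __name__ == "__main__":
        deploy()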
4) Defect Tracking: Integration Testing will lose its edge if the defects are not
tracked correctly. Each defect should be documented and tracked. Information
should be captured as to how the defect was fixed. This is valuable information. It
can help in future integration and deployment processes.
In this tutorial you will learn about the metrics used in testing: the product quality measures (1. customer satisfaction index, 2. delivered defect quantities, 3. responsiveness (turnaround time) to users, 4. product volatility, 5. defect ratios, 6. defect removal efficiency, 7. complexity of delivered product, 8. test coverage, 9. cost of defects, 10. costs of quality activities, 11. re-work, 12. reliability) and metrics for evaluating application system testing.
1. Customer satisfaction index
This index is surveyed before and after product delivery (and on an ongoing periodic basis, using standard questionnaires). The following are analyzed:
2. Delivered defect quantities
These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.
4. Product volatility
• Ratio of maintenance fixes (to repair the system & bring it into compliance
with specifications), vs. enhancement requests (requests by users to enhance
or change functionality)
5. Defect ratios
8. Test coverage
9. Cost of defects
11. Re-work
12. Reliability
Metric = Formula
• Test coverage = number of units (KLOC/FP) tested / total size of the system
• Number of tests per unit size = number of test cases per KLOC/FP
(KLOC represents thousands of lines of code; FP represents function points.)
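As a quick worked illustration of these two formulas (all figures below are made up):

    # Hypothetical project figures.
    total_size_kloc = 50.0   # total system size, in KLOC
    tested_kloc = 42.5       # size of the units actually tested
    test_cases = 900         # number of test cases written

    test_coverage = tested_kloc / total_size_kloc   # fraction of the system tested
    tests_per_kloc = test_cases / total_size_kloc   # test density

    print(f"Test coverage: {test_coverage:.0%}")    # Test coverage: 85%
    print(f"Tests per KLOC: {tests_per_kloc:.1f}")  # Tests per KLOC: 18.0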
This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. In the whole development process, testing consumes the highest amount of time, but most developers overlook this and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage.
The various phases involved in testing, with regard to the software development life
cycle are:
1. Requirements Stage
2. Test Plan
3. Test Design
4. Design Reviews
5. Code Reviews
6. Test Cases Preparation
7. Test Execution
8. Test Reports
9. Bugs Reporting
10. Reworking on Patches
11. Release to Production
Requirements Stage
Normally in many companies, developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved in this stage, since a tester thinks from the user’s side, whereas a developer does not. A separate panel should be formed for each module, comprising a developer, a tester, and a user. Panel meetings should be scheduled in order to gather everyone’s views. All the requirements should be documented properly for further use; this document is called the “Software Requirements Specification”.
Test Plan
Without a good plan, no work succeeds; successful work always rests on a good plan, and the software testing process is no exception. The test plan document is the most important document for bringing in a process-oriented approach. A test plan document should be prepared after the requirements of the project are confirmed, and it must contain the following information:
Test Design
Test design is done based on the requirements of the project and on whether manual or automated testing will be done. For automated testing, the different paths for testing have to be identified first. An end-to-end checklist has to be prepared covering all the features of the project.
The test design is represented pictorially and involves various stages, which can be summarized as follows. Once the stages are identified, the design is drawn. The test design is the most critical artifact, since it decides the test case preparation; it therefore governs the quality of the testing process. The test cases derived from the design should cover the following (see the sketch after this list):
• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
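For a field that accepts amounts from 1 to 100 (a hypothetical rule, with a hypothetical validate_amount() function), the four categories above might translate into checks like this sketch:

    # Hypothetical unit under test: accepts integer amounts from 1 to 100.
    def validate_amount(raw: str) -> bool:
        try:
            value = int(raw)
        except ValueError:
            return False                 # non-numeric input is rejected
        return 1 <= value <= 100

    # Positive scenario: a typical valid value is accepted.
    assert validate_amount("50")
    # Negative scenario: invalid input is rejected.
    assert not validate_amount("abc")
    # Boundary conditions: exact limits pass; just outside fails.
    assert validate_amount("1") and validate_amount("100")
    assert not validate_amount("0") and not validate_amount("101")
    # Real-world scenario: users often paste values with surrounding spaces.
    assert validate_amount(" 42 ".strip())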
Design Reviews
The software design is done in a systematic manner or using the UML language. The tester can review the design and suggest ideas and any modifications needed.
Code Reviews
Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to unit test it and must have his own unit test cases ready. Though a developer does the unit testing, a tester must also do it; the developer may overlook some of the minute mistakes in the code, which a tester may find.
Once the unit testing is completed and the code is released to QA, functional testing is done. Top-level testing is done at the beginning to find top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround.
The test reports should be documented properly, and the bugs have to be reported to the developer after the testing is completed.
Release to Production
Once the bugs are fixed, another release with the modified changes is given to QA, and regression testing is executed. Once QA assures the software, it is released to production; before the release, another round of top-level testing is done.
The testing process is iterative: once bugs are fixed, the testing has to be done repeatedly. Thus the testing process is a never-ending one.
In this tutorial you will learn the technical terms used in the testing world, from audit and acceptance testing to validation, verification, and testing.
Assertion Testing: A dynamic analysis technique which inserts assertions about the
relationship between program variables into the program code. The truth of the
assertions is determined as the program executes.
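For example, here is a minimal sketch in Python: an assertion about the relationship between program variables is inserted into the code and checked as the program executes (the transfer() function is hypothetical):

    def transfer(balance: float, amount: float) -> float:
        new_balance = balance - amount
        # Assertion about the relationship between program variables:
        # a withdrawal must never drive the balance negative.
        assert new_balance >= 0, "balance must not go negative"
        return new_balance

    transfer(100.0, 30.0)     # passes: the assertion holds during execution
    # transfer(100.0, 130.0)  # would raise AssertionError at run time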
Boundary Value Analysis: A selection technique in which test data are chosen to
lie along "boundaries" of the input domain [or output range] classes, data structures,
procedure parameters, etc. Choices often include maximum, minimum, and trivial
values or parameters.
Branch Coverage: A test coverage criteria which requires that for each decision
point each possible branch be executed at least once.
Boundary Value Testing: A testing technique using input values at, just below, and
just above, the defined limits of an input domain; and with input values causing
outputs to be at, just below, and just above, the defined limits of an output domain.
Branch Testing: Testing technique to satisfy coverage criteria which require that for
each decision point, each possible branch [outcome] be executed at least once.
Contrast with testing, path; testing, statement. See: branch coverage.
Cause Effect Graph: A Boolean graph linking causes and effects. The graph is
actually a digital-logic circuit (a combinatorial logic network) using a simpler notation
than standard electronics notation.
Cause Effect Graphing: This is a Test data selection technique. The input and
output domains are partitioned into classes and analysis is performed to determine
which input classes cause which effect. A minimal set of inputs is chosen which will
cover the entire effect set. It is a systematic method of generating test cases
representing combinations of conditions.
Code Inspection: A manual [formal] testing [error detection] technique where the
programmer reads source code, statement by statement, to a group who ask
questions analyzing the program logic, analyzing the code with respect to a checklist
of historically common programming errors, and analyzing its compliance with coding
standards.
Criticality: The degree of impact that a requirement, module, error, fault, failure, or
other item has on the development or operation of a system.
Error Guessing: This is a Test data selection technique. The selection criterion is to
pick values that seem likely to cause errors.
Error Seeding: The process of intentionally adding known faults to those already in
a computer program for the purpose of monitoring the rate of detection and removal,
and estimating the number of faults remaining in the program. Contrast with
mutation analysis.
Exhaustive Testing: Executing the program with all possible combinations of values
for program variables. This type of testing is feasible only for small, simple
programs.
Parallel Testing: Testing a new or an altered data processing system with the same
source data that is used in another system. The other system is considered as the
standard of comparison.
Path Testing: Testing to satisfy coverage criteria that each logical path through the
program be tested. Often paths through the program are grouped into a finite set of
classes. One path from each class is then tested.
Qualification Testing: Formal testing, usually conducted by the developer for the
consumer, to demonstrate that the software meets its specified requirements.
Quality Assurance: (1) The planned systematic activities necessary to ensure that
a component, module, or system conforms to established technical requirements. (2)
All actions that are taken to ensure that a development organization delivers
products that meet performance requirements and adhere to standards and
procedures. (3) The policy, procedures, and systematic actions established in an
enterprise for the purpose of providing and maintaining some degree of confidence in
data integrity and accuracy throughout the life cycle of the data, which includes
input, update, manipulation, and output. (4) The actions, planned and performed, to
provide confidence that all systems and components that influence the quality of the
product are working as expected individually and collectively.
Quality Control: The operational techniques and procedures used to achieve quality
requirements.
Review: A process or meeting during which a work product or set of work products,
is presented to project personnel, managers, users, customers, or other interested
parties for comment or approval. Types include code review, design review, formal
qualification review, requirements review, test readiness review.
Structural Testing: (1) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, and statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function.
Test case Generator: A software tool that accepts as input source code, test
criteria, specifications, or data structure definitions; uses these inputs to generate
test input data; and, sometimes, determines expected results.
Test Design: Documentation specifying the details of the test approach for a
software feature or combination of software features and identifying the associated
tests.
Test Documentation: Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, and test report.
Test Driver: A software module used to invoke a module under test and, often,
provide test inputs, control and monitor execution, and report test results.
Test Incident Report: A document reporting on any event that occurs during
testing that requires further investigation.
Test Log: A chronological record of all relevant details about the execution of a test.
Test Phase: The period of time in the software life cycle in which the components of
a software product are evaluated and integrated, and the software product is
evaluated to determine whether or not requirements have been satisfied.
Test Plan: Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning. See: test design, validation protocol.
Test Procedure: A formal document developed from a test plan that presents
detailed instructions for the setup, operation, and evaluation of the results for each
defined test.
Test Report: A document describing the conduct and results of the testing carried
out for a system or system component.
Test Result Analyzer: A software tool used to test output data reduction,
formatting, and printing.
Traceability Matrix: A matrix that records the relationship between two or more
products; e.g., a matrix that records the relationship between the requirements and
the design of a given software component. See: traceability, traceability analysis.
Unit Testing: Testing of a module for typographic, syntactic, and logical errors, for
correct implementation of its design, and for satisfaction of its requirements (or)
Testing conducted to verify the implementation of the design for one software
element; e.g., a unit or module; or a collection of software elements.
Usability: The ease with which a user can learn to operate, prepare inputs for, and
interpret outputs of a system or component.
What I want to do here, however, is state clearly one viewpoint of what the
distinction between positive and negative testing is. Then I want to play Devil's
Advocate and try to undermine that viewpoint by presenting an argument that others
have put forth - an alternative viewpoint. The real point of this will be to show that
sometimes trying to adhere too rigidly to conceptual terms like this can lead to a lot
of stagnating action. Read this section as a sort of extended argument that I am
having with myself as I come to grips with these terms.
Of course, the application should handle boundary problems. But what you are doing is seeing whether the boundary problem is not, in fact, handled. So if the program is supposed to give an error when the person types “101” into a field that should be between “1” and “100”, then it is valid if an error shows up. If, however, the application does not give an error when the user types “101”, then you have a problem. So really, negative testing and positive testing are the same kinds of things when you boil it right down.
Now, some make a distinguishing remark from what I said. I said the following:
Positive testing is that testing which attempts to show that a given module of
an application does what it is supposed to do.
Negative testing is that testing which attempts to show that the module does
not do anything that it is not supposed to do.
Playing the Devil's Advocate, others would change this around and say the following
is a better distinction:
Positive testing is that testing which attempts to show that a given module of an
application does not do what it is supposed to do.
Negative testing is that testing which attempts to show that the module does
something that it is not supposed to do.
Let us look at this slightly shifted point of view. By this logic, we would say
that most syntax/input validation tests are positive tests. Even if you give an invalid
input, you are expecting a positive result (e.g., an error message) in the hope of
finding a situation where the module either gives the wrong error message or
actually allows the invalid input. A negative test is, by this logic, more trying to get
the module to do something differently than it was designed to do. For example, if
you are testing a state transition machine and the state transition sequence is: State
1 -> State 2 -> State 3 -> State 4, then trying to get the module to go from State 2
to State 4, skipping State 3, is a negative test. So, negative testing, in this case, is
about thinking of how to disrupt the module and, by extension, positive testing is
examining how well/badly the module does its task.
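Here is a sketch of that state-machine example (the transition table is hypothetical): the negative test tries to force an undefined transition, jumping from State 2 directly to State 4:

    # Hypothetical state machine: only sequential transitions are legal.
    LEGAL_TRANSITIONS = {1: {2}, 2: {3}, 3: {4}}

    class StateMachine:
        def __init__(self) -> None:
            self.state = 1

        def transition(self, target: int) -> None:
            if target not in LEGAL_TRANSITIONS.get(self.state, set()):
                raise ValueError(f"illegal transition {self.state} -> {target}")
            self.state = target

    # Negative test: try to skip State 3 by jumping from State 2 to State 4.
    machine = StateMachine()
    machine.transition(2)
    try:
        machine.transition(4)    # the machine should reject this
        print("BUG: illegal transition was allowed")
    except ValueError:
        print("OK: the machine refused to skip State 3")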
Now, in response to this, I would agree that most people can see it this way, in terms of what the tester hopes to find. Testing pundits often tell testers to look for error, because if you look for success, you will often find success, even when there is error. By proxy, if you do not find an error and you have reliable test cases (that latter point is crucial), then a positive test case will show that the application did not, in fact, manifest that error. However, showing an error when it should have done so is an example of a “positive test” by the strict definition of that term. So in other words:
Positive Testing = (Not showing error when not supposed to) + (Showing error
when supposed to)
So if either of the situations in parentheses happens you have a positive test in
terms of its result - not what the test was hoping to find. The application did what it
was supposed to do. By that logic:
Negative Testing = (Showing error when not supposed to) + (Not showing error
when supposed to)
(Usually these situations crop up during boundary testing or cause-effect testing.)
Here if either of the situations in parentheses happens you have a negative test in
terms of its result - again, not what the test was hoping to find. The application did
what it was not supposed to do.
However, in both cases, these were good results because they showed you
what the application was doing and you were able to determine if it was working
correctly or not. So, by my original definitions, the testing is all about errors and
finding them. It is just how you are looking for those errors that make the
distinction. (Granted, how you are looking will often dictate what you are hoping to
find but since that is the case, it hardly makes sense to make a grand distinction
between them.) Now, regarding the point I made above, as a Devil's Advocate: "A
negative test is more trying to get the module to do something differently than it
was designed to do." We have to realize, I think, that what we call "negative testing"
is often about exercising boundary conditions - and those boundaries exist within the
context of design. Granted, that can be trying to get a value into a field that should not accept it. However, a good application should have had, from the requirements stage, provisions for invalid input. Thus really what you are testing here is (a) whether the provisions for invalid input exist and (b) whether they are working correctly. And, again, that is why this distinction, for me, between positive and negative testing, blurs.
Your negative test can turn into a positive test just by shifting the emphasis
of what you are looking for. To get the application to do something it is not designed
to do could be looked at as accepting invalid input. However, if you find that the
application does accept invalid input and does not, in fact, give a warning, I would
agree that is a negative test if it was specified in requirements that the application
should respond to invalid input. In this case the application did not, but then it was also not specified that it should. So, here, by strict requirements, did the application do what it was supposed to do? Technically, yes. If the requirements did not specify
differently, design was not put in place to handle the issue. Thus you are not testing
something outside the scope of design. Rather, you are testing something that was
not designed in the first place.
So, going back to one of the previous points, one thing we can probably all
agree on: it entirely depends on how you view a test. But are we saying the result of
the test determines whether it was a positive or negative test? If so, many would
disagree with that, indicating that it is the thinking behind the test that should be
positive or negative. In actuality, most experienced testers do not think in terms of
positive or negative, they think in terms of "what can I do to establish the level of
risk?" However, to this point, I would argue that if that is truly how the tester thinks
of things then all concepts of positive/negative go right out of the window (as I think
they mostly should anyway). Obviously you could classify the test design in terms of
negative or positive, but to some extent that is irrelevant. However, without getting
into that, I am not sure we are saying that the result of the test determines positivity
or negativity. What I said earlier, relative to my example, was that "in both cases,
these were good results because they showed you what the application was doing
and you were able to determine if it was working correctly or not." If the application
was behaving correctly or incorrectly, you still determined what the application was
actually doing and, as such, those are good results. Thus the result tells you about
the application and that is good (without recourse to terms like positive and
negative). If the result tells you nothing about how the application is functioning that
is, obviously, bad (and, again, this is without recourse to positive or negative).
We can apply the term "effective" to these types of test cases and we can say
that all test cases, positive or negative, should be effective. But what about the idea
of relying on the thinking behind the test? This kind of concept is just a little too
vague for me because people's thinking can be more or less different, even on this
issue, which can often depend on what people have been taught regarding these
concepts. As I showed, you can transform a positive test mentality into a negative test mentality just by thinking about the results of the test differently. And if
negative testing is just about "disrupting a module" (the Devil's Advocate position),
even a positive test can do that if there is a fault. However I am being a little flip
because with the notion of the thinking behind the test, obviously someone here
would be talking about intent. The intent is to disrupt the module so as to cause a fault, and that would constitute a negative test (by the Devil’s Advocate
position) while a positive test would not be trying to disrupt the module - even
though disruption might occur (again, by the Devil's Advocate position). The key
differentiator is the intent. I could sort of buy that but, then again, boundary testing
is an attempt to disrupt modules because you are seeing if the system can handle
the boundary violation. This can also happen with results. As I said: “Your negative test can turn into a positive test just by shifting the emphasis of what you are looking for.” That sort of speaks to the intention of what you are hoping to find, but
also how you view the problem. If the disruption you tried to cause in the module is,
in fact, handled by the code then you will get a positive test result - an error
message of some sort.
Now I want to keep on this point because, again, some people state that
negative testing is about exercising boundary conditions. Some were taught that this
is not negative testing; rather that this is testing invalid inputs, which are positive
tests - so it depends how you were taught. And figure that a boundary condition, if
not handled by the code logic, will potentially severely disrupt the module - which is
the point of negative testing according to some views of it. However, that is not the
intent here according to some. And yet while that was not the intent, that might be
the result. That is why the distinction, for me, blurs. But here is where the crux of the point lies for me: you can generally forget all about the intent of test case design for the moment and look at the distinction of what the result is in terms of a “positive
result" (the application showed me an error when it should have) and a "negative
result" (the application did not show me an error when it should have). The latter is
definitely a more negative connotation than the former, regardless of the intent of
the tester during design of the test case and that is important to realize because
sometimes our intentions for tests are changed by the reality of what exists and
what happens as a result of running the tests. So, in the case of intent for the
situation of the application not showing an error when it was supposed to, this is
simply a matter of writing "negative test cases" (if we stick with the term for a
moment) that will generate conditions that should, in turn, generate error messages.
But the point is that the intent of the test case is to see if the application does
not, in fact, generate that error message. In other words, you are looking for a
negative result. But, then again, we can say: “Okay, now I will check that the application does generate the error message that it should.” Well, in that case, we
are really just running the negative test case! Either way the result is that the error
either will or will not show up and thus the result is, at least to some extent,
determining the nature of the test case (in terms of negative or positive
connotation). If the error does not show up, the invalid input might break the
module. So is the breakdown this:
In that case, design should be in place to mitigate that problem. And, again, you are then positively testing.
Now, one can argue, "Well, it is possible that the user can try something that there
simply is no way to design around." Okay. But then I ask: "Like what?" If there is no
way you can design around it or even design something to watch for the event, or
have the system account for it, how do you write a valid test case for that? I mean,
you can write a test case that breaks the application by disrupting the module but --
you already knew that was going to happen. However, this is not as cut and dried as it sounds, as I am sure anyone reading this could point out. After all, in some cases maybe you are not sure that what you are writing as a test case will be disruptive. Ah, but that is
the rub. We just defined "negative testing" as trying to disrupt the module. Whether
we succeed or not is a different issue (and speaks to the result), but that was the
intent. We are trying to do something that is outside the bounds of design and thus
it is not so much a matter of testing for disruption as it is testing for the effects of
that disruption. If the effects could be mitigated, that must be some sort of design
that is mitigating them and then you are positively testing that mitigating influence.
As an example, a good test case for a word processor might be: “Turn off the computer to simulate a power failure while an unsaved document is present in the application.” Now, the idea here is that you might have some document-saving
feature that automatically kicks in when the application suddenly terminates, say via
a General Protection Fault (GPF). However, strictly speaking, powering down the
computer is different than a GPF. So here you are testing to see what happens if the
application is shut down via a power-off of the PC, which, let us say, the application
was not strictly designed to really handle. So my intent is to disrupt the module.
However, in this case, since I can state the negative condition, I can state a possible
design that could account for it. After all: we already know that the document will
not be saved because nothing was designed to account for that. But the crucial point
is that if nothing was designed into the system to account for the power-off of the
PC, then what are you really testing? You are testing that the application does what
the application does when a power-off occurs. But if nothing is designed to happen
one way or the other, then testing for disruption really does you no good. After all,
you know it is going to be disrupted. That is not in question. What is (or should be) in question is how you can handle that disruption, and then test how that handling works. Recall the Devil’s Advocate definition:
Positive testing is that testing which attempts to show that a given module
of an application does not do what it is supposed to do.
In this case of the power-down test case, we are not positive testing because we did
not test that the application did not do what it was supposed to do. The application
was not "supposed" to do anything because nothing was designed to handle the
power-down.
Negative testing is that testing which attempts to show that the module
does something that it is not supposed to do.
In the case of the power-down test case, we are also not negative testing by this
definition because the application, in not saving the document or doing anything
(since it was not designed to do anything in the first place), is not doing something
that it is not supposed to do. Again, the application is not "supposed" to do anything
since it was not designed to do anything in this situation.
Positive Testing = (Not showing error when not supposed to) + (Showing error when
supposed to)
I would have to loosen my language a little but, basically, the application was not
supposed to show an error and, in fact, did not do so in this case. But what if the
application was, in fact, supposed to handle that situation of a power-down? Let us
say the developers hooked into the API so that if a shut-down event was fired off,
the application automatically issues an error/warning and then saves the document
in a recovery mode format. Now let us say I test that and find that the application
did not, in fact, save the file. Consider again my quasi-definition/equation for
negative testing:
Negative Testing = (Showing error when not supposed to) + (Not showing error
when supposed to)
In this case I have done negative testing because the application was supposed to
issue an error/warning but did not. However, notice, that the test case is the same
exact test case. The intent of my testing was simply to test this aspect of the
application. The result of the test relative to the stated design is what determines if
the test was negative or positive by my definitions. Now, because I want to be
challenged on this stuff, you could also say: "Yes, but forget the document in the
word processor. What if the application gets corrupted because of the power-off?"
Let us say that the corruption is just part of the Windows environment and there is
nothing that can be done about it. Is this negative testing? By the Devil's Advocate
definition, strictly it is not, because remember by that definition: "Negative testing is
that testing which attempts to show that the module does something that it is not
supposed to do." But, in this case, the module did not do something (become
corrupted) that it was not supposed to do. This simply happened as a by-product of a
Windows event that cannot be handled. But we did, after all, try to disrupt the
module, right? So is it a negative test or not by the definition of disruption?
Incidentally, by my definition, it is not a negative test either. However, what is
common in all of what I have said is that the power-down test case is an effective
test case and this is the case regardless of whether you choose to connote it with a
"positive" or "negative" qualifier. Since that can be the case, then, for me, the use of
the qualifier is irrelevant.
But now let us consider another viewpoint from the Devil's Advocate and one
that I think is pretty good. Consider this example: An application takes mouse clicks
as input. The requirement is for one mouse click to be processed at a time, the user
hitting multiple mouse clicks will cause the application to discard anything but the
first. Any tester will do the obvious and design a test to hit multiple mouse clicks.
Now the application is designed to discard anything but the first, so the test could be
classified (by my definition) a negative one as the application is designed not to
process multiple mouse clicks. The negative test is to try to force the application to
process more than the first. BUT, I hear you say, this is an input validation test that tests that the application does discard multiple mouse clicks, therefore it is a positive test (again, by my definition), and I would then agree: it is a positive test. However, the tester might also design a test that overflows the input buffer with mouse clicks. Is that a negative test? Note that this situation is not covered explicitly in the requirements, and that is crucial to what I would call negative testing: very often it is the tester's "what if" analysis that designs negative tests. So, yes, it is a negative test, as you are forcing the application into a situation it may not have been
designed and/or coded for - you may not know whether it had or not. The actual
result of the test may be that the application stops accepting any more clicks on its
input buffer and causes an error message or it may crash.
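Here is a sketch of both tests against a hypothetical click handler with a bounded input buffer:

    class ClickHandler:
        """Hypothetical handler: processes only the first click, buffers the rest."""
        BUFFER_LIMIT = 10

        def __init__(self) -> None:
            self.processed, self.buffer = [], []

        def click(self, event_id: int) -> None:
            if not self.processed:
                self.processed.append(event_id)   # only the first click is processed
            elif len(self.buffer) >= self.BUFFER_LIMIT:
                raise OverflowError("input buffer full")
            else:
                self.buffer.append(event_id)      # extras are held, then discarded

    # Positive/input-validation test: only the first of several clicks is processed.
    h = ClickHandler()
    for i in range(5):
        h.click(i)
    assert h.processed == [0]

    # Negative ("what if") test: flood the buffer beyond its limit.
    h = ClickHandler()
    try:
        for i in range(50):
            h.click(i)
        print("Buffer absorbed the flood without error")
    except OverflowError:
        print("Buffer overflow raised an error; is that handled gracefully?")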
Now, having said all this, it makes me realize how my point starts to coalesce
with the Devil's Advocate. One way that might happen is via the use of the term
"error" that has gotten tossed around a lot. My language seemed too restrictive in
the sense that when I used the word "error" (as in "showing error when not
supposed to") I did not make it clear enough that I was not necessarily talking about
an error screen of some sort, but rather an error condition or a failure. With this, my
negative testing definition really starts to coalesce with the Devil's Advocate's
definition ("does something that it is not supposed to do"). I had said: "Negative
Testing = (Showing error when not supposed to) + (Not showing error when
supposed to)" and broadening my language more, really what I am saying is that the
application is either showing (doing) something it is not supposed to (which matches
the Devil's Advocate thought) but I was also saying that the application is not
showing (doing) something that it was supposed to. And to the Devil's Advocate that
latter is positive testing. Let me restate the two viewpoints somewhat:
Given the common Devil's Advocate position, I am having a hard time seeing the major distinction between positive and negative. The Devil's Advocate's original conception:
"Positive testing is that testing which attempts to show that a given module of an
application does NOT do what it is supposed to do. Negative testing is that testing
which attempts to show that the module does something that it is not supposed to
do."
To me, "doing something you are not supposed to" (the Devil's Advocate
negative test) and "not doing something you are supposed to" (the Devil's Advocate
positive test) are really two sides of the same coin or maybe just two ways of saying
the same thing. So let us say that our requirement is "do not process multiple mouse
clicks". In that case, "not doing something you are supposed to" (Devil's Advocate
positive test) means, in this case, "processing multiple mouse clicks". In other
words, the application should not process multiple mouse clicks. If it does, it is doing
something it is not supposed to. Likewise, "doing something you are not supposed to
do" (Devil's Advocate negative test) means, in this case, "processing multiple mouse
clicks". In other words, the application should not process multiple mouse clicks.
Either way, it is saying the same thing. So what we are testing for is "application not
processing multiple mouse clicks". It would seem that if the application does process
multiple mouse clicks it is both not doing what it is supposed to do (not processing
them) and doing something it is not supposed to do (processing them). The same
statement, just made different ways. Now, let me see if that works with my
definitions.
Again, the lemma is "do not process multiple mouse clicks". If the application
does this then it falls under lemma 1 of my negative test ("Doing something it was
not supposed to do.") If the application does not do this, it falls under lemma 1 of
my positive test ("not doing something it was not supposed to do"). Even with the
mouse click example we have two aspects:
Application designed not to process multiple mouse clicks
Application designed to process only one mouse click
Saying the same thing, and yet a subtle shift in emphasis if you want to go by the
positive and negative distinctions. The difference, however, is also whether you are
dealing with active design or passive design. In other words, does the application
actively make sure that only one mouse click is handled (by closing the buffer) or
does it simply process only one click but allow the buffer to fill up anyway? I like the idea of tying this whole thing in with "mitigating design factors". I think that we can encapsulate "intent" and "result" (both of which are important to test casing) by looking more at the efficiency and effectiveness demarcations. We have to consider the result; that is part of how you do test case effectiveness metrics as well as proactive defect detection metrics. If a test case is a tautology test, then it is not really efficient or effective, but that is based solely on the result, not the intent or anything else.
Conclusion:
BVT (Build Verification Testing) is nothing but a set of regression test cases that are executed for each new build. It is also called a smoke test. A build is not assigned to the test team unless and until the BVT passes. BVT can be run by a developer or a tester; the BVT result is communicated throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included in BVT, and these test cases should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost, and resources, and spares the test team the frustration of working with an incomplete build.
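A minimal sketch of such an automated BVT script (the check names and their contents are hypothetical): it runs only the critical smoke checks and accepts or rejects the build accordingly:

    import sys

    # Hypothetical critical checks; each returns True when the build looks sane.
    def app_starts() -> bool:
        return True     # e.g. launch the app and confirm the main window opens

    def login_works() -> bool:
        return True     # e.g. log in with a known test account

    def core_workflow_works() -> bool:
        return True     # e.g. create and save one record end to end

    SMOKE_CHECKS = [app_starts, login_works, core_workflow_works]

    def run_bvt() -> None:
        failures = [c.__name__ for c in SMOKE_CHECKS if not c()]
        if failures:
            # A failed BVT means the build is rejected and the team is notified.
            sys.exit(f"BVT FAILED: {', '.join(failures)} - build not accepted")
        print("BVT PASSED: build released to the test team")

    if __name__ == "__main__":
        run_bvt()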
Definitions
A
abstract test case: See high level test case.
acceptance: See acceptance testing.
acceptance criteria: The exit criteria that a component or system must satisfy in
order to be accepted by a user, customer, or other authorized entity. [IEEE 610]
acceptance testing: Formal testing with respect to user needs, requirements, and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system. [After IEEE 610]
accessibility testing: Testing to determine the ease by which users with disabilities
can use a component or system. [Gerrard]
accuracy: The capability of the software product to provide the right or agreed
results or effects with the needed degree of precision. [ISO 9126] See also
functionality testing.
beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the
business processes. Beta testing is often employed as a form of external acceptance
testing for off-the-shelf software in order to acquire feedback from the market.
big-bang testing: A type of integration testing in which software elements,
hardware elements, or both are combined all at once into a component or an overall
system, rather than in stages. [After IEEE 610] See also integration testing.
black-box technique: See black box test design technique.
black-box testing: Testing, either functional or non-functional, without reference to
the internal structure of the component or system.
black-box test design technique: Procedure to derive and/or select test cases
based on an analysis of the specification, either functional or non-functional, of a
component or system without reference to its internal structure.
blocked test case: A test case that cannot be executed because the preconditions
for its execution are not fulfilled.
bottom-up testing: An incremental approach to integration testing where the
lowest level components are tested first, and then used to facilitate the testing of
higher level components. This process is repeated until the component at the top of
the hierarchy is tested. See also integration testing.
boundary value: An input value or output value which is on the edge of an
equivalence partition or at the smallest incremental distance on either side of an
edge, for example the minimum or maximum value of a range.
boundary value analysis: A black box test design technique in which test cases are
designed based on boundary values.
boundary value coverage: The percentage of boundary values that have been
exercised by a test suite.
boundary value testing: See boundary value analysis.
branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.
branch condition: See condition.
branch condition combination coverage: See multiple condition coverage.
branch condition combination testing: See multiple condition testing.
branch condition coverage: See condition coverage.
branch coverage: The percentage of branches that have been exercised by a test
suite. 100% branch coverage implies both 100% decision coverage and 100%
statement coverage.
branch testing: A white box test design technique in which test cases are designed
to execute branches.
bug: See defect.
bug report: See defect report.
business process-based testing: An approach to testing in which test cases are
designed based on descriptions and/or knowledge of business processes.
C
Capability Maturity Model (CMM): A five-level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance. [CMM]
Capability Maturity Model Integration (CMMI): A framework that describes the
key elements of an effective product development and maintenance process. The
Capability Maturity Model Integration covers best-practices for planning, engineering
and managing product development and maintenance. CMMI is the designated
successor of the CMM. [CMMI]
capture/playback tool: A type of test execution tool where inputs are recorded
during manual testing in order to generate automated test scripts that can be
executed later (i.e. replayed). These tools are often used to support automated
regression testing.
capture/replay tool: See capture/playback tool.
CASE: Acronym for Computer Aided Software Engineering.
CAST: Acronym for Computer Aided Software Testing. See also test automation.
cause-effect graph: A graphical representation of inputs and/or stimuli (causes)
with their associated outputs (effects), which can be used to design test cases.
cause-effect graphing: A black box test design technique in which test cases are
designed from cause-effect graphs. [BS 7925/2]
cause-effect analysis: See cause-effect graphing.
cause-effect decision table: See decision table.
certification: The process of confirming that a component, system or person
complies with its specified requirements, e.g. by passing an exam.
data flow testing: A white box test design technique in which test cases are
designed to execute definition and use pairs of variables.
data integrity testing: See database integrity testing.
database integrity testing: Testing the methods and processes used to access and
manage the data(base), to ensure access methods, processes and data rules
function as expected and that during access to the database, data is not corrupted or
unexpectedly deleted, updated or created.
dead code: See unreachable code.
debugger: See debugging tool.
debugging: The process of finding, analyzing and removing the causes of failures in
software.
debugging tool: A tool used by programmers to reproduce failures, investigate the
state of programs and find the corresponding defect. Debuggers enable
programmers to execute programs step by step, to halt a program at any program
statement and to set and examine program variables.
decision: A program point at which the control flow has two or more alternative
routes. A node with two or more links to separate branches.
decision condition coverage: The percentage of all condition outcomes and
decision outcomes that have been exercised by a test suite. 100% decision condition
coverage implies both 100% condition coverage and 100% decision coverage.
decision condition testing: A white box test design technique in which test cases
are designed to execute condition outcomes and decision outcomes.
decision coverage: The percentage of decision outcomes that have been exercised
by a test suite. 100% decision coverage implies both 100% branch coverage and
100% statement coverage.
decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]
decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.
decision outcome: The result of a decision (which therefore determines the
branches to be taken).
defect: A flaw in a component or system that can cause the component or system to
fail to perform its required function, e.g. an incorrect statement or data definition. A
defect, if encountered during execution, may cause a failure of the component or
system.
defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes, or function points).
Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards.
defect management: The process of recognizing, investigating, taking action and
disposing of defects. It involves recording defects, classifying them and identifying
the impact. [After IEEE 1044]
defect management tool: A tool that facilitates the recording and status tracking of defects. Such tools often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects, and provide reporting facilities. See also incident management tool.
defect masking: An occurrence in which one defect prevents the detection of
another. [After IEEE 610]
defect report: A document reporting on any flaw in a component or system that can
cause the component or system to fail to perform its required function. [After IEEE
829]
defect tracking tool: See defect management tool.
definition-use pair: The association of the definition of a variable with the use of
that variable. Variable uses include computational (e.g. multiplication) or to direct
the execution of a path (“predicate” use).
deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.
design-based testing: An approach to testing in which test cases are designed
based on the architecture and/or detailed design of a component or system (e.g.
tests of interfaces between components or systems).
desk checking: Testing of software or specification by manual simulation of its
execution. See also static analysis.
entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. a test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to
the effort needed to remove the failed entry criteria. [Gilb and Graham]
entry point: The first executable statement within a component.
equivalence class: See equivalence partition.
equivalence partition: A portion of an input or output domain for which the
behavior of a component or system is assumed to be the same, based on the
specification.
equivalence partition coverage: The percentage of equivalence partitions that
have been exercised by a test suite.
equivalence partitioning: A black box test design technique in which test cases are
designed to execute representatives from equivalence partitions. In principle test
cases are designed to cover each partition at least once.
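As a small illustration (a sketch, with a made-up ticketing rule), the input domain splits into partitions and one representative per partition suffices:

    # Hypothetical rule: age < 18 -> "child", 18-64 -> "adult", 65+ -> "senior".
    def ticket_category(age: int) -> str:
        if age < 18:
            return "child"
        elif age < 65:
            return "adult"
        return "senior"

    # One representative value per equivalence partition covers each partition once.
    for representative, expected in [(10, "child"), (30, "adult"), (70, "senior")]:
        assert ticket_category(representative) == expected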
error: A human action that produces an incorrect result. [After IEEE 610]
error guessing: A test design technique where the experience of the tester is used
to anticipate what defects might be present in the component or system under test
as a result of errors made, and to design tests specifically to expose them.
error seeding: The process of intentionally adding known defects to those already
in the component or system for the purpose of monitoring the rate of detection and
removal, and estimating the number of remaining defects. [IEEE 610]
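A rough sketch of the estimate usually drawn from seeding, with invented counts;
it assumes seeded defects are found at about the same rate as real ones.

    seeded_total = 10   # known defects deliberately inserted
    seeded_found = 8    # of those, how many testing rediscovered
    real_found = 40     # genuine defects found by the same testing

    # If testing finds seeded and real defects at the same rate:
    estimated_real_total = real_found * seeded_total / seeded_found  # 50.0
    estimated_remaining = estimated_real_total - real_found          # 10.0
    print(estimated_real_total, estimated_remaining)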
error tolerance: The ability of a system or component to continue normal operation
despite the presence of erroneous inputs. [After IEEE 610].
evaluation: See testing.
exception handling: Behavior of a component or system in response to erroneous
input, from either a human user or from another component or system, or to an
internal failure.
executable statement: A statement which, when compiled, is translated into object
code, and which will be executed procedurally when the program is running and may
perform an action on data.
exercised: A program element is said to be exercised by a test case when the input
value causes the execution of that element, such as a statement, decision, or other
structural element.
exhaustive testing: A test approach in which the test suite comprises all
combinations of input values and preconditions.
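A small Python sketch of why exhaustive testing is rarely practical: even three
tiny, hypothetical input fields multiply into nearly a thousand combinations.

    import itertools

    # Hypothetical input domains for three small fields.
    ages = range(0, 121)                  # 121 values
    countries = ["US", "UK", "IN", "DE"]  # 4 values
    newsletter = [True, False]            # 2 values

    all_combinations = itertools.product(ages, countries, newsletter)
    print(sum(1 for _ in all_combinations))  # 968 cases for this toy form alone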
exit criteria: The set of generic and specific conditions, agreed upon with the
stakeholders, for permitting a process to be officially completed. The purpose of exit
criteria is to prevent a task from being considered completed when there are still
outstanding parts of the task which have not been finished. Exit criteria are used to
report against and to plan when to stop testing. [After Gilb and Graham]
exit point: The last executable statement within a component.
expected outcome: See expected result.
expected result: The behavior predicted by the specification, or another source, of
the component or system under specified conditions.
experience-based test design technique: Procedure to derive and/or select test
cases based on the tester’s experience, knowledge and intuition.
exploratory testing: An informal test design technique where the tester actively
controls the design of the tests as those tests are performed and uses information
gained while testing to design new and better tests. [After Bach]
F
fail: A test is deemed to fail if its actual result does not match its expected result.
failure: Deviation of the component or system from its expected delivery, service or
result. [After Fenton]
failure mode: The physical or functional manifestation of a failure. For example, a
system in failure mode may be characterized by slow operation, incorrect outputs, or
complete termination of execution. [IEEE 610]
Failure Mode and Effect Analysis (FMEA): A systematic approach to risk
identification and analysis of identifying possible modes of failure and attempting to
prevent their occurrence.
failure rate: The ratio of the number of failures of a given category to a given unit
of measure, e.g. failures per unit of time, failures per number of transactions,
failures per number of computer runs. [IEEE 610]
fault: See defect.
fault density: See defect density.
Fault Detection Percentage (FDP): See Defect Detection Percentage (DDP).
fault masking: See defect masking.
fault tolerance: The capability of the software product to maintain a specified level
of performance in cases of software faults (defects) or of infringement of its specified
interface. [ISO 9126] See also reliability.
fault tree analysis: A method used to analyze the causes of faults (defects).
feasible path: A path for which a set of input values and preconditions exists which
causes it to be executed.
feature: An attribute of a component or system specified or implied by requirements
documentation (for example reliability, usability or design constraints). [After IEEE
1008]
field testing: See beta testing.
finite state machine: A computational model consisting of a finite number of states
and transitions between those states, possibly with accompanying actions. [IEEE
610]
finite state testing: See state transition testing.
formal review: A review characterized by documented procedures and
requirements, e.g. inspection.
frozen test basis: A test basis document that can only be amended by a formal
change control process. See also baseline.
Function Point Analysis (FPA): Method aiming to measure the size of the
functionality of an information system. The measurement is independent of the
technology. This measurement may be used as a basis for the measurement of
productivity, the estimation of the needed resources, and project control.
functional integration: An integration approach that combines the components or
systems for the purpose of getting a basic functionality working early. See also
integration testing.
functional requirement: A requirement that specifies a function that a component
or system must perform. [IEEE 610]
functional test design technique: Procedure to derive and/or select test cases
based on an analysis of the specification of the functionality of a component or
system without reference to its internal structure. See also black box test design
technique.
functional testing: Testing based on an analysis of the specification of the
functionality of a component or system. See also black box testing.
functionality: The capability of the software product to provide functions which
meet stated and implied needs when the software is used under specified conditions.
[ISO 9126]
functionality testing: The process of testing to determine the functionality of a
software product.
P
path coverage: The percentage of paths that have been exercised by a test suite.
100% path coverage implies 100% LCSAJ coverage.
path sensitizing: Choosing a set of input values to force the execution of a given
path.
path testing: A white box test design technique in which test cases are designed to
execute paths.
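For illustration, a hypothetical component with two independent decisions and
therefore four paths; executing one test per path gives 100% path coverage.

    # A component with two independent decisions has four paths.
    def classify(x, y):
        if x > 0:          # decision 1
            label = "pos"
        else:
            label = "nonpos"
        if y % 2 == 0:     # decision 2
            parity = "even"
        else:
            parity = "odd"
        return f"{label}-{parity}"

    # One test case per path.
    PATHS = [
        ((1, 2),  "pos-even"),
        ((1, 3),  "pos-odd"),
        ((-1, 2), "nonpos-even"),
        ((-1, 3), "nonpos-odd"),
    ]
    for (x, y), expected in PATHS:
        assert classify(x, y) == expected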
peer review: A review of a software work product by colleagues of the producer of
the product for the purpose of identifying defects and improvements. Examples are
inspection, technical review and walkthrough.
performance: The degree to which a system or component accomplishes its
designated functions within given constraints regarding processing time and
throughput rate. [After IEEE 610] See also efficiency.
performance indicator: A high level metric of effectiveness and/or efficiency used
to guide and control progressive development, e.g. lead-time slip for software
development. [CMMI]
performance testing: The process of testing to determine the performance of a
software product. See also efficiency testing.
performance testing tool: A tool to support performance testing and that usually
has two main facilities: load generation and test transaction measurement. Load
generation can simulate either multiple users or high volumes of input data. During
execution, response time measurements are taken from selected transactions and
these are logged. Performance testing tools normally provide reports based on test
logs and graphs of load against response times.
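A toy Python sketch of those two facilities, load generation and response-time
measurement; transaction() merely stands in for whatever operation a real tool
would drive, and the figures it produces are meaningless outside this example.

    import statistics
    import time

    def transaction():
        """Stand-in for the operation under load, e.g. one HTTP request."""
        time.sleep(0.01)  # simulate ~10 ms of server work

    # Load generation: fire a fixed number of transactions and log
    # each response time.
    response_times = []
    for _ in range(100):
        start = time.perf_counter()
        transaction()
        response_times.append(time.perf_counter() - start)

    # Reporting: summarize the logged measurements.
    print(f"mean:   {statistics.mean(response_times) * 1000:.1f} ms")
    print(f"median: {statistics.median(response_times) * 1000:.1f} ms")
    print(f"max:    {max(response_times) * 1000:.1f} ms")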
phase test plan: A test plan that typically addresses one test phase. See also test
plan.
portability: The ease with which the software product can be transferred from one
hardware or software environment to another. [ISO 9126]
portability testing: The process of testing to determine the portability of a software
product.
postcondition: Environmental and state conditions that must be fulfilled after the
execution of a test or test procedure.
post-execution comparison: Comparison of actual and expected results,
performed after the software has finished running.
precondition: Environmental and state conditions that must be fulfilled before the
component or system can be executed with a particular test or test procedure.
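In an automated test the two notions map naturally onto setup and final assertions,
as in this sketch (the Account class and its figures are hypothetical):

    class Account:
        def __init__(self, balance):
            self.balance = balance
        def withdraw(self, amount):
            self.balance -= amount

    def test_withdraw():
        # Precondition: the account exists and holds enough money.
        account = Account(balance=100)
        assert account.balance >= 30

        account.withdraw(30)

        # Postcondition: the balance reflects the withdrawal.
        assert account.balance == 70

    test_withdraw()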
R
review tool: A tool that provides support to the review process. Typical features
include review planning and tracking support, communication support, collaborative
reviews and a repository for collecting and reporting of metrics.
risk: A factor that could result in future negative consequences; usually expressed
as impact and likelihood.
risk analysis: The process of assessing identified risks to estimate their impact and
probability of occurrence (likelihood).
risk-based testing: Testing oriented towards exploring and providing information
about product risks. [After Gerrard]
risk control: The process through which decisions are reached and protective
measures are implemented for reducing risks to, or maintaining risks within,
specified levels.
risk identification: The process of identifying risks using techniques such as
brainstorming, checklists and failure history.
risk management: Systematic application of procedures and practices to the tasks
of identifying, analyzing, prioritizing, and controlling risk.
risk mitigation: See risk control.
robustness: The degree to which a component or system can function correctly in
the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See
also error-tolerance, fault-tolerance.
robustness testing: Testing to determine the robustness of the software product.
root cause: An underlying factor that caused a non-conformance and possibly
should be permanently eliminated through process improvement.
S
safety: The capability of the software product to achieve acceptable levels of risk of
harm to people, business, software, property or the environment in a specified
context of use. [ISO 9126]
safety testing: Testing to determine the safety of a software product.
sanity test: See smoke test.
scalability: The capability of the software product to be upgraded to accommodate
increased loads. [After Gerrard]
scalability testing: Testing to determine the scalability of the software product.
scenario testing: See use case testing.
scribe: The person who records each defect mentioned and any suggestions for
process improvement during a review meeting, on a logging form. The scribe has to
ensure that the logging form is readable and understandable.
scripting language: A programming language in which executable test scripts are
written, used by a test execution tool (e.g. a capture/playback tool).
security: Attributes of a software product that bear on its ability to prevent
unauthorized access, whether accidental or deliberate, to programs and data. [ISO
9126] See also functionality.
security testing: Testing to determine the security of the software product. See
also functionality testing.
security testing tool: A tool that provides support for testing security
characteristics and vulnerabilities.
security tool: A tool that supports operational security.
serviceability testing: See maintainability testing.
severity: The degree of impact that a defect has on the development or operation of
a component or system. [After IEEE 610]
simulation: The representation of selected behavioral characteristics of one physical
or abstract system by another system. [ISO 2382/1]
simulator: A device, computer program or system used during testing, which
behaves or operates like a given system when provided with a set of controlled
inputs. [After IEEE 610, DO178b] See also emulator.
site acceptance testing: Acceptance testing by users/customers at their site, to
determine whether or not a component or system satisfies the user/customer needs
and fits within the business processes, normally including hardware as well as
software.
smoke test: A subset of all defined/planned test cases that cover the main
functionality of a component or system, to ascertain that the most crucial
functions of a program work, but not bothering with finer details. A daily build and
smoke test is among industry best practices. See also intake test.
software: Computer programs, procedures, and possibly associated documentation
and data pertaining to the operation of a computer system. [IEEE 610]
software feature: See feature.
software quality: The totality of functionality and features of a software product
that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
software quality characteristic: See quality attribute.
static code analysis: Analysis of source code carried out without execution of that
software.
static code analyzer: A tool that carries out static code analysis. The tool checks
source code for certain properties such as conformance to coding standards, quality
metrics or data flow anomalies.
static testing: Testing of a component or system at specification or implementation
level without execution of that software, e.g. reviews or static code analysis.
statistical testing: A test design technique in which a model of the statistical
distribution of the input is used to construct representative test cases. See also
operational profile testing.
status accounting: An element of configuration management, consisting of the
recording and reporting of information needed to manage a configuration effectively.
This information includes a listing of the approved configuration identification, the
status of proposed changes to the configuration, and the implementation status of
the approved changes. [IEEE 610]
storage: See resource utilization.
storage testing: See resource utilization testing.
stress testing: Testing conducted to evaluate a system or component at or beyond
the limits of its specified requirements. [IEEE 610] See also load testing.
structure-based techniques: See white box test design technique.
structural coverage: Coverage measures based on the internal structure of a
component or system.
structural test design technique: See white box test design technique.
structural testing: See white box testing.
structured walkthrough: See walkthrough.
stub: A skeletal or special-purpose implementation of a software component, used
to develop or test a component that calls or is otherwise dependent on it. It replaces
a called component. [After IEEE 610]
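For illustration, a minimal Python stub: the hypothetical place_order component
depends on a payment gateway, and the stub replaces the real gateway with a canned
answer so the component can be tested before the gateway exists.

    class PaymentGatewayStub:
        """Skeletal replacement for the real (called) gateway component."""
        def charge(self, amount):
            return "approved"  # canned response; no network, no real charge

    def place_order(gateway, amount):
        """Component under test; it calls the gateway."""
        result = gateway.charge(amount)
        return "order confirmed" if result == "approved" else "order rejected"

    assert place_order(PaymentGatewayStub(), 25) == "order confirmed"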
subpath: A sequence of executable statements within a component.
suitability: The capability of the software product to provide an appropriate set of
functions for specified tasks and user objectives. [ISO 9126] See also functionality.
suspension criteria: The criteria used to (temporarily) stop all or a portion of the
testing activities on the test items. [After IEEE 829]
syntax testing: A black box test design technique in which test cases are designed
based upon the definition of the input domain and/or output domain.
T
test closure: During the test closure phase of a test process, data is collected from
completed activities to consolidate experience, testware, facts and numbers. The test
closure phase consists of finalizing and archiving the testware and evaluating the test
process, including preparation of a test evaluation report. See also test process.
test comparator: A test tool to perform automated test comparison.
test comparison: The process of identifying differences between the actual results
produced by the component or system under test and the expected results for a test.
Test comparison can be performed during test execution (dynamic comparison) or
after test execution.
test completion criteria: See exit criteria.
test condition: An item or event of a component or system that could be verified by
one or more test cases, e.g. a function, transaction, feature, quality attribute, or
structural element.
test control: A test management task that deals with developing and applying a set
of corrective actions to get a test project on track when monitoring shows a
deviation from what was planned. See also test management.
test coverage: See coverage.
test cycle: Execution of the test process against a single identifiable release of the
test object.
test data: Data that exists (for example, in a database) before a test is executed,
and that affects or is affected by the component or system under test.
test data preparation tool: A type of test tool that enables data to be selected
from existing databases or created, generated, manipulated and edited for use in
testing.
test design: See test design specification.
test design specification: A document specifying the test conditions (coverage
items) for a test item, the detailed test approach and identifying the associated high
level test cases. [After IEEE 829]
test design technique: Procedure used to derive and/or select test cases.
test design tool: A tool that supports the test design activity by generating test
inputs from a specification that may be held in a CASE tool repository, e.g.
requirements management tool, from specified test conditions held in the tool itself,
or from code.
test driver: See driver.
test driven development: A way of developing software where the test cases are
developed, and often automated, before the software is developed to run those test
cases.
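The rhythm in miniature, using a hypothetical slugify function: the test is written
first and fails, then just enough code is written to make it pass.

    # Step 1: write the test before the implementation exists.
    def test_slugify():
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Testing  Concepts ") == "testing-concepts"

    # Running the test now fails (slugify is undefined) -- "red".

    # Step 2: write just enough code to make the test pass -- "green".
    def slugify(text):
        return "-".join(text.lower().split())

    test_slugify()  # passes; refactor next, keeping the test green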
test environment: An environment containing hardware, instrumentation,
simulators, software tools, and other support elements needed to conduct a test.
[After IEEE 610]
test evaluation report: A document produced at the end of the test process
summarizing all testing activities and results. It also contains an evaluation of the
test process and lessons learned.
test execution: The process of running a test on the component or system under
test, producing actual result(s).
test execution automation: The use of software, e.g. capture/playback tools, to
control the execution of tests, the comparison of actual results to expected results,
the setting up of test preconditions, and other test control and reporting functions.
test execution phase: The period of time in a software development life cycle
during which the components of a software product are executed, and the software
product is evaluated to determine whether or not requirements have been satisfied.
[IEEE 610]
test execution schedule: A scheme for the execution of test procedures. The test
procedures are included in the test execution schedule in their context and in the
order in which they are to be executed.
test execution technique: The method used to perform the actual test execution,
either manually or automated.
test execution tool: A type of test tool that is able to execute other software using
an automated test script, e.g. capture/playback. [Fewster and Graham]
test fail: See fail.
test generator: See test data preparation tool.
test harness: A test environment comprised of stubs and drivers needed to execute
a test.
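A toy harness in Python wiring together the two pieces the definition names: a
driver that invokes the component under test and a stub standing in for the
component it calls (all names hypothetical).

    def component_under_test(lookup):
        """The item being tested; it depends on a called component."""
        return lookup("answer") * 2

    def stub_lookup(key):
        """Stub: replaces the real data source with a canned value."""
        return 21

    def driver():
        """Driver: invokes the component and checks the result."""
        assert component_under_test(stub_lookup) == 42
        print("pass")

    driver()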
test incident: See incident.
test incident report: See incident report.
test infrastructure: The organizational artifacts needed to perform testing,
consisting of test environments, test tools, office environment and procedures.
test input: The data received from an external source by the test object during test
execution. The external source can be hardware, software or human.
test item: The individual element to be tested. There usually is one test object and
many test items. See also test object.
test item transmittal report: See release note.
test leader: See test manager.
test level: A group of test activities that are organized and managed together. A
test level is linked to the responsibilities in a project. Examples of test levels
are component test,
integration test, system test and acceptance test. [After TMap]
test log: A chronological record of relevant details about the execution of tests.
[IEEE 829]
test logging: The process of recording information about tests executed into a test
log.
test manager: The person responsible for project management of testing activities
and resources, and evaluation of a test object. The individual who directs, controls,
administers, plans and regulates the evaluation of a test object.
test management: The planning, estimating, monitoring and control of test
activities, typically carried out by a test manager.
test management tool: A tool that provides support to the test management and
control part of a test process. It often has several capabilities, such as testware
management, scheduling of tests, the logging of results, progress tracking, incident
management and test reporting.
Test Maturity Model (TMM): A five level staged framework for test process
improvement, related to the Capability Maturity Model (CMM) that describes the key
elements of an effective test process.
test monitoring: A test management task that deals with the activities related to
periodically checking the status of a test project. Reports are prepared that compare
the actuals to that which was planned. See also test management.
test object: The component or system to be tested. See also test item.
test objective: A reason or purpose for designing and executing a test.
test oracle: A source to determine expected results to compare with the actual
result of the software under test. An oracle may be the existing system (for a
benchmark), a user manual, or an individual's specialized knowledge, but should not
be the code. [After Adrion]
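One common shape of an oracle in code, sketched with hypothetical functions: a slow
but trusted reference implementation (playing the role of the existing system)
predicts what a new, optimized version should return.

    def reference_sum_of_squares(n):
        """Trusted but slow oracle, e.g. the existing system's behavior."""
        return sum(i * i for i in range(1, n + 1))

    def fast_sum_of_squares(n):
        """New implementation under test (closed-form formula)."""
        return n * (n + 1) * (2 * n + 1) // 6

    # The oracle supplies the expected result for each test input.
    for n in range(0, 50):
        assert fast_sum_of_squares(n) == reference_sum_of_squares(n)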
U
usability: The capability of the software product to be understood, easy to learn,
easy to operate and attractive to the users under specified conditions. [After ISO
9126]
use case: A sequence of transactions in a dialogue between a user and the system
with a tangible result.
use case testing: A black box test design technique in which test cases are
designed to execute user scenarios.
user acceptance testing: See acceptance testing.
user scenario testing: See use case testing.
user test: A test whereby real-life users are involved to evaluate the usability of a
component or system.
V
V-model: A framework to describe the software development life cycle activities
from requirements specification to maintenance. The V-model illustrates how testing
activities can be integrated into each phase of the software development life cycle.
validation: Confirmation by examination and through provision of objective
evidence that the requirements for a specific intended use or application have been
fulfilled. [ISO 9000]
variable: An element of storage in a computer that is accessible by a software
program by referring to it by a name.
verification: Confirmation by examination and through provision of objective
evidence that specified requirements have been fulfilled. [ISO 9000]
vertical traceability: The tracing of requirements through the layers of
development documentation to components.
version control: See configuration control.
volume testing: Testing where the system is subjected to large volumes of data.
See also resource-utilization testing.
W
walkthrough: A step-by-step presentation by the author of a document in order to
gather information and to establish a common understanding of its content.
[Freedman and Weinberg, IEEE 1028] See also peer review.
white-box test design technique: Procedure to derive and/or select test cases
based on an analysis of the internal structure of a component or system.
white-box testing: Testing based on an analysis of the internal structure of the
component or system.
Wide Band Delphi: An expert based test estimation technique that aims at making
an accurate estimation using the collective wisdom of the team members.