ICT Quality Assurance Guide

Table of Contents

1 Introduction
2 Objectives of Testing
3 Testing Methodologies
3.1 Agile Methodology
3.2 Waterfall Methodology
4 Testing Types
4.1 Functional Testing
4.2 Non-Functional Testing
4.3 White Box Testing
5 Test Design Techniques
5.1 Black Box Testing
5.1.1 Specification-based Techniques
5.1.2 Experience-based Techniques
6 Process of Testing to be Followed by Companies
6.1 Training in the System and Gaining In-Depth Knowledge
6.2 Read Through the Specs or Requirements of the Project
6.3 Design Test Cases from Test Scenarios and Capture Them in a Test Plan Template
6.4 Incident Management
7 Developing or Updating the System Training Manual
8 Tester-Developer Relationship
9 Improving Communication Between the Test Team and Others

1 Introduction
Welcome to the Yarra Trams ICT Quality Assurance Guide.

The process of exercising software to verify that it satisfies specified requirements and to detect
errors is called testing.

There are many reasons why a software defect may occur. Some of them are:

1. Human beings are prone to making errors.
2. Time pressure to meet a project deadline.
3. Complex code.
4. Complexity of infrastructure.
5. Interfacing with many systems.

The success of a software project is determined by its quality. Quality is defined as the degree to which a system meets or even exceeds the specified requirements and satisfies the needs and expectations of the user/customer.

Quality is characterised by:

1. Functionality: whether the system performs the intended, required functions according to the customer's needs and expectations.
2. Reliability: whether the system can perform the intended, required functions for a specified period of time or a specified number of operations.
3. Usability: whether the system is easy to use and operate, e.g. easy navigation between screens, online and field help, etc.
4. Maintainability: whether the system can easily be modified to correct defects, to introduce new enhancements or features, or to upgrade to a newer version.
5. Portability: whether the system or software can easily be transferred from one hardware or software environment to another.

2 Objectives of Testing
The main objectives of testing are:

1. Finding defects: developers, while introducing or enhancing certain features, may end up implementing them incorrectly or breaking other existing functions of the system. Such faults are called defects.
2. Preventing defects: defects can be prevented at the initial specification stage itself, by conducting document reviews and planning the tests during the specs phase. In this way, a defect in the spec that could lead to wrong functioning of the system is prevented from creeping into the code. This is possible with the agile methodology.
3. Gaining confidence: thorough testing enables you to identify more defects in the system (defects relating to the enhancement as well as to the existing, inter-related functionality), which, once successfully fixed, supports delivery of a quality product/system to the customer. This helps the customer gain confidence that the system has met the requirements and is working as expected.

3 Testing Methodologies
A software testing methodology is the set of strategies and testing types used to confirm that the application under test (AUT) meets the client's expectations.

3.1 Agile Methodology

Under this methodology, testing begins with the delivery of the specification/requirements rather than with the delivery of the code. As parts of the specification are delivered, the test team begins designing tests based on those parts while the rest of the specification is still in progress. The specification is prone to change: defects in it may be identified while the test cases are being designed, the developers may identify potential defects in the system while implementing parts of the desired functionality, or the test team may identify critical areas that were not considered when the specification was designed and that need further investigation. The test cases then have to be modified to accommodate those changes. Defects are easier and cheaper to fix because they are identified at an early stage, and development quality improves because additional test scenarios not initially considered are identified and added to the spec.

The test cases designed are executed once the system is delivered for testing, and the tester may come up with additional test cases once a working system is in hand. Bugs/defects identified at this stage are raised in an issue register and sent to the developer for fixing/resolving. Once a bug is fixed by the developer, it is retested and full regression testing is carried out to ensure no other bugs have been introduced as a result of the fix. Fewer defects are likely to be identified at this stage, since the majority were already found early on.

3.2 Waterfall Methodology

Under this methodology, testing begins with the delivery of the code, when you have a working system in hand. The tester may well come up with test cases for inter-related areas that were not considered when the specs were designed and, as a result, when the solution was coded. Bugs/defects are identified at this stage, raised in an issues register, and sent to the developer for fixing/resolving. Fixing defects at this stage is more time-consuming and costly. Once a bug is fixed by the developer, it is retested and full regression testing is carried out to ensure no other bugs have been introduced as a result of the fix.

4 Testing Types
The different types of testing are:

4.1 Functional Testing

Testing conducted on the system to verify that it performs the required, intended functions. This type of testing uses black box testing techniques and includes regression testing: retesting the system to check that bugs have been fixed, as well as testing other inter-related areas that might have been affected by the fix.

4.2 Non-Functional Testing

Testing conducted on the system to verify its non-functional requirements/characteristics, i.e. the way the system operates rather than its specific behaviours. This type of testing uses non-functional techniques such as performance, load, and stress testing. Performance testing evaluates the compliance of a system or component with specified performance requirements. Load testing evaluates whether a system can handle large quantities of data simultaneously. Stress testing evaluates a system or component at or beyond the limits of its specified requirements.
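
As an illustration, a minimal performance check can assert a response-time budget. This is only a sketch: the process_order function and the 200 ms budget below are hypothetical placeholders, not part of any real system.

```python
import time

def process_order(order_id):
    # Hypothetical operation under test; stands in for the real system call.
    time.sleep(0.05)  # simulate work
    return "OK"

def test_process_order_meets_performance_budget():
    # Performance test: check compliance with a specified requirement,
    # here an assumed budget of 200 ms per order.
    start = time.perf_counter()
    result = process_order(42)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert result == "OK"
    assert elapsed_ms < 200, f"took {elapsed_ms:.0f} ms, budget is 200 ms"
```

Run repeatedly, or against many concurrent calls, the same assertion becomes the basis of a simple load or stress test.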

4.3 White Box Testing

Testing conducted on the internal structures of the system, relating to the coding of the application rather than its functionality. This type of testing uses test design techniques such as control flow testing, data flow testing, branch testing, statement coverage testing, and decision coverage testing. It is carried out by developers rather than testers and will therefore not be covered in this guide.

5 Test Design Techniques

There are different methods of testing a software system, and different techniques are employed for each of them.

5.1 Black Box Testing

Test cases are created from the documented functional or non-functional aspects/requirements of the system, without reference to its internal structure. In addition, the tester's knowledge of and experience with the AUT plays an important role in enabling this. The different black box testing techniques are as follows:

5.1.1 Specification-based Techniques

Specifications are created for the software project covering enhancements to an existing feature, modification of existing features, or the addition of new features. Test scenarios are created from these specifications, test cases are derived from those scenarios, and testing is carried out against the test cases.

5.1.1.1 Equivalence Partitioning

Equivalence partitioning is based on the premise that testing one value within a class gives a result representative of all values within that class. A class consists of a range of values (e.g. 2-20 or A-C); any input within the range produces one result, and any input outside it produces another.
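
A minimal sketch in Python, assuming a hypothetical validation rule with a valid range of 2-20 (the accept_age function is invented for illustration). One representative value per class covers the whole class:

```python
def accept_age(age):
    # Hypothetical rule under test: the valid range is 2-20 inclusive.
    return 2 <= age <= 20

# One representative value per equivalence class:
def test_below_valid_range():
    assert accept_age(1) is False   # represents all values below 2

def test_within_valid_range():
    assert accept_age(10) is True   # represents all values from 2 to 20

def test_above_valid_range():
    assert accept_age(25) is False  # represents all values above 20
```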

5.1.1.2 Boundary Value Analysis

In boundary value analysis, test cases are generated from the extremes of the input domain, e.g. the minimum, the maximum, and values just inside/outside the boundaries. It includes negative testing: entering a value below the minimum or above the maximum should produce an error, since such values are invalid. It is similar to equivalence partitioning, except that it also tests values outside the range of the class, i.e. above the maximum and below the minimum.
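
Continuing the hypothetical 2-20 range from the sketch above, boundary value analysis probes each edge of the class (pytest's parametrize is used here to keep the cases in one table):

```python
import pytest

def accept_age(age):
    # Same hypothetical rule: the valid range is 2-20 inclusive.
    return 2 <= age <= 20

# Each boundary is tested at the edge and just inside/outside it.
@pytest.mark.parametrize("age,expected", [
    (1, False),   # just below the minimum
    (2, True),    # the minimum itself
    (3, True),    # just above the minimum
    (19, True),   # just below the maximum
    (20, True),   # the maximum itself
    (21, False),  # just above the maximum
])
def test_age_boundaries(age, expected):
    assert accept_age(age) is expected
```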

5.1.1.3 Decision Table Testing

In this technique, the tester creates a decision table comprising combinations of conditions (the inputs) and the actions or outcomes each combination triggers (the outputs). Selecting a particular combination of conditions causes a specific defined outcome, so different permutations of input can be exercised and checked against their specified outputs. Test cases are derived from these permutations.
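
A minimal sketch, using a hypothetical discount rule with two conditions (membership and order size) and one outcome (the discount rate); the rule and figures are invented for illustration:

```python
import pytest

def discount_rate(is_member, large_order):
    # Hypothetical business rule described by the decision table below.
    if is_member and large_order:
        return 0.20
    if is_member:
        return 0.10
    if large_order:
        return 0.05
    return 0.0

# Decision table: one row per permutation of the two conditions.
@pytest.mark.parametrize("is_member,large_order,expected", [
    (True,  True,  0.20),
    (True,  False, 0.10),
    (False, True,  0.05),
    (False, False, 0.0),
])
def test_discount_decision_table(is_member, large_order, expected):
    assert discount_rate(is_member, large_order) == expected
```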

5.1.1.4 Use Case Testing

In this technique, the tester develops a use case comprising actors (users, or other modules of the system) who use the system to accomplish a specific goal or outcome. The sequences of actions the actor performs on the system to accomplish the goal are called scenarios. These scenarios are then used as test scenarios from which test cases are derived.
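
A sketch of a scenario-level test for a hypothetical "withdraw cash" use case; the ATM class and its methods are invented purely to illustrate testing a scenario step by step:

```python
class ATM:
    # Hypothetical system under test for the "withdraw cash" use case.
    def __init__(self, balance):
        self.balance = balance
        self.authenticated = False

    def insert_card_and_enter_pin(self, pin):
        self.authenticated = (pin == "1234")
        return self.authenticated

    def withdraw(self, amount):
        if not self.authenticated or amount > self.balance:
            return False
        self.balance -= amount
        return True

def test_withdraw_cash_main_scenario():
    # The main success scenario of the use case, one step at a time.
    atm = ATM(balance=100)
    assert atm.insert_card_and_enter_pin("1234")  # step 1: authenticate
    assert atm.withdraw(40)                       # step 2: request cash
    assert atm.balance == 60                      # step 3: balance updated
```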

5.1.2 Experience-based Techniques

Testers ought to be subject-matter experts in the AUT and should put themselves in the user's shoes when testing the system for user acceptance. The tester's knowledge of and experience with the AUT helps them better understand the enhancements or new features being introduced, and to anticipate which other inter-related areas/modules may be affected by the changes. Test scenarios are derived from this knowledge, and test cases are created from those scenarios.

5.1.2.1 Error Guessing

This is a form of negative testing in which the tester uses in-depth knowledge of and experience with the system to anticipate which areas might not work well, even where this is not documented in the specification, so that bugs may be discovered. The tester may make a list of possible errors and design test scenarios to check whether those errors occur. This technique is only valid and useful where the tester has thorough knowledge of the system, which is not always the case.
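
A minimal sketch: the tester lists guessed trouble spots for a hypothetical input handler (empty input, whitespace only, wrong type) and checks each guess; the normalise_name function is invented for illustration:

```python
import pytest

def normalise_name(name):
    # Hypothetical input handler under test.
    if not isinstance(name, str) or not name.strip():
        raise ValueError("name must be a non-empty string")
    return name.strip().title()

# Guessed error conditions, each captured as a test case.
@pytest.mark.parametrize("bad_input", ["", "   ", None, 42])
def test_guessed_errors_are_rejected(bad_input):
    with pytest.raises(ValueError):
        normalise_name(bad_input)
```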

5.1.2.2 Exploratory Testing

This form of testing is carried out where the tester does not have much knowledge of the system, or where the specifications for the changes made to the system are inadequate. The tester runs a few tests here and there to check the functioning of the system, designs test cases from what is learned, and uses those test cases to design further sets of test cases. The process serves as an exercise in learning the product while designing test cases and getting rapid feedback from the stakeholders.

Note: grey box testing is a combination of black box and white box testing, and will therefore not be covered in this guide either.

6 Process of Testing to be Followed by Companies

At my workplace, a government company, most projects have followed the waterfall methodology and are now slowly moving towards the agile methodology, where test case design and code development occur simultaneously.

6.1 Training in the System and Gaining In-Depth Knowledge

It is essential that a test analyst who is newly introduced to a system gains a thorough functional working knowledge of it. This knowledge helps the analyst think analytically and outside the box, enabling them to identify scenarios that have not been covered or considered in the specs, and therefore to write thorough test cases and carry out in-depth testing.

Organisations maintain training manuals in their databases. Some have in-depth training manuals; others have little documented material, or material that covers only the end result of the system without any reference to its fundamentals, i.e. how the system has been configured with data. Where in-depth training manuals exist, the tester can learn the fundamentals of the system from the manual and then gain in-depth knowledge by getting their hands dirty with the system itself.

Very few organisations maintain thorough training manuals. Where there is not much documentation on the fundamentals, the tester has to explore the system to understand the significance and set-up of every field on the different modules/forms. They can also draw on the experience of the end users and learn from them how they do their work. The essence is to understand the fundamentals, and thereby the system's data configuration. Understanding the fundamentals gives the tester thorough, in-depth knowledge of all the modules and of how the data in them is inter-linked and hangs together. This, in turn, makes it easy to understand projects involving system enhancements, new features, and upgrades.

A person who is newly introduced to a system should invest at least three weeks in understanding it in the above manner, depending on its complexity. For very complex systems such as Maximo, it may take four weeks.

6.2 Read Through the Specs or Requirements of the Project

Once the tester has been introduced to the system and gained up-to-the-mark system knowledge, they are in a good position to read the requirement specs and understand thoroughly what the project is trying to achieve. They will not only understand the requirements thoroughly, but also be able to point out uncovered requirements and other inter-related areas that may not have been considered when the specs were written. In this way they can develop in-depth test scenarios to test the project thoroughly.

The tester should be in a position to question the requirements and suggest a better solution. Drawing on their system knowledge, they should be able to think through how a solution could be implemented and suggest it to the project team.

6.3 Design Test Cases from Test Scenarios and Capture Them in a Test Plan Template

The test scenarios developed are converted into test cases. Each test case records the expected result and the actual result so that any deviation between the two can be checked; such deviations are known as bugs. The test cases are captured and documented in a test plan template, which has the following sections (a minimal sketch of a test case record follows the list):

a. Scope of the Test for the Project: defines the scope of the project, which areas of the system will be tested and covered in the test cases, and which areas have not undergone change and will be covered by regression tests.
b. Task Timeline: outlines the amount of time to be taken for writing the test cases and executing them.
c. Expected Outcome: defines the final expected outcome from the system once the project is completed.
d. Resources Required to Achieve Thorough Testing of the Application: defines who, apart from the tester, will be required to assist with queries, and to whom raised bugs will be forwarded for fixing.
e. Recommendations: gives a final overview of the test results; the tester also recommends whether or not the product is fit to be released into production.
f. Test Objective: defines the objective of the test plan document, which aligns with the objective of the project.
g. Testers: defines who will be executing the test cases.
h. Test Scenarios and Dependencies: outlines all the test scenarios for which test cases will be made, the dependencies for each test case, and the estimated hours for executing it.
i. Test Scripts: defines the test cases for each of the test scenarios outlined in the section above. It records the expected result, i.e. the result the tester expects after the scenario in the test case is executed, and the actual result, i.e. the result the tester obtains on executing the test case. If the actual result does not match the expected result, the tester identifies this as an issue and raises a bug for it.
j. Test Results and Issue Register: outlines the test result, Pass or Fail, against each test scenario, and the Issue ID for each failed scenario.
k. Summary Reports: defines the Test Summary Report table, which outlines the number of test scenarios executed and how many passed and failed, and the Issue Summary Report table, which outlines the number of issues reported and their priority (High/Medium/Low).
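
As a minimal sketch (the field names are illustrative, not a mandated format), a single entry from the Test Scripts section could be recorded as follows; the result method derives the Pass/Fail value for the Test Results section:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseRecord:
    # One row of the Test Scripts section of the test plan template.
    scenario_id: str
    description: str
    steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    estimated_hours: float = 0.0

    def result(self):
        # A deviation between expected and actual results is a bug,
        # so the scenario is marked as Fail.
        return "Pass" if self.actual_result == self.expected_result else "Fail"

# Example entry (values invented):
tc = TestCaseRecord(
    scenario_id="TS-01",
    description="Withdraw cash with sufficient balance",
    steps=["authenticate", "request $40", "check balance"],
    expected_result="balance reduced by $40",
    actual_result="balance reduced by $40",
    estimated_hours=0.5,
)
print(tc.result())  # Pass
```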

6.4 Incident Management

An incident is any event occurring during testing that requires investigation. When you execute a test case and the actual result does not match the expected result, it is identified as an issue and a bug is raised; these are known as programming defects. If, for example, a test case covers a scenario that was not considered when the specs were designed and, after further investigation, is identified as beyond the scope of the project, the finding is documented in the incident or issues list with a status of Deferred, i.e. to be fixed in a later version; such incidents are design defects. In projects following the agile methodology, these design incidents are raised during review, where there is a higher probability of such defects being fixed. Incidents are also raised during the use of the live software product; these are known as post-implementation incidents and are raised by the support team.

Raising incidents and bugs provides feedback to the developer or the business analyst about problems with their code or with the design of the requirements document, and acts as a heads-up to consider similar scenarios when writing future documents.

Finding bugs and getting them fixed supports delivery of a quality product at the end of the day, and thorough testing keeps the bugs remaining in the system to a very minimal level.

The issues are logged in an Issues List Register, which consists of the following fields (a minimal sketch of one record follows the list):

a. Issue No.: the unique number assigned to the issue.
b. Description: a description of the issue encountered and being raised.
c. Expected Behaviour: a description of the behaviour expected when the scenario identified in the description is executed.
d. Actual Behaviour: a description of the actual behaviour displayed when the scenario identified in the description is executed.
e. Steps to Reproduce: the steps followed in the above scenario to reproduce the problem.
f. Reported By: the name of the tester who reported the issue.
g. Assigned To: the name of the person/stakeholder/developer to whom the issue has been assigned.
h. Report Date: the date on which the issue was reported/raised.
i. Priority: the priority assigned to the bug, either High, Medium, or Low, depending on the impact of the bug.
j. Impact: the impact of the bug.
k. Comments: any additional comments on the bug, such as cases in which the issue does not appear.
l. Status: the current status of the bug, depending on whether a resolution has been made. Possible values are Resolved, Not Resolved, and Deferred.
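
A minimal sketch of one register entry; the field names mirror the list above and the values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    # One entry in the Issues List Register.
    issue_no: int
    description: str
    expected_behaviour: str
    actual_behaviour: str
    steps_to_reproduce: str
    reported_by: str
    assigned_to: str
    report_date: str
    priority: str                 # High / Medium / Low
    impact: str
    comments: str = ""
    status: str = "Not Resolved"  # Resolved / Not Resolved / Deferred

issue = Issue(
    issue_no=101,
    description="Withdrawal allowed beyond available balance",
    expected_behaviour="Withdrawal above the balance is rejected",
    actual_behaviour="Withdrawal above the balance succeeds",
    steps_to_reproduce="Authenticate; request an amount greater than the balance",
    reported_by="Test Analyst",
    assigned_to="Developer",
    report_date="2015-06-01",
    priority="High",
    impact="Customer accounts can go negative",
)
```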

Once the incidents/issues are resolved by the developer, they are sent back to the tester for retesting. The tester retests the scenarios in which the bugs occurred and also performs regression testing of the system, including the modules inter-related with the bugs, to check that no new bugs have been introduced as a result of the fix.

7 Developing or Updating the System Training Manual

This is not the usual practice at present, but I highly recommend that companies implement such a step once all the testing is done and dusted. To enable this, the tester will have to be able to translate the test cases into a new or updated operating or training manual, and should possess the technical writing skills to do so.

8 Tester-Developer Relationship

The tester-developer relationship is not an easy one and depends a lot on attitude. Especially where one of them is an independent contractor, the developer may at times take offence at a bug raised by the tester; likewise, the tester may take offence if the developer points out that a raised issue is in fact a flaw in the tester's logic.

Each team or team member involved in the SDLC has their own characteristics, coupled with a different knowledge and skill set. It is crucial that all teams work harmoniously with each other in order to deliver a successful project and a successful SDLC process.

Characteristics of a tester are:

1. Curiosity and enthusiasm for learning the product and testing it for quality delivery.
2. A pessimistic attitude towards the system/product, on the principle that no system is ever perfect or can be developed without issues waiting to be identified.
3. Attention to detail, in order to uncover the hidden parts of the proposed new functionality in the requirements specs.
4. The ability to think outside the box, so that scenarios not specified in the requirements specs are taken care of.
5. An understanding of the critical areas of the system where bugs might occur, i.e. error guessing.
6. The ability to build rapport with the developers and other stakeholders, so that issues or concerns can be conveyed to them in a subtle manner.

Characteristics of a developer are:

1. Specialised knowledge of the product, since they are the ones developing it.
2. Properly trained and qualified.
3. Often take offence when bugs are identified by the testers, and tend to avoid feedback.
4. Confident about the product/system they have developed, and do not like anyone testing their work.

Testers need to take these developer mindsets into account and build a harmonious relationship that benefits both sides, and hence the SDLC as a whole.

9 Improving Communication Between the Test Team and Others

Once a tester has built a good rapport with the developer and the analysts, bugs can be conveyed to the developer in a more subtle manner, with the shared view that both are driven towards the success of the project and the delivery of a quality product. Things are thereby communicated and accepted more constructively.

In the absence of a good rapport, such communication can sound offensive to the developer/analyst, given the developer's characteristic sensitivity to criticism and feedback.

It is therefore recommended that the tester do the following:

a. Bear in mind, and make the developers and analysts realise, that everyone is working towards the success of the project by delivering a quality product.
b. Communicate issues and concerns preferably as documentation, explaining clearly which scenario you have considered, the steps you have followed, and, where the area has not been covered in the specs, how you think the system should work. Where the scenario is covered in the specs, document the actual result and the expected result. This way, your point comes across clearly without offending the other person.
c. Try to understand how the other person feels and why they react as they do.
d. Confirm that the developer/analyst has understood your point of view, preferably by providing examples of other similar scenarios that have worked correctly, so as to show that you have sound technical knowledge of the system.
