
Manual Testing:

 Any testing that we perform manually, for example:


a. Manual Functional testing
b. Measuring the response time of a web page manually
c. Security test which we perform manually.

In common usage, however, the term strictly means manual functional testing (option a).

Manual Testing (Manual functional testing) is a process in which we compare the behaviour of a
developed piece of code (Software, module, API, feature, etc.) against the expected behaviour
(Requirement).

How will we know what the expected behaviour is?

We will know it by reading or listening to the requirement carefully and understanding it fully.
Remember, understanding the requirement is very very important.

Qualities of highly effective testers

Curiosity, attentiveness, discipline, logical thinking, passion for the work and the ability to
dissect things matter a lot in becoming a destructive and successful tester.

Automation testing versus manual testing:

Automation: When we automate, our focus is steady, that is, executing exactly the steps
that were scripted.

Manual: We not only focus on executing the written test cases, but we also perform a lot of
exploratory testing while doing so.

Sample testing process:

SRS Walkthrough
SRS review and test scenario preparation
Test plan
Test cases
Application walkthrough and test execution
Defect reporting
Defect verification
QA Sign-off

Procedure (SDLC):

Initiate:

 Once the producer and the customer agree on terms – the software production begins.

 In this phase, business requirements are gathered and analysed. The analysis is
going to involve the decisions on technological considerations, hardware & software
specifications, people, effort, time relevance and improvements among others.
 Business analysts, project managers and client representatives are involved in this
step.
 At the end of this step, a basic project plan is prepared.
 Project specific documents like scope document and/or business requirements are
made.

Define:

 The business requirements finalized are the Inputs for this step.
 This phase involves the translation of business requirements into functional requirements
for the software.

Business requirement: Allow a user to buy something from a site.

Corresponding functional requirements: Site format -> menu option name and placement ->
search product -> shopping cart -> checkout (registration or not) -> payment options ->
confirmation of sale.

 Developers, Business Analysts, Project managers are involved in this phase.


 The output of this phase is a detailed document containing the functional requirements of
the software. This document is referred to by many names – Software Requirement
Specification (SRS) / Functional Requirement Document (FRD) / Functional Requirement
Specification (FRS)
 QA team gets involved – after the completion of SRS documentation

 While the functional requirements are being finalized and the SRS is being documented,
the QA manager/lead is involved in drafting an initial version of the test plan and in
forming a QA team.
 At this stage either the development team or the business analyst or sometimes even the
QA team lead will give a walkthrough of the SRS to the QA team.
 In the case of a new project, a thorough walkthrough in the form of a conference or
meeting works best.
 In the case of later releases of an existing project, the document is sent via email or
placed in a common repository for the QA team. The QA team at this point would read/review
it offline and understand the system thoroughly.
 Since the primary target audience for the SRS document is not just testers, not all of it is
useful for us. We testers should be diligent enough when reviewing this document to decide
what parts of it are useful for us and what parts of it are not.

How to review SRS Document and create test scenarios:


Design:

The functional requirements are translated into the technical details. The dev, design, environment
and data teams are involved in this step. The outcome of this step is typically a Technical Design
Document (TDD).
What is an SRS review?

SRS review is nothing but going through the functional requirements specification document and
trying to understand what the target application is going to be like.

SRS is a document that is created by the development team in collaboration with business analysts
and environment/data teams. Typically, this document once finalized will be shared with the QA
team via a meeting where a detailed walkthrough is arranged. Sometimes, for an already existing
application, we might not need a formal meeting and someone guiding us through this document.

What do we need to get started?

 The correct version of the SRS document


 Clear instructions on who is going to work on what and how much time they have.
 A template to create test scenarios
 Other information on- who to contact in case of a question or who to report in case of
a documentation inconsistency

Test scenario:

One-line pointers on “what to test” for a certain functionality.

Some important observations regarding SRS review:

#1. No information is to be left uncovered.


#2. Perform a feasibility analysis on whether a certain requirement is correct and
whether it can be tested.
#3. Unless a separate performance/security (or other specialized) test team exists, it is our
job to make sure that all non-functional requirements are taken into consideration.
#4. Not all information is targeted at testers, so it is important to understand what to note
and what not to.
#5. The importance and number of test cases for a test scenario need not be accurate and can
be filled in with an approximate value or left empty.

To sum up, SRS review results in:

 Test scenarios list


 Review results – documentation/requirement errors found by statically going
through/verifying the SRS document
 A list of questions for better understanding, in case there are any
 Preliminary idea of how the test environment is supposed to be like
 Test scope identification and a rough idea on how many test cases we might end up
having- so how much time we need for documentation and eventually execution.
Important points to note:

1. Test scenarios are not external deliverables (not shared with Business Analysts or Dev
teams) but are important for internal QA consumption.
a. Test scenarios, once complete, undergo a peer review.

Types of Reviews:

1. Reviewing your own work – Self Checking


2. Peer- review
3. Supervisory

Why review? – For exactly the same reasons we test the software, for example:

1. To uncover errors
2. To check for completeness
3. To check whether standards and guidelines are adhered to …etc.
Once each requirement is passed through these tests we can evaluate and freeze the functional
requirements.

2. We could use a test management tool like HP ALM or qTest to store the test scenarios.
However, creating the test scenarios themselves is, in practice, a manual activity.

Conclusion:

“Requirements should be clear and specific with no uncertainty, requirements should be
measurable in terms of specific values, requirements should be testable, having some evaluation
criteria for each requirement, and requirements should be complete, without any contradictions”

Example:

Requirement: “Web application should be able to serve the user queries as early as possible”

Tester should ask the stakeholders  How much response time is acceptable for you?

Stakeholders answer  We will accept the response if it arrives within 2 seconds. (i.e. the
requirement is now measurable)

Tester  Freezes the requirement and carries out the same procedure for the next requirement too.
What is a test plan?

Test plan is a dynamic document. Test plan is more or less like a blueprint of how the testing
activity is going to take place in a project.

 It is also a document that we share with the Business Analysts, Project Managers, DEV team
and other teams. This helps to enhance the level of transparency of the QA team’s work to
the external teams.
 It is documented by the QA manager/QA lead based on the inputs from the QA team
members.
 Test plan is not static and is updated on an on-demand basis.
 The more detailed and comprehensive the plan is, the more successful the testing
activity will be.

Test Strategy:

Test strategy outlines the testing approach and everything else that surrounds it.

 A test strategy is a subset of the test plan.


 It is a core test document that is, to an extent, generic and static.

 Example: Test plan gives the information of who is going to test at what time. For
example, Module 1 is going to be tested by “X tester”. If tester Y replaces X for some
reason, the test plan has to be updated.
 On the contrary, a test strategy is going to have details like – “Individual modules are
to be tested by test team members. “ In this case, it does not matter who is testing it-
so it’s generic and the change in the team member does not have to be updated,
keeping it static.

The test design involves the details on what to test and how to test.
Test Scenario:

This is mostly a one-line definition of “What” we are going to test with respect to a certain feature.

Example test scenarios:

1. Validate if a new country can be added by the Admin


2. Validate if an existing country can be deleted by the admin
3. Validate if an existing country can be updated

Test condition:

Test conditions, on the other hand, are more specific. A test condition can be roughly
defined as the aim/goal of a certain test.

Example test condition:


In the above example, if we were to test scenario 1, we can test the following conditions:
1. Enter the country name as “India” (valid) and check that the country is added.
2. Enter a blank name and check whether the country gets added.
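The two conditions above translate directly into checks. A minimal sketch, where `add_country` is a hypothetical stand-in for the Admin's "add country" operation:

```python
def add_country(name, countries):
    """Hypothetical admin operation for illustration: reject blank names."""
    if not name or not name.strip():
        raise ValueError("Country name must not be blank")
    countries.append(name.strip())
    return countries


countries = []

# Condition 1: a valid name ("India") is added to the list
add_country("India", countries)
assert "India" in countries

# Condition 2: a blank name must not be added
try:
    add_country("   ", countries)
    raise AssertionError("blank country name should have been rejected")
except ValueError:
    pass  # expected: the blank name was rejected
```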
What is a difference between Test procedure and Test suite?

Test procedure:

The test procedure is a combination of test cases based on a certain logical reason, like executing
an end-to-end situation or something to that effect. The order in which the test cases are to be run
is fixed.

For example, if I was to test the sending of an email from Gmail.com, the order of test cases
that I would combine to form a test procedure would be:

1. The test to check the login


2. The test to compose email
3. The test to attach one/more attachments
4. Formatting the email in the required way by using various options
5. Adding contacts or email addresses to the To, BCC, CC fields
6. Sending email and making sure it is showing in the “Sent Mail” section

All the above test cases are grouped to achieve a certain target at the end. Also, a test
procedure typically combines only a few test cases at any point in time.

Test suite:

List of all test cases that have to be executed as a part of a test cycle or a regression phase etc.
There is no logical grouping based on functionality. The order in which the constituent test cases get
executed may or may not be important.

Example of a test suite: Suppose an application’s current version is 2.0. The previous version
1.0 might have had 1000 test cases to test it entirely. For version 2.0 there are 500 test cases
to test just the new functionality added in this version. So the current test suite would be
1000 + 500 test cases, covering both regression and the new functionality. The suite is a
combination too, but we are not trying to achieve a target function with it.

Test suites can contain 100s or even 1000s of test cases.


Components of a Test Plan Document

Items in a test plan and what they contain:

Scope => Test scenarios/test objectives that will be validated.

Out of scope => Enhanced clarity on what we are not going to cover.

Assumptions => All the conditions that need to hold true for us to be able to proceed
successfully.

Schedules => Test scenario preparation; test documentation (test cases/test data/setting up
the environment); test execution; test cycles (how many cycles, start and end date for each
cycle).

Roles and Responsibilities => Team members are listed; who is to do what; module owners are
listed along with their contact info.

Deliverables => What documents (test artifacts) are going to be produced at what time frames;
what can be expected from each document.

Environment => What kind of environment requirements exist; who is going to be in charge;
what to do in case of problems.

Tools => For example: JIRA for bug tracking (login details and how to use JIRA).

Defect Management => Whom are we going to report the defects to; how are we going to report
them; what is expected (e.g. do we provide a screenshot?).

Risks and Risk Management => Risks are listed; risks are analyzed (likelihood and impact are
documented); risk mitigation plans are drawn up.

Exit criteria => When to stop testing.

SDLC Code phase:

While the rest of the project team was spending their time on the Technical Design Document
(TDD), we in QA identified the testing scope (test scenarios) and created the first dependable
draft of the test plan.

Coding:

 During this phase, developers are the primary point of focus for the entire team.
 The QA team, meanwhile, works on test case creation.
What is Software Testing Life Cycle (STLC)

 Software Testing Life Cycle refers to a testing process which has specific steps to be
executed in a definite sequence to ensure that the quality goals have been met.
 In STLC process, each activity is carried out in a planned and systematic way.
Test Cases:

Test Scenario: What we are going to test on the AUT (Application Under Test).

Test case: How we are going to test a requirement.

Functional Requirement Document (FRD)  Test scenarios  Test cases

Fields in Test Cases:

Let us take a moment to observe the fields that are part of a test case.

Test case Id and Test case description – these are the generic ones.

The other fields can be explained as follows:

a) Precondition – state of the AUT (the state in which the AUT needs to be for us to get
started)
b) Input – data entry steps. For these steps it is important to note what kind of input info is
required – Test data
c) Validation point/trigger/action – what is causing the validation to happen? (Click of a
button or toggle or the link access. Make sure there is at least one validation point to a test
case- otherwise it is all going to be data entry with nothing to look for. Also to ensure that we
have enough modularity, try not to combine too many validation points into one test case. 1
per test case is optimum.)
d) Output – expected result
e) Postcondition – This is additional information that is provided for the benefit of the tester,
just to make the test case more insightful and informative. This includes an explanation of
what happens or what can be expected of the AUT once all the test case steps are done.
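The fields above can be captured in a small data structure. A sketch using the "add country" scenario from earlier; all field names and values are illustrative, not a prescribed template:

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One test case holding the fields discussed above; values are illustrative."""
    case_id: str
    description: str
    precondition: str          # state the AUT must be in before we start
    inputs: list               # data entry steps and the test data they need
    validation_point: str      # the single action that triggers the check
    expected_output: str
    postcondition: str = ""    # optional extra context for the tester


tc = TestCase(
    case_id="TC-001",
    description="Validate that the Admin can add a new country",
    precondition="Admin is logged in and on the Countries page",
    inputs=["Enter 'India' in the country name field"],
    validation_point="Click the 'Add' button",
    expected_output="'India' appears in the country list",
    postcondition="The country list contains one more entry than before",
)
```

Note that the sketch deliberately has a single `validation_point`, matching the advice that one validation point per test case is optimum.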

Test Cases Writing/Optimization Methods

 Boundary value analysis


 Equivalence partitioning
 Error guessing – This is a very simple method and relies on a tester’s intuition. An
example is: Say there is a date field on a page. The requirements are going to specify
that a valid date is to be accepted by this field. Now, a tester can try “Feb 30” as a
date- because as far as the numbers are concerned, it is a valid input, but February is a
month that never has 30 days in it- so an invalid input.
 State transition diagrams
 Decision tables
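The "Feb 30" error guess from the list above can be expressed as a quick check. A sketch, where `is_valid_date` is a hypothetical validator built on Python's `datetime` parsing:

```python
from datetime import datetime


def is_valid_date(date_string):
    """Return True if the string is a real calendar date (illustrative validator)."""
    try:
        datetime.strptime(date_string, "%b %d %Y")
        return True
    except ValueError:
        return False


# Error guessing: "Feb 30" looks numerically plausible but is not a real date.
assert is_valid_date("Feb 28 2023") is True
assert is_valid_date("Feb 30 2023") is False   # February never has 30 days
assert is_valid_date("Feb 29 2024") is True    # leap year: another classic guess
```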
A few important points to note:

 The test cases we create are a point of reference not only for the QA
phase but also for UAT (User Acceptance Testing).
 Internally, test cases are peer reviewed within the team.
 To check whether the test suite we created achieves the 100% test
coverage goal, a traceability matrix can be created.

What is Boundary value analysis and Equivalence partitioning?


Boundary value analysis and equivalence partitioning both are test case design strategies in black
box testing.

Equivalence Partitioning:

In this method, the input domain data is divided into different equivalence data classes. This method
is typically used to reduce the total number of test cases to a finite set of testable test cases, still
covering maximum requirements.

E.g.: If you are testing an input box accepting numbers from 1 to 1000, there is no
point in writing a thousand test cases for all 1000 valid input numbers, plus other test cases
for invalid data.

Using the equivalence partitioning method, the above test cases can be divided into three sets
of input data, called classes. Each test case is a representative of its class.

So in the above example, we can divide our test cases into three equivalence classes covering
valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence
Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a
valid test case. If you select other values between 1 and 1000, the result is going to be the
same, so one test case for valid input data should be sufficient.

2) Input data class with all values below the lower limit. I.e. any value below 1, as an invalid
input data test case.

3) Input data with any value greater than 1000 to represent third invalid input class.
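The three classes reduce to three representative checks. A sketch, where `accepts_number` is a hypothetical stand-in for the input box's validation rule:

```python
def accepts_number(value):
    """Hypothetical input box rule: accept integers from 1 to 1000."""
    return isinstance(value, int) and 1 <= value <= 1000


# One representative value per equivalence class is enough:
assert accepts_number(500) is True    # class 1: any valid value in 1..1000
assert accepts_number(-5) is False    # class 2: any value below the lower limit
assert accepts_number(2000) is False  # class 3: any value above the upper limit
```

Three assertions stand in for the thousands of possible inputs, which is exactly the reduction equivalence partitioning is meant to achieve.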

Boundary value analysis:

It is widely recognized that input values at the extreme ends of the input domain cause more
errors in the system: more application errors occur at the boundaries of the input domain. The
‘boundary value analysis’ testing technique is used to identify errors at the boundaries rather
than those that exist in the center of the input domain.

Boundary value analysis is the natural next step after equivalence partitioning when designing
test cases: test cases are selected at the edges of the equivalence classes.

Test cases for input box accepting numbers between 1 and 1000 using Boundary value
analysis:
1) Test cases with test data exactly at the input boundaries of the input domain, i.e. values 1
and 1000 in our case.

2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.

3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.

Boundary value analysis is often considered a part of stress and negative testing.
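The six boundary values translate into direct checks. A sketch, where `accepts_number` is a hypothetical stand-in for the 1-to-1000 input box rule:

```python
def accepts_number(value):
    """Hypothetical input box rule: accept integers from 1 to 1000."""
    return isinstance(value, int) and 1 <= value <= 1000


# Exactly on the boundaries: both should be accepted
assert accepts_number(1) and accepts_number(1000)

# Just below each boundary: 0 is invalid, 999 is still valid
assert not accepts_number(0) and accepts_number(999)

# Just above each boundary: 2 is still valid, 1001 is invalid
assert accepts_number(2) and not accepts_number(1001)
```

An off-by-one bug in the rule (say, `1 < value` instead of `1 <= value`) would slip past a mid-range equivalence-class value like 500 but is caught immediately by the boundary value 1.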

How to Create Requirements Traceability Matrix (RTM):

Requirement Traceability Matrix helps to link the requirements, test cases, and defects
accurately. The whole of the application is tested by having requirement traceability (End to End
testing of an application is achieved).
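A minimal RTM can be modeled as a mapping from requirements to the test cases and defects that trace back to them. All IDs below are made up for illustration:

```python
# A minimal requirements traceability matrix as nested mappings.
rtm = {
    "REQ-001": {"test_cases": ["TC-001", "TC-002"], "defects": ["DEF-010"]},
    "REQ-002": {"test_cases": ["TC-003"], "defects": []},
    "REQ-003": {"test_cases": [], "defects": []},   # uncovered requirement!
}

# Coverage check: every requirement should map to at least one test case.
uncovered = [req for req, row in rtm.items() if not row["test_cases"]]
assert uncovered == ["REQ-003"]
```

Running such a coverage check over the matrix is what lets the team claim (or disprove) the 100% test coverage goal mentioned earlier.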

Types of Requirement Specifications

#1) Business Requirements:
The actual customers’ requirements are listed in a document known as the Business
Requirements Document (BRD). The ‘Software Requirement Specification’ (SRS) document is
derived from the BRD.
#2) Software Requirements Specification Document (SRS):
It is a detailed document which contains all the meticulous details of all functional and non-
functional requirements. This SRS is the baseline for designing and developing the software
application.

#3) Project Requirement Documents (PRD):

The PRD is a reference document for all the team members in a project to tell them exactly
what a product should do. It can be divided into the sections like Purpose of the product,
Product Features, Release Criteria and Budgeting & Schedule of the project.

#4) Use Case Document:

It is the document that helps in designing and implementing the software as per the business
needs. It maps the interactions between an actor and an event with a role that needs to be
performed to achieve a goal.

#5) Defect Verification Document:

It is a document containing all the details related to defects. The team can maintain a ‘Defect
Verification’ document for the fixing and retesting of defects. Testers can refer to the ‘Defect
Verification’ document when they want to verify whether defects are fixed or not, or to retest
defects on a different OS, device, system configuration, etc.

#6) User Stories:

A user story is primarily used in ‘Agile’ development to describe a software feature from
an end-user perspective. User stories define the types of users and in what way and why they
want a certain feature. Requirements are simplified by creating user stories.
Micro Focus ALM Quality Center Tool Tutorial

HP ALM is software designed to manage the various phases of the Software
Development Life Cycle (SDLC), right from requirements gathering to testing.

Earlier, it was known as HP Quality Center (QC). HP QC acts as a test management tool,
while HP ALM acts as a project management tool. HP QC was renamed HP ALM from
version 11.0 onward. I am sure that this tutorial will really be a guide to those who are new
to this tool.

The following are the list of features provided by this tool:

 Release Management: To achieve traceability between test cases and releases.


 Requirement Management: To ensure if the test cases cover all the specified
requirements or not.
 Test case management: To maintain the version history of the changes done to test
cases and act as a central repository for all the test cases of an application.
 Test Execution management: To track multiple instances of test case runs and to
ensure the credibility of the testing effort.
 Defect Management: To ensure if the major defects uncovered are visible to all
major stakeholders of the project and to make sure the defects follow a specified life
cycle till closure.
 Reports Management: To ensure if reports and graphs are generated to keep a track
of the project health.

QC Versus ALM

The HP Application Lifecycle Management tool provides the core functionality of HP
Quality Center along with the following features:

 Project planning and Tracking: This tool allows users to create KPIs (Key Performance
Indicators) using ALM data and track them against the project milestones.

 Defect Sharing: This tool provides the ability to share defects across multiple projects.

 Project Reporting: This tool provides customized project reporting across multiple projects
using pre-defined templates.

 Integration with third-party tools: This tool provides integration with third-party tools
such as HP LoadRunner and HP Unified Functional Testing, as well as a REST API.

Defect Lifecycle in HP ALM


A defect is raised when there is a deviation between the actual result and the expected result.
The defect lifecycle defines the phases a defect goes through during its lifetime.

The number of phases and the phase description differs from organization to organization and
from project to project.

In general, a Defect in ALM tool will go through the following phases.

#1) New: A defect will be in New status when a defect is raised and submitted. This is the
default status for every defect initially on HP ALM.

#2) Open: A defect will be in open status when a developer has reviewed the defect and
starts working on it if it is a valid defect.

#3) Rejected: A defect will be in Rejected status when a developer considers the defect to be
invalid.

#4) Deferred: If the defect is a valid defect, but the fix is not delivered in the current release,
a defect will be postponed to the future releases using the status Deferred.

#5) Fixed: Once the developer has fixed the defect and assigned the defect back to the
Quality Assurance Personnel, then it will have Fixed status.

#6) Retest: Once the fix is deployed, the tester has to retest the defect.

#7) Reopen: If the retest has failed, a tester has to reopen the defect and assign the defect
back to the developer.
#8) Closed: If the defect fix is delivered and is working as expected, then the tester needs to
close the defect using the status ‘Closed’.
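The phases above form a small state machine. A simplified sketch of the allowed transitions; real ALM workflows are configurable per project, so this encoding is illustrative only:

```python
# Allowed next statuses for each defect status (simplified sketch).
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopen"},
    "Reopen":   {"Fixed"},
    "Deferred": {"Open"},     # picked up again in a future release
    "Rejected": set(),        # terminal: defect judged invalid
    "Closed":   set(),        # terminal: fix verified
}


def move(status, new_status):
    """Advance a defect to new_status, or raise if the transition is illegal."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Illegal transition {status} -> {new_status}")
    return new_status


# Happy path: New -> Open -> Fixed -> Retest -> Closed
status = "New"
for nxt in ["Open", "Fixed", "Retest", "Closed"]:
    status = move(status, nxt)
assert status == "Closed"
```

Encoding the workflow this way makes illegal moves (e.g. closing a defect that was never retested) fail loudly instead of slipping through.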

Test Execution:

Once the test cases are written, shared with the BAs and the Dev team, reviewed by them, any
changes are notified to the QA team, and the QA team makes the necessary amends, the test
design phase is complete. However, having the test cases ready does not mean we can initiate
the test run: among other things, we also need to have the application ready.
