Manual Testing (Manual functional testing) is a process in which we compare the behaviour of a
developed piece of code (Software, module, API, feature, etc.) against the expected behaviour
(Requirement).
We learn the expected behaviour by reading or listening to the requirement carefully and understanding it fully. Remember, understanding the requirement is very important.
Curiosity, attentiveness, discipline, logical thinking, passion for the work and the ability to dissect things matter a lot in becoming a destructive and successful tester.
Automation vs. Manual:
Automation: When we automate, our focus is steady; that is, we automate the steps as written.
Manual: We not only focus on executing the written test cases, but also perform a lot of exploratory testing while doing so.
SRS Walkthrough
SRS review and test scenario preparation
Test plan
Test cases
Application walkthrough and test execution
Defect reporting
Defect verification
QA Sign-off
Procedure (SDLC):
Initiate:
Once the producer and the customer agree on terms, software production begins.
In this phase, business requirements are gathered and analysed. The analysis involves decisions on technological considerations, hardware and software specifications, people, effort, time relevance and improvements, among others.
Business analysts, project managers and client representatives are involved in this step.
At the end of this step, a basic project plan is prepared.
Project specific documents like scope document and/or business requirements are
made.
Define:
The business requirements finalized are the Inputs for this step.
This phase involves the translation of business requirements into functional requirements
for the software.
While the functional requirements are being finalized and the SRS is being documented, the QA manager/lead drafts an initial version of the test plan and forms a QA team.
At this stage either the development team, the business analyst, or sometimes even the QA team lead will give a walkthrough of the SRS to the QA team.
For a new project, a thorough walkthrough in the form of a conference or meeting works best.
For later releases of an existing project, the document is sent via email or placed in a common repository for the QA team, which then reads/reviews it offline and understands the system thoroughly.
Since the primary target audience for the SRS document is not just testers, not all of it is
useful for us. We testers should be diligent enough when reviewing this document to decide
what parts of it are useful for us and what parts of it are not.
The functional requirements are translated into the technical details. The dev, design, environment
and data teams are involved in this step. The outcome of this step is typically a Technical Design
Document (TDD).
What is an SRS review?
SRS review is nothing but going through the functional requirements specification document and
trying to understand what the target application is going to be like.
SRS is a document that is created by the development team in collaboration with business analysts
and environment/data teams. Typically, this document once finalized will be shared with the QA
team via a meeting where a detailed walkthrough is arranged. Sometimes, for an already existing
application, we might not need a formal meeting or someone guiding us through the document.
Test scenario:
1. Test scenarios are not external deliverables (not shared with business analysts or dev teams) but are important for internal QA consumption.
a. Test scenarios, once complete, undergo a peer review and are then finalized.
2. We could use a test management tool like HP ALM or qTest to create the test scenarios. However, in practice, test scenario creation is a manual activity.
Types of Reviews:
Why review? For exactly the same reasons we test the software, for example:
1. To uncover errors
2. To check for completeness
3. To make sure standards and guidelines are adhered to, etc.
Once each requirement has passed through these checks, we can evaluate and freeze the functional requirements.
Conclusion:
Example:
Requirement: “The web application should be able to serve user queries as early as possible.”
The tester should ask the stakeholders: “How much response time is acceptable to you?”
Stakeholders’ answer: “We will accept the response if it’s within 2 seconds.” (i.e. the requirement now has a measurable value.)
The tester freezes the requirement and carries out the same procedure for the next requirement.
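Once frozen, a measurable requirement like this can be checked directly in a test. A minimal sketch follows; `fetch_user_query_result` is a hypothetical stand-in for the real application call, and the 2-second limit is the value agreed with the stakeholders above.

```python
import time

# The threshold frozen with the stakeholders in the example above.
RESPONSE_TIME_LIMIT_SECONDS = 2.0

def fetch_user_query_result():
    """Hypothetical placeholder for the actual web application call."""
    time.sleep(0.1)  # simulate a fast response
    return "result"

def test_response_time():
    start = time.monotonic()
    fetch_user_query_result()
    elapsed = time.monotonic() - start
    assert elapsed <= RESPONSE_TIME_LIMIT_SECONDS, (
        f"Response took {elapsed:.2f}s, limit is {RESPONSE_TIME_LIMIT_SECONDS}s"
    )

test_response_time()
print("requirement met")
```

The point is that a vague requirement ("as early as possible") cannot be asserted; a frozen, measurable one can.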
What is a test plan?
Test plan is a dynamic document. Test plan is more or less like a blueprint of how the testing
activity is going to take place in a project.
It is also a document that we share with the Business Analysts, Project Managers, DEV team
and other teams. This helps to enhance the level of transparency of the QA team’s work to
the external teams.
It is documented by the QA manager/QA lead based on the inputs from the QA team
members.
Test plan is not static and is updated on an on-demand basis.
The more detailed and comprehensive the plan, the more successful the testing activity will be.
Test Strategy:
Test strategy outlines the testing approach and everything else that surrounds it.
Example: a test plan specifies who is going to test what and when. For
example, Module 1 is going to be tested by tester X. If tester Y replaces X for some
reason, the test plan has to be updated.
By contrast, a test strategy contains details like “Individual modules are
to be tested by test team members.” In this case it does not matter who is testing,
so the statement is generic and a change of team member does not require an update,
keeping the strategy static.
The test design involves the details on what to test and how to test.
Test Scenario:
This is mostly a one-line definition of “What” we are going to test with respect to a certain feature.
Test condition:
Test conditions, on the other hand, are more specific. A test condition can be roughly defined as the aim/goal of a certain test.
Test procedure:
The test procedure is a combination of test cases based on a certain logical reason, like executing
an end-to-end situation or something to that effect. The order in which the test cases are to be run
is fixed.
For example, if I were to test sending an email from Gmail.com, the order of test cases
I would combine to form a test procedure might be (illustratively): log in, compose the
email, send it, and verify it appears in the Sent folder.
All the test cases above are grouped to achieve a certain target at the end of them. Also, test
procedures have only a few test cases combined at any point in time.
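As a sketch, a test procedure is simply an ordered sequence of steps whose order is fixed; the Gmail step names below are illustrative assumptions, not real test cases.

```python
# Each step stands in for one test case in the procedure.
def login():
    return "logged in"

def compose_email():
    return "composed"

def send_email():
    return "sent"

def verify_sent_items():
    return "verified"

# The procedure fixes the execution order: the steps only make sense run in sequence.
test_procedure = [login, compose_email, send_email, verify_sent_items]

results = [step() for step in test_procedure]  # executed strictly in order
print(results)
```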
Test suite:
List of all test cases that have to be executed as a part of a test cycle or a regression phase etc.
There is no logical grouping based on functionality. The order in which the constituent test cases get
executed may or may not be important.
Example of a test suite: suppose an application’s current version is 2.0. The previous
version, 1.0, might have had 1000 test cases to test it entirely. For version 2.0 there are 500
test cases to test just the new functionality added in this version. So the current test suite
would be 1000 + 500 test cases, covering both regression and the new functionality. The suite is a
combination too, but we are not trying to achieve a target function.
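The suite composition in the example can be sketched as a plain list concatenation with no logical grouping; the test case IDs are invented for illustration.

```python
# 1000 regression cases carried over from version 1.0, plus 500 new-feature cases.
regression_cases = [f"TC-{i:04d}" for i in range(1, 1001)]
new_feature_cases = [f"TC-{i:04d}" for i in range(1001, 1501)]

# A suite is just the flat collection to be executed in a cycle; order may not matter.
test_suite_v2 = regression_cases + new_feature_cases
print(len(test_suite_v2))  # 1500
```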
Out of scope => Enhanced clarity on what we are not going to cover
Assumptions => All the conditions that need to hold true for us to be able to proceed successfully
Test execution => Who is to do what
Design:
While the rest of the project team was spending its time on the Technical Design Document (TDD), we in QA identified the testing scope (test scenarios) and created the first dependable test plan draft.
Coding:
Developers are the primary point of focus for the entire team in this phase.
The QA team also engages in test case creation.
What is Software Testing Life Cycle (STLC)
Software Testing Life Cycle refers to a testing process with specific steps to be
executed in a definite sequence, to ensure that the quality goals have been met.
In the STLC process, each activity is carried out in a planned and systematic way.
Test Cases:
Let us take a moment, to observe the fields that are part of a test case.
Test case Id and Test case description – these are the generic ones.
a) Precondition – state of the AUT (the state in which the AUT needs to be for us to get
started)
b) Input – data entry steps. For these steps it is important to note what kind of input info is
required – Test data
c) Validation point/trigger/action – what causes the validation to happen? (A click of a
button, a toggle, or a link access. Make sure there is at least one validation point per test
case; otherwise it is all data entry with nothing to look for. Also, to ensure enough
modularity, try not to combine too many validation points into one test case; one
per test case is optimal.)
d) Output – expected result
e) Postcondition – This is additional information that is provided for the benefit of the tester,
just to make the test case more insightful and informative. This includes an explanation of
what happens or what can be expected of the AUT once all the test case steps are done.
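As a sketch, the fields above can be modelled as a simple data structure; the field names follow the anatomy just described, and the sample values are illustrative, not taken from any real tool.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    description: str
    precondition: str        # state the AUT must be in before we start
    inputs: list             # data entry steps / required test data
    validation_point: str    # the trigger that causes validation (one per case)
    expected_output: str
    postcondition: str = ""  # optional extra insight for the tester

# Hypothetical example instance.
tc = TestCase(
    case_id="TC-0001",
    description="Verify login with valid credentials",
    precondition="User is registered and logged out",
    inputs=["enter username", "enter password"],
    validation_point="click the Login button",
    expected_output="User lands on the home page",
    postcondition="A session exists for the user",
)
print(tc.case_id, "-", tc.validation_point)
```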
The test cases we create are a point of reference not only for the QA
phase but also for UAT (User Acceptance Testing).
Internally, test cases are peer reviewed within the team.
To check whether the test suite we created achieves the 100% test
coverage goal, a traceability matrix can be created.
Equivalence Partitioning:
In this method, the input domain data is divided into different equivalence data classes. The method
is typically used to reduce the total number of test cases to a finite set of testable test cases,
while still covering the maximum number of requirements.
E.g.: If you are testing for an input box accepting numbers from 1 to 1000 then there is no
use in writing thousand test cases for all 1000 valid input numbers plus other test cases for
invalid data.
Using the equivalence partitioning method, the above test cases can be divided into three sets of
input data, called classes. Each test case is a representative of its class.
So in the above example, we can divide our test cases into three equivalence classes of
valid and invalid inputs.
Test cases for input box accepting numbers between 1 and 1000 using Equivalence
Partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a
valid test case. Selecting any other value between 1 and 1000 yields the same result,
so one test case for valid input data is sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid
input test case.
3) An input data class with any value greater than 1000, to represent the third (invalid) class.
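The three classes above can be sketched in code; the representative values chosen are arbitrary members of each class, which is the whole point of the technique.

```python
LOWER, UPPER = 1, 1000  # the input box accepts numbers from 1 to 1000

def classify(value):
    """Return the equivalence class a value belongs to."""
    if value < LOWER:
        return "invalid-below"
    if value > UPPER:
        return "invalid-above"
    return "valid"

# One representative test value per class is sufficient; any other member
# of the same class would behave the same way.
representatives = {"valid": 500, "invalid-below": -5, "invalid-above": 1500}

for cls, value in representatives.items():
    assert classify(value) == cls
print("3 classes covered with 3 test cases")
```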
It is widely recognized that input values at the extreme ends of the input domain cause more
errors in the system; more application errors occur at the boundaries of the input domain.
The ‘boundary value analysis’ testing technique is used to identify errors at the boundaries
rather than in the centre of the input domain.
Boundary value analysis complements equivalence partitioning in designing test cases:
test cases are selected at the edges of the equivalence classes.
Test cases for input box accepting numbers between 1 and 1000 using Boundary value
analysis:
1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and
1000 in our case.
2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.
3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.
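The six boundary test values above can be generated programmatically; the range limits mirror the 1–1000 example.

```python
LOWER, UPPER = 1, 1000  # input box accepts numbers from 1 to 1000

# Boundary value analysis: test exactly on, just below, and just above
# each boundary of the valid range.
boundary_values = sorted({
    LOWER, UPPER,          # exactly on the boundaries: 1, 1000
    LOWER - 1, UPPER - 1,  # just below: 0, 999
    LOWER + 1, UPPER + 1,  # just above: 2, 1001
})
print(boundary_values)  # [0, 1, 2, 999, 1000, 1001]
```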
Boundary value analysis is often considered part of stress and negative testing.
A Requirement Traceability Matrix helps link requirements, test cases, and defects
accurately. With requirement traceability, the whole application is tested (end-to-end
testing of the application is achieved).
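A minimal sketch of such a traceability matrix follows; the requirement, test case, and defect IDs are all hypothetical.

```python
# Each requirement maps to the test cases that cover it and any defects raised.
rtm = {
    "REQ-01": {"test_cases": ["TC-01", "TC-02"], "defects": ["DEF-07"]},
    "REQ-02": {"test_cases": ["TC-03"],          "defects": []},
    "REQ-03": {"test_cases": [],                 "defects": []},  # uncovered!
}

# The matrix makes coverage gaps visible: any requirement with no linked
# test case fails the 100% coverage goal.
uncovered = [req for req, links in rtm.items() if not links["test_cases"]]
print("Requirements with no test coverage:", uncovered)  # ['REQ-03']
```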
#1) Business Requirements Document (BRD):
The actual customer requirements are listed in a document known as the Business
Requirements Document (BRD). The ‘Software Requirement Specification’ (SRS) document is
derived from the BRD.
#2) Software Requirements Specification Document (SRS):
It is a detailed document which contains all the meticulous details of the functional and
non-functional requirements. The SRS is the baseline for designing and developing the software
application.
#3) Product Requirement Document (PRD):
The PRD is a reference document for all the team members in a project to tell them exactly
what a product should do. It can be divided into sections like Purpose of the Product,
Product Features, Release Criteria, and Budgeting & Schedule of the project.
#4) Use Case Document:
It is the document that helps in designing and implementing the software as per the business
needs. It maps the interactions between an actor and an event, with a role that needs to be
performed to achieve a goal.
#5) Defect Verification Document:
It is a document containing all the details related to defects. The team can maintain a ‘Defect
Verification’ document for the fixing and retesting of defects. Testers can refer to the ‘Defect
Verification’ document when they want to verify whether defects are fixed, or to retest defects
on different OSes, devices, system configurations, etc.
#6) User Story:
A user story is primarily used in ‘Agile’ development to describe a software feature from
an end-user perspective. User stories define the types of users, and in what way and why they
want a certain feature. The requirement is simplified by creating user stories.
Micro Focus ALM Quality Center Tool Tutorial
Earlier, it was known as HP Quality Center (QC). HP QC acts as a test management tool,
while HP ALM acts as a project management tool. HP QC was renamed HP ALM from
version 11.0 onward. This tutorial will serve as a guide for those who are new to the
tool.
QC Versus ALM
Project planning and tracking: This tool allows users to create KPIs (Key Performance
Indicators) using ALM data and track them against project milestones.
Defect Sharing: This tool provides the ability to share defects across multiple projects.
Project Reporting: This tool provides customized project reporting across multiple projects
using pre-defined templates.
Integration with third-party tools: This tool provides integration with third-party tools such
as HP LoadRunner, HP Unified Functional Testing, and a REST API.
The number of phases and the phase description differs from organization to organization and
from project to project.
#1) New: A defect will be in New status when a defect is raised and submitted. This is the
default status for every defect initially on HP ALM.
#2) Open: A defect will be in open status when a developer has reviewed the defect and
starts working on it if it is a valid defect.
#3) Rejected: A defect will be in Rejected status when a developer considers the defect to be
invalid.
#4) Deferred: If the defect is a valid defect, but the fix is not delivered in the current release,
a defect will be postponed to the future releases using the status Deferred.
#5) Fixed: Once the developer has fixed the defect and assigned the defect back to the
Quality Assurance Personnel, then it will have Fixed status.
#6) Retest: Once the fix is deployed, the tester has to start retesting the defect.
#7) Reopen: If the retest has failed, a tester has to reopen the defect and assign the defect
back to the developer.
#8) Closed: If the defect fix is delivered and is working as expected, then the tester needs to
close the defect using the status ‘Closed’.
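The status flow above can be sketched as a small state machine. The allowed transitions below are inferred from statuses #1–#8; the Reopen → Fixed edge is an assumption, since a reopened defect goes back to the developer for another fix.

```python
# Allowed defect status transitions, following statuses #1-#8 above.
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Rejected": set(),          # terminal: defect judged invalid
    "Deferred": {"Open"},       # picked up again in a future release
    "Fixed":    {"Retest"},
    "Retest":   {"Reopen", "Closed"},
    "Reopen":   {"Fixed"},      # assumption: reopened defects go back for a fix
    "Closed":   set(),          # terminal: fix verified
}

def move(current, target):
    """Apply a transition, rejecting any move the lifecycle does not allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

# Happy path: a valid defect gets fixed and verified on the first retest.
status = "New"
for step in ["Open", "Fixed", "Retest", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```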
Test Execution:
Once the test cases are written, shared with the BAs and Dev team, reviewed by them, changes are
notified to the QA team (if any), QA team makes necessary amends- Test design phase is complete.
Now getting the Test cases ready does not mean we can initiate the test run. We need to have the
application ready as well among other things.