
Test plans

A test plan documents the strategy that will be used to verify and ensure that a product or system
meets its design specifications and other requirements. A test plan is usually prepared by or with
significant input from Test Engineers.

Depending on the product and the responsibility of the organization to which the test plan
applies, a test plan may include one or more of the following:

• Design Verification or Compliance test - to be performed during the development or
approval stages of the product, typically on a small sample of units.
• Manufacturing or Production test - to be performed during preparation or assembly of
the product in an ongoing manner for purposes of performance verification and quality
control.
• Acceptance or Commissioning test - to be performed at the time of delivery or installation
of the product.
• Service and Repair test - to be performed as required over the service life of the product.
• Regression test - to be performed on an existing operational product, to verify that
existing functionality didn't get broken when other aspects of the environment are
changed (e.g., upgrading the platform on which an existing application runs).

A complex system may have a high level test plan to address the overall requirements and
supporting test plans to address the design details of subsystems and components.

Test plan document formats can be as varied as the products and organizations to which they
apply. There are three major elements that should be described in the test plan: Test Coverage,
Test Methods, and Test Responsibilities. These are also used in a formal test strategy.

Test coverage in the test plan states what requirements will be verified during what stages of the
product life. Test Coverage is derived from design specifications and other requirements, such as
safety standards or regulatory codes, where each requirement or specification of the design
ideally will have one or more corresponding means of verification. Test coverage for different
product life stages may overlap, but will not necessarily be exactly the same for all stages. For
example, some requirements may be verified during Design Verification test, but not repeated
during Acceptance test. Test coverage also feeds back into the design process, since the product
may have to be designed to allow test access (see Design For Test).

Test methods in the test plan state how test coverage will be implemented. Test methods may be
determined by standards, regulatory agencies, or contractual agreement, or may have to be
created new. Test methods also specify test equipment to be used in the performance of the tests
and establish pass/fail criteria. Test methods used to verify hardware design requirements can
range from very simple steps, such as visual inspection, to elaborate test procedures that are
documented separately.

Test responsibilities state which organizations will perform the test methods at each stage of the
product life. This allows test organizations to plan, acquire or develop test equipment and other
resources necessary to implement the test methods for which they are responsible. Test
responsibilities also include what data will be collected and how that data will be stored and
reported (often referred to as "deliverables"). One outcome of a successful test plan should be a
record or report of the verification of all design specifications and requirements as agreed upon
by all parties.

A test script in software testing is a set of instructions that will be performed on the system
under test to test that the system functions as expected.

There are various means for executing test scripts.

• Manually. These are more commonly called test cases.
• Automated
o Short program written in a programming language used to test part of the
functionality of a software system. Test scripts written as a short program can
either be written using a special automated functional GUI test tool (such as HP
QuickTest Professional, Borland SilkTest, and Rational Robot) or in a well-
known programming language (such as C++, C#, Tcl, Expect, Java, PHP, Perl,
Powershell, Python, or Ruby).
o Extensively parameterized short programs a.k.a. Data-driven testing
o Reusable steps created in a table a.k.a. keyword-driven - or table-driven testing.
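
As a minimal sketch of the kinds of automated test scripts listed above (the is_valid_username
function and its rule are hypothetical, invented only for illustration), a short program written in
Python using the standard unittest module might look like this:

    # A short automated test script using Python's built-in unittest module.
    # The function under test (is_valid_username) is a hypothetical example.
    import unittest

    def is_valid_username(name):
        # Hypothetical rule: 3 to 12 alphanumeric characters.
        return name.isalnum() and 3 <= len(name) <= 12

    class UsernameTests(unittest.TestCase):
        def test_accepts_valid_name(self):
            self.assertTrue(is_valid_username("alice99"))

        def test_rejects_too_short_name(self):
            self.assertFalse(is_valid_username("ab"))

        def test_data_driven_examples(self):
            # A lightly parameterized (data-driven) variant: expected results
            # are listed as data rather than written as separate methods.
            cases = [("bob", True), ("x", False), ("user name", False)]
            for name, expected in cases:
                self.assertEqual(is_valid_username(name), expected)

    if __name__ == "__main__":
        unittest.main()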

The major advantage of automated testing is that tests may be executed continuously without the
need for human intervention. Another advantage over manual testing is that it is easily
repeatable, and thus favoured for regression testing. It is worth considering automating tests if
they are to be executed several times, for example as part of regression testing.

Disadvantages of automated testing are that automated tests are often poorly written or simply
break during playback. Since most systems are designed with human interaction in mind, it is
good practice that a human tests the system at some point. Automated tests can only examine
what they have been programmed to examine. A trained manual tester can notice that the system
under test is misbehaving without being prompted or directed. Therefore, in regression testing,
manual testers can find new bugs while ensuring that old bugs do not reappear, whereas an
automated test can only ensure the latter.

A test case in software engineering is a set of conditions or variables under which
a tester will determine whether an application or software system is working
correctly or not. The mechanism for determining whether a software program or
system has passed or failed such a test is known as a test oracle. In some settings,
an oracle could be a requirement or use case, while in others it could be a heuristic.
It may take many test cases to determine that a software program or system is
functioning correctly. Test cases are often referred to as test scripts, particularly
when written. Written test cases are usually collected into test suites.
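
A test oracle can be as simple as a trusted reference implementation or a table of known-good
results. As a hedged sketch (the sorting routine below is invented for illustration), Python's
built-in sorted() can act as the oracle that decides pass or fail for a custom sort:

    # Test oracle sketch: the built-in sorted() is the trusted oracle used to
    # judge a hypothetical routine under test.
    def my_sort(values):
        # Routine under test (a simple insertion sort, invented for the example).
        result = []
        for v in values:
            i = 0
            while i < len(result) and result[i] < v:
                i += 1
            result.insert(i, v)
        return result

    def passes_oracle(values):
        # The oracle determines pass/fail by comparison with trusted behaviour.
        return my_sort(values) == sorted(values)

    assert passes_oracle([3, 1, 2])
    assert passes_oracle([])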

In order to fully test that all the requirements of an application are met, there must be at least two
test cases for each requirement: one positive test and one negative test, unless a requirement has
sub-requirements; in that situation, each sub-requirement must have at least two test cases.
Keeping track of the link between the requirement and the test is frequently done using a
traceability matrix. Written test cases should include a description of the functionality to be
tested, and the preparation required to ensure that the test can be conducted.
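
For illustration (the requirement wording and the identifiers below are invented, not taken from
any real project), such a traceability record can be kept as simple structured data, with at least
one positive and one negative test case per requirement:

    # Minimal traceability matrix sketch: each requirement is linked to at
    # least one positive and one negative test case (all names hypothetical).
    traceability = {
        "REQ-001: password must be at least 8 characters": {
            "positive": ["TC-001: accept an 8-character password"],
            "negative": ["TC-002: reject a 7-character password"],
        },
    }

    # A requirement counts as covered only if both lists are non-empty.
    for requirement, tests in traceability.items():
        assert tests["positive"] and tests["negative"], requirement + " lacks coverage"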

What characterizes a formal, written test case is that there is a known input and an expected
output, which is worked out before the test is executed. The known input should test a
precondition and the expected output should test a postcondition.

A test case is usually a single step, or occasionally a sequence of steps, used to test the correct
behaviour, functionality, or features of an application. An expected result or expected outcome is
usually given.

Additional information that may be included:

• test case ID
• test case description
• test step or order of execution number
• related requirement(s)
• depth
• test category
• author
• check boxes for whether the test is automatable and has been automated.

Additional fields that may be included and completed when the tests are executed:

• pass/fail
• remarks

Larger test cases may also contain prerequisite states or steps, and descriptions.
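
As a small sketch of how the fields listed above fit together (the login scenario and the
identifiers are hypothetical), a written test case can be represented as a simple record:

    # A written test case as a plain record; field names follow the lists
    # above, and the scenario itself is invented for illustration.
    test_case = {
        "id": "TC-010",
        "description": "Verify that a valid user can log in",
        "related_requirements": ["REQ-004"],
        "steps": [
            "Open the login page",
            "Enter a valid user name and password",
            "Press the Log in button",
        ],
        "expected_result": "The user's dashboard is displayed",
        "automatable": True,
        "automated": False,
        "pass_fail": None,  # completed when the test is executed
        "remarks": "",
    }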

Software testing is an investigation conducted to provide stakeholders with information about
the quality of the product or service under test.[1] Software Testing also provides an objective,
independent view of the software to allow the business to appreciate and understand the risks at
implementation of the software. Test techniques include, but are not limited to, the process of
executing a program or application with the intent of finding software bugs.

Software Testing can also be stated as the process of validating and verifying that a software
program/application/product:

1. meets the business and technical requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

User acceptance testing


User Acceptance Testing (UAT) is a process to obtain confirmation by a Subject Matter Expert
(SME), preferably the owner or client of the object under test, through trial or review, that a
system meets mutually agreed-upon requirements. In software development, UAT is one of the
final stages of a project and often occurs before a client or customer accepts the new system.

Users of the system perform these tests, which developers derive from the client's contract or the
user requirements specification.

Test designers draw up formal tests and devise a range of severity levels. It is preferable that the
designer of the user acceptance tests not be the creator of the formal integration and system test
cases for the same system; however, there are some situations where this cannot be avoided. The
UAT acts as a final verification of the required business function and proper functioning of the
system, emulating real-world usage conditions on behalf of the paying client or a specific large
customer. If the software works as intended and without issues during normal use, one can
reasonably infer the same level of stability in production. These tests, which are usually
performed by clients or end-users, are not usually focused on identifying simple problems such
as spelling errors and cosmetic problems, nor show stopper defects, such as software crashes;
testers and developers previously identify and fix these issues during earlier unit testing,
integration testing, and system testing phases.

The results of these tests give confidence to the clients as to how the system will perform in
production. There may also be a legal or contractual requirement for acceptance of the system.

Q-UAT - Quantified User Acceptance Testing


Quantified User Acceptance Testing (Q-UAT or, more simply, the Quantified Approach) is a
revised Business Acceptance Testing process which aims to provide a smarter and faster
alternative to the traditional UAT phase. Depth-testing is carried out against Business
Requirements only at specific planned points in the application or service under test. A reliance
on better quality code delivery from the Development/Build phase is assumed, and a complete
understanding of the appropriate Business Process is a prerequisite. This methodology, if carried
out correctly, results in a quick turnaround against plan; a decreased number of test scenarios
that are more complex and wider in breadth than in traditional UAT; and, ultimately, an
equivalent confidence level attained within a shorter delivery window, allowing products and
changes to be brought to market more quickly.

The Approach is based on a 'gated' 3-dimensional model the key concepts of which are:

• Linear Testing (LT, the 1st dimension)


• Recursive Testing (RT, the 2nd dimension)
• Adaptive Testing (AT, the 3rd dimension).

The four 'gates' which conjoin and support the 3-dimensional model act as quality safeguards and
include contemporary testing concepts such as:

• Internal Consistency Checks (ICS)


• Major Systems/Services Checks (MSC)
• Realtime/Reactive Regression (RTR).

The Quantified Approach was shaped by the former "guerilla" method of Acceptance Testing
which was itself a response to testing phases which proved too costly to be sustainable for many
small/medium-scale projects.

In software development, a test suite, less commonly known as a validation suite, is a collection
of test cases that are intended to be used to test a software program to show that it has some
specified set of behaviours. A test suite often contains detailed instructions or goals for each
collection of test cases and information on the system configuration to be used during testing. A
group of test cases may also contain prerequisite states or steps, and descriptions of the following
tests.

Collections of test cases are sometimes incorrectly termed a test plan, a test script, or even a test
scenario.

An executable test suite is a test suite that can be executed by a program. This usually means that
a test harness, which is integrated with the suite, exists. The test suite and the test harness
together can work on a sufficiently detailed level to correctly communicate with the system
under test (SUT).

A test suite for a primality testing subroutine might consist of a list of numbers and their
primality (prime or composite), along with a testing subroutine. The testing subroutine would
supply each number in the list to the primality tester, and verify that the result of each test is
correct.
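
The primality example above translates almost directly into code. As a minimal sketch (the
particular numbers chosen and the helper names are illustrative only):

    # Test suite sketch for a primality tester: a list of numbers with their
    # known primality, plus a testing subroutine that drives the tester.
    def is_prime(n):
        # The subroutine under test; any candidate implementation could go here.
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    # Each test case pairs an input with its expected result.
    suite = [(2, True), (3, True), (4, False), (17, True), (21, False), (1, False)]

    def run_suite(tester, cases):
        # Supply each number to the tester and collect any incorrect results.
        return [(n, expected) for n, expected in cases if tester(n) != expected]

    assert run_suite(is_prime, suite) == []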

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are
considered as static testing, whereas actually executing programmed code with a given set of test
cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often
is) omitted. Dynamic testing takes place when the program itself is used for the first time (which
is generally considered the beginning of the testing stage). Dynamic testing may begin before the
program is 100% complete in order to test particular sections of code (modules or discrete
functions). Typical techniques for this are either using stubs/drivers or execution from a
debugger environment. For example, spreadsheet programs are, by their very nature, tested to a
large extent interactively ("on the fly"), with results displayed immediately after each calculation
or text manipulation.
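
As a hedged sketch of the stub/driver technique mentioned above (the payment scenario and all
names are invented), a module can be exercised before the rest of the program exists by
replacing an unfinished dependency with a stub and calling the module from a small driver:

    # Stub: stands in for a service that has not been built yet and returns a
    # canned response instead of performing real work.
    def payment_gateway_stub(amount):
        return {"status": "approved", "amount": amount}

    # Module under test: it only needs *a* gateway, real or stubbed.
    def checkout(amount, gateway):
        response = gateway(amount)
        return response["status"] == "approved"

    # Driver: calls the module directly with the stub and checks the outcome.
    if __name__ == "__main__":
        assert checkout(25.0, payment_gateway_stub)
        print("checkout behaves as expected with the stubbed gateway")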

Black box testing takes an external perspective of the test object to derive test cases. These tests can
be functional or non-functional, though usually functional. The test designer selects valid and
invalid inputs and determines the correct output. There is no knowledge of the test object's
internal structure.

This method of test design is applicable to all levels of software testing: unit, integration,
functional, system, and acceptance. The higher the level, and hence the bigger and more
complex the box, the more one is forced to use black box testing to simplify. While this method
can uncover unimplemented parts of the specification, one cannot be sure that all existent paths
are tested.
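
For example (the age rule below is invented for illustration), a black-box tester working only
from the specification "accept ages 18 to 65 inclusive" would choose boundary and out-of-range
inputs without ever reading the implementation:

    # Black-box test design sketch: inputs and expected outputs are derived
    # only from the stated specification, not from the code.
    def accepts_age(age):
        # Implementation shown only so the example runs; the test design does
        # not depend on it.
        return 18 <= age <= 65

    black_box_cases = [
        (17, False),  # just below the lower boundary
        (18, True),   # lower boundary
        (65, True),   # upper boundary
        (66, False),  # just above the upper boundary
        (-1, False),  # clearly invalid input
    ]

    for age, expected in black_box_cases:
        assert accepts_age(age) == expected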

White box testing (a.k.a. clear box testing, glass box testing, transparent box testing, or
structural testing) uses an internal perspective of the system to design test cases based on internal
structure. It requires programming skills to identify all paths through the software. The tester
chooses test case inputs to exercise paths through the code and determines the appropriate
outputs. In electrical hardware testing, every node in a circuit may be probed and measured; an
example is in-circuit testing (ICT).
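
As a small sketch of white box test design (the shipping function is invented for illustration),
the tester reads the code, identifies its branches, and picks one input that forces each path:

    # White-box test design sketch: inputs are chosen so that every branch of
    # the function is executed at least once.
    def shipping_cost(weight_kg):
        if weight_kg <= 1.0:                     # path A: light parcel
            return 5.0
        return 5.0 + 2.0 * (weight_kg - 1.0)     # path B: heavier parcel

    assert shipping_cost(0.5) == 5.0             # exercises path A
    assert shipping_cost(3.0) == 9.0             # exercises path B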

Since the tests are based on the actual implementation, if the implementation changes, the tests
probably will need to change, too. For example, ICT needs updates if component values change,
and needs a modified or new fixture if the circuit changes. This adds financial resistance to the
change process, so buggy products may stay buggy. Automated optical inspection (AOI) offers
similar component-level correctness checking without the cost of ICT fixtures; however, changes
still require test updates.

While white box testing is applicable at the unit, integration and system levels of the software
testing process, it is typically applied to the unit. While it normally tests paths within a unit, it
can also test paths between units during integration, and between subsystems during a system
level test. Though this method of test design can uncover an overwhelming number of test cases,
it might not detect unimplemented parts of the specification or missing requirements; one can,
however, be sure that all paths through the test object are executed.

System testing of software or hardware is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. System testing falls within the
scope of black box testing, and as such, should require no knowledge of the inner design of the
code or logic.[1]
As a rule, system testing takes, as its input, all of the "integrated" software components that have
successfully passed integration testing and also the software system itself integrated with any
applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies
between the software units that are integrated together (called assemblages) or between any of
the assemblages and the hardware. System testing is a more limited type of testing; it seeks to
detect defects both within the "inter-assemblages" and also within the system as a whole.

Regression testing is any type of software testing that seeks to uncover software errors by
partially retesting a modified program. The intent of regression testing is to provide a general
assurance that no additional errors were introduced in the process of fixing other problems.
Regression testing is commonly used to efficiently test the system by systematically selecting the
appropriate minimum suite of tests needed to adequately cover the affected change. Common
methods of regression testing include rerunning previously run tests and checking whether
previously fixed faults have re-emerged. "One of the main reasons for regression testing is that
it's often extremely difficult for a programmer to figure out how a change in one part of the
software will echo in other parts of the software."[1]
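
As a hedged sketch (the parsing bug described in the comments is invented), a regression test
typically guards a previously fixed fault so that later changes cannot silently reintroduce it:

    # Regression test sketch: a test kept for a previously fixed fault.
    def parse_price(text):
        # Earlier fault: surrounding whitespace caused a ValueError; the fix
        # (strip) remains guarded by the regression test below.
        return float(text.strip())

    def test_previously_fixed_whitespace_bug():
        # Re-run after every change to confirm the old defect has not re-emerged.
        assert parse_price(" 19.99 ") == 19.99

    if __name__ == "__main__":
        test_previously_fixed_whitespace_bug()
        print("regression test passed")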
