Objectives of Testing
Modes of Testing
* Static: Static analysis doesn't involve actual program execution; the
code is examined and tested without being executed. Ex: reviews
* Dynamic: In dynamic testing, the code is executed. Ex: unit testing
Testing methods
* White box testing: Use the control structure of the procedural design
to derive test cases.
* Black box testing: Derive sets of input conditions that will fully
exercise the functional requirements for a program.
* Integration testing: Assemble parts of a system and test their
interaction.
Impact
is what would happen if this piece somehow malfunctioned. Would it
destroy the customer database? Or would it just mean that the column
headings in a report didn't quite line up?
Likelihood
is an estimate of how probable it is that this piece would fail.
Together, Impact and Likelihood determine Risk for the piece.
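As a rough illustration of how the two factors combine, risk is often scored as the product of impact and likelihood, and the riskiest pieces are tested first. The sketch below is a hypothetical example; the 1-5 scales and the feature names are invented, not a standard.

```python
# Hypothetical risk-based prioritization: risk = impact x likelihood.
# The 1-5 scales and the feature names are invented for illustration.
features = [
    {"name": "customer database writes", "impact": 5, "likelihood": 2},
    {"name": "report column headings", "impact": 1, "likelihood": 4},
]

# Test the highest-risk pieces first.
for piece in sorted(features, key=lambda p: p["impact"] * p["likelihood"], reverse=True):
    print(piece["name"], "risk =", piece["impact"] * piece["likelihood"])
```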
Test Planning
* Define what to test
* Identify Functions to be tested
* Test conditions
* Manual or Automated
* Prioritize to identify Most Important Tests
* Record Document References
Test Design
* Define how to test
* Identify Test Specifications
* Build detailed test scripts (a sketch follows this list)
* Quick Script generation
* Documents
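As a minimal sketch of what a detailed, automated test script can look like, assuming Python's unittest module; the parse_date function standing in for the code under test is invented for this example.

```python
# A minimal "detailed test script" written as an automated unit test.
# parse_date() is a hypothetical stand-in for the code under test.
import unittest
from datetime import date

def parse_date(text):
    # Trivial stand-in: parse an ISO-style YYYY-MM-DD string.
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)

class ParseDateTest(unittest.TestCase):
    def test_valid_iso_date(self):
        # Test condition: a well-formed date string.
        self.assertEqual(parse_date("2009-01-31"), date(2009, 1, 31))

    def test_invalid_date_raises(self):
        # Test condition: an out-of-range day should fail.
        with self.assertRaises(ValueError):
            parse_date("2009-02-31")

if __name__ == "__main__":
    unittest.main()
```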
Test Execution
* Define when to test
* Build test execution schedule
* Record test results
Bug Overview
What is a software error?
A mismatch between the program and its specification is an error in
the program if and only if the specification exists and is correct.
Examples:
* The date on the report title is wrong.
* The system hangs if more than 20 users try to commit at the same
time.
* The user interface is not standard across programs.
Pre-Alpha
Pre-Alpha is the test period during which the product is made
available for internal testing by QA, Information Development and
other internal users.
Alpha
Alpha is the test period during which the product is complete and
usable in a test environment but not necessarily bug-free. It is the
final chance to get verification from customers that the tradeoffs
made in the final development stage are coherent.
Entry to Alpha
* All features complete/testable (no urgent bugs or QA blockers)
* High bugs on primary platforms fixed/verified
* 50% of medium bugs on primary platforms fixed/verified
* All features tested on primary platforms
* Alpha sites ready for install
* Final product feature set determined
Beta
Beta is the test period during which the product should be of "FCS
quality" (it is complete and usable in a production environment). The
purpose of the Beta ship and test period is to test the company's
ability to deliver and support the product (and not to test the product
itself). Beta also serves as a chance to get a final "vote of confidence"
from a few customers to help validate our own belief that the product
is now ready for volume shipment to all customers.
Entry to Beta
GM (Golden Master)
GM is the test period during which the product should require minimal
work, since everything was done prior to Beta. The only planned work
should be to revise part numbers and version numbers, prepare
documentation for final printing, and sanity-test the final bits.
Entry to Golden Master
The International Software Testing Qualifications Board says that software faults occur
through the following process:
A human being can make an error (mistake), which produces a defect (fault, bug) in the
code, in software or a system, or in a document. If a defect in code is executed, the
system will fail to do what it should do (or do something it shouldn’t), causing a failure.
Defects in software, systems or documents may result in failures, but not all defects do
so.[1]
A fault can also turn into a failure when the environment is changed. Examples of these
changes in environment include the software being run on a new hardware platform,
alterations in source data or interacting with different software.[2]
A problem with software testing is that testing all combinations of inputs and
preconditions is not feasible when testing anything other than a simple product.[3] This
means that the number of defects in a software product can be very large and defects that
occur infrequently are difficult to find in testing.
Another common practice is for test suites to be developed during technical support
escalation procedures.[citation needed] Such tests are then maintained in regression testing
suites to ensure that future updates to the software don't repeat any of the known
mistakes.
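A hedged sketch of such a support-driven regression test follows; the summarize() helper and the crash it pins down are invented for illustration.

```python
# Regression test that pins a previously reported bug so it cannot recur.
# Hypothetical scenario: summarize([]) used to raise ZeroDivisionError,
# and the failure was reported through a support escalation.
def summarize(values):
    if not values:          # the fix: guard the empty case
        return 0.0
    return sum(values) / len(values)

def test_summarize_empty_input_regression():
    # The known mistake: empty input used to divide by zero.
    assert summarize([]) == 0.0

test_summarize_empty_input_regression()
```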
Unit tests are maintained along with the rest of the software source code and generally
integrated into the build process (with inherently interactive tests being relegated to a
partially manual build acceptance process).
The software, tools, samples of data input and output, and configurations are all referred
to collectively as a test harness.
Today, software has grown in complexity and size. A software product is
developed according to its System Requirement Specification.[citation needed] Every
software product has a target audience; for example, video game software has a
completely different audience from banking software. Therefore, when an organization
invests large sums in making a software product, it must ensure that the product is
acceptable to its end users or target audience. This is where software testing
comes into play. Software testing is not merely finding defects or bugs in the software; it
is a dedicated discipline for evaluating the quality of the software.[citation needed]
There are many approaches to software testing, but effective testing of complex products
essentially connotes the dynamic analysis of the product, putting the product
through its paces.[citation needed] Sometimes one therefore refers to reviews, walkthroughs or
inspections as static testing, whereas actually running the program with a given set of test
cases in a given development stage is often referred to as dynamic testing, to emphasize
that formal review processes form part of the overall testing scope.[citation needed]
Code coverage measures aim to show the degree to which the source code of a program
has been tested.[9] It is inherently a white box testing activity because it looks at the code
directly. This allows the software team to examine parts of a system that are rarely tested
and ensures that the most important function points have been tested.[10] Two common
forms of code coverage are statement coverage, which reports on the number of lines
executed, and path coverage, which reports on the branches executed to complete the test.
They both return a coverage metric, measured as a percentage.
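A small illustration of the difference, built around an invented discount() function: a single test can reach 100% statement coverage while exercising only one of the two branches.

```python
# discount() has one branch; the function and the values are invented.
def discount(price, is_member):
    total = price
    if is_member:
        total = price * 0.9
    return total

# This one test executes every statement (100% statement coverage)
# but only the is_member=True branch (50% branch coverage):
assert discount(100, True) == 90.0

# Branch/path coverage additionally requires the False path:
assert discount(100, False) == 100
```

In the Python ecosystem, tools such as coverage.py report these percentages (for example, `coverage run --branch -m pytest` followed by `coverage report`).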
There are a number of common software measures, often called "metrics", which are used
to measure the state of the software or the adequacy of the testing:
• Bug trend over the period in a release (Bugs should converge towards zero as the
project gets closer to release) (It is possible that there are more cosmetic bugs
found closer to release - in which case the number of critical bugs found is used
instead of total number of bugs found)[citation needed]
• Number of test cases executed per person per unit time[citation needed]
• % of test cases executed so far, total Pass, total fail[citation needed]
• Test Coverage[citation needed]
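As a small worked example of the execution metrics just listed, with made-up counts:

```python
# Made-up counts, for illustration only.
executed, passed, failed, planned = 180, 150, 30, 240

print("executed so far: {:.0%}".format(executed / planned))  # 75%
print("total pass:      {:.0%}".format(passed / executed))   # 83%
print("total fail:      {:.0%}".format(failed / executed))   # 17%
```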
History
The separation of debugging from testing was initially introduced by Glenford J. Myers
in 1979.[11] Although his attention was on breakage testing, it illustrated the desire of the
software engineering community to separate fundamental development activities, such as
debugging, from that of verification. Dr. Dave Gelperin and Dr. William C. Hetzel
classified the phases and goals of software testing into distinct stages in 1988.[12]
Black box testing treats the software as a black-box without any understanding as to how
the internals behave. It aims to test the functionality according to the
requirements.[18] Thus, the tester inputs data and only sees the output from the test object.
This level of testing usually requires thorough test cases to be provided to the tester who
then can simply verify that for a given input, the output value (or behaviour), is the same
as the expected value specified in the test case.
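A minimal sketch of this input/expected-output style of test case; the rounding function and its specification are invented, and its body is shown only so the example runs (a black-box tester would see just the specification).

```python
import math

def round_half_away_from_zero(x):
    # Internals shown only so the sketch is runnable; the black-box
    # tester works purely from the specification.
    return int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))

# (input, expected output) pairs taken from the specification.
cases = [(2.4, 2), (2.5, 3), (-2.5, -3)]

for given, expected in cases:
    actual = round_half_away_from_zero(given)  # treated as a black box
    assert actual == expected, f"{given}: expected {expected}, got {actual}"
```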
White box testing, however, is when the tester has access to the internal data structures,
code, and algorithms. For this reason, unit testing and debugging can be classified as
white-box testing and it usually requires writing code, or at a minimum, stepping through
it, and thus requires more knowledge of the product than the black-box tester.[19] If the
software under test is an interface or API of any sort, white-box testing is almost always
required.[citation needed]
In recent years the term grey box testing has come into common usage. This involves
having access to internal data structures and algorithms for purposes of designing the test
cases, but testing at the user, or black-box level. Manipulating input data and formatting
output do not qualify as grey-box because the input and output are clearly outside of the
black-box we are calling the software under test. This is particularly important when
conducting integration testing between two modules of code written by two different
developers, where only the interfaces are exposed for test.
Grey box testing could be used in the context of testing a client-server environment when
the tester has control over the input, inspects the value in a SQL database and the output
value, and then compares all three (the input, the SQL value, and the output) to determine
whether the data got corrupted during database insertion or retrieval.
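A hedged sketch of that grey-box check, using an in-memory SQLite database; the table and the save/load helpers standing in for the code under test are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def save_user(name):            # stand-in for the code under test (insertion)
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def load_user():                # stand-in for the code under test (retrieval)
    return conn.execute("SELECT name FROM users").fetchone()[0]

given = "Ana"                                                   # 1. controlled input
save_user(given)
stored = conn.execute("SELECT name FROM users").fetchone()[0]   # 2. value in the database
output = load_user()                                            # 3. observed output

# Compare all three to see whether data was corrupted on insert or retrieval.
assert given == stored == output
```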
• Verification: Have we built the software right (i.e., does it match the
specification)? Software testing is just one kind of verification, which also uses
techniques such as reviews, inspections, and walkthroughs.[20]
• Validation: Have we built the right software (i.e., is this what the customer
wants)?[21]
• Unit testing tests the minimal software component, or module. Each unit (basic
component) of the software is tested to verify that the detailed design for the unit
has been correctly implemented. In an Object-oriented environment, this is
usually at the class level, and the minimal unit tests include the constructors and
destructors.[22] (A minimal class-level sketch follows this list.)
• Integration testing exposes defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated
and tested until the software works as a system.[citation needed]
• Functional testing tests at any level (class, module, interface, or system) for
proper functionality as defined in the specification. The use of a traceability
matrix often helps with functional testing.[citation needed]
• System testing tests a completely integrated system to verify that it meets its
requirements.[23]
• System integration testing verifies that a system is integrated to any external or
third party systems defined in the system requirements.[citation needed]
• Performance Testing validates whether the quality of service (sometimes called
non-functional requirements) parameters defined at the requirements stage are met
by the final product.[citation needed]
• Acceptance testing can be conducted by the end-user, customer, or client to
validate whether or not to accept the product. Acceptance testing may be
performed as part of the hand-off process between any two phases of
development.[citation needed] See also software release life cycle
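Below is the minimal class-level sketch promised above; the Account class and its behaviour are invented for illustration.

```python
# Class-level unit tests, including the "minimal" test of construction itself.
import unittest

class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountTest(unittest.TestCase):
    def test_constructor(self):
        # Verify that construction establishes the expected initial state.
        account = Account("ana")
        self.assertEqual(account.owner, "ana")
        self.assertEqual(account.balance, 0)

    def test_deposit(self):
        account = Account("ana")
        account.deposit(10)
        self.assertEqual(account.balance, 10)

if __name__ == "__main__":
    unittest.main()
```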
It should be noted that although both Alpha and Beta are referred to as testing, they are
in fact use immersion. The rigors that are applied are often unsystematic, and many of
the basic tenets of the testing process are not used. The Alpha and Beta periods provide
insight into environmental and utilization conditions that can impact the software.
Regression testing can be performed at any or all of the above test levels. These
regression tests are often automated.
More specific forms of regression testing are known as sanity testing (quickly
checking for bizarre behaviour) and smoke testing (testing for basic functionality).
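A toy sketch of the distinction, with an invented Calculator as the product under test:

```python
# The Calculator and its tests are invented for illustration.
class Calculator:
    def add(self, a, b):
        return a + b

def test_smoke():
    # Smoke: does the most basic functionality work at all?
    assert Calculator().add(1, 1) == 2

def test_sanity_after_change():
    # Sanity: quick check that a recent change produced no bizarre behaviour.
    assert Calculator().add(-1, 1) == 0

test_smoke()
test_sanity_after_change()
```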
The term test script is the combination of a test case, test procedure, and test data.
Initially the term was derived from the product of work created by automated regression
test tools. Today, test scripts can be manual, automated, or a combination of both.
The most common term for a collection of test cases is a test suite. The test suite often
also contains more detailed instructions or goals for each collection of test cases. It
definitely contains a section where the tester identifies the system configuration used
during testing. A group of test cases may also contain prerequisite states or steps, and
descriptions of the following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might
correctly be called a test specification. If sequence is specified, it can be called a test
script, scenario, or procedure.
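A brief sketch of a test suite as a named collection of test cases, using Python's unittest suite machinery; the test class and its contents are placeholders.

```python
import unittest

class LoginTests(unittest.TestCase):
    # Placeholder test case; a real suite would collect many of these.
    def test_placeholder(self):
        self.assertTrue(True)

def build_suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())
```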
Test plans are made available to the developers in advance, so they are well aware of
which tests will be executed. This makes the developers more cautious when developing
their code and ensures that their code is not put through any surprise test case or test
plan.
Many activities are carried out during testing, so a plan is needed.
3. Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use
in testing software.
4. Test Execution: Testers execute the software based on the plans and tests and
report any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and make final
reports on their test effort and whether or not the software tested is ready for
release.
6. Retesting the Defects
Not all errors or defects reported must be fixed by a software development team. Some
may be caused by errors in configuring the test software to match the development or
production environment. Some defects can be handled by a workaround in the production
environment. Others might be deferred to future releases of the software, or the
deficiency might be accepted by the business user. Yet other defects may be rejected
by the development team (with due reason, of course) if they deem them invalid.
Controversy
Main article: Software testing controversies
• Who watches the watchmen? - The idea is that any form of observation is also
an interaction, that the act of testing can also affect that which is being
tested.[citation needed]
Certification
Several certification programs exist to support the professional aspirations of software
testers and quality assurance specialists. No certification currently offered actually
requires the applicant to demonstrate the ability to test software. No certification is based
on a widely accepted body of knowledge. This has led some to declare that the testing
field is not ready for certification.[27] Certification itself cannot measure an individual's
productivity, skill, or practical knowledge, and cannot guarantee their competence or
professionalism as a tester.[28]
Software testing may be viewed as an important part of the Software Quality Assurance
(SQA) process.[citation needed] In SQA, software process specialists and auditors take a
broader view of software and its development. They examine and change the software
engineering process itself to reduce the number of faults that end up in the delivered
software: the defect rate. What
constitutes an acceptable defect rate depends on the nature of the software. An arcade
video game designed to simulate flying an airplane would presumably have a much
higher tolerance for defects than software used to control an actual airliner. Although
there are close links with SQA, testing departments often exist independently, and there
may be no SQA areas in some companies.