Acceptance Testing
Testing the system with the intent of confirming readiness of the product and
customer acceptance. Acceptance testing, a form of black box testing, gives the
client the opportunity to verify the system's functionality and usability before the
system is moved to production. The acceptance test is the responsibility of
the client; however, it is conducted with full support from the project team. The
Test Team will work with the client to develop the acceptance criteria.
Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this
type of testing is carried out as an adjunct to formal testing. If carried out by a
skilled tester, it can often find problems that are not caught in regular testing.
Sometimes, if testing occurs very late in the development cycle, this will be the only
kind of testing that can be performed. Sometimes ad hoc testing is referred to as
exploratory testing.
Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior
to users being involved. Sometimes a select group of users are involved. More often
this testing will be performed in-house or by an outside testing firm in close
cooperation with the software engineering department.
Automated Testing
Software testing that uses a variety of tools to automate the testing process,
reducing the need for a person to test manually. Automated testing still requires
a skilled quality assurance professional with knowledge of both the automation
tool and the software being tested in order to set up the tests.
Beta Testing
Testing after the product is code complete. Betas are often widely distributed,
or even distributed to the public at large, in the hope that users will buy the
final product when it is released.
Black Box Testing
Testing software without any knowledge of the inner workings, structure or language
of the module being tested. Black box tests, like most other kinds of tests, must be
written from a definitive source document, such as a specification or requirements
document.
Compatibility Testing
Testing used to determine whether other system software components such as
browsers, utilities, and competing software will conflict with the software being
tested.
Configuration Testing
Testing to determine how well the product works with a broad range of
hardware/peripheral equipment configurations as well as on different operating
systems and software.
End-to-End Testing
Similar to system testing, the 'macro' end of the test scale involves testing of a
complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.
Functional Testing
Testing two or more modules together with the intent of finding defects,
demonstrating that defects are absent, verifying that the modules perform their
intended functions as stated in the specification, and establishing confidence
that a program does what it is supposed to do.
Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software
system meets its requirements and user expectations and doesn't fail in an
unacceptable manner. The individual or group doing this work is not part of the group
or organization that developed the software. A term often applied to government
work or where the government regulates the products, as in medical devices.
Installation Testing
Testing with the intent of determining if the product will install on a variety of
platforms and how easily it installs. Testing full, partial, or upgrade install/uninstall
processes. The installation test for a release will be conducted with the objective of
demonstrating production readiness. This test is conducted after the application has
been migrated to the client's site. It will encompass the inventory of configuration
items (performed by the application's System Administration) and evaluation of data
readiness, as well as dynamic tests focused on basic system functionality. When
necessary, a sanity test will be performed following the installation testing.
Integration Testing
Testing two or more modules or functions together with the intent of finding
interface defects between the modules or functions. This testing is often
completed as part of unit or functional testing, and sometimes becomes its own
standalone test phase. On a larger level, integration testing can involve putting
together groups of modules and functions with the goal of completing and
verifying that the system meets the system requirements. (see system testing)
Load Testing
Testing with the intent of determining how well the product handles competition for
system resources. The competition may come in the form of network traffic, CPU
utilization or memory allocation.
Parallel/Audit Testing
Testing where the user reconciles the output of the new system to the output of the
current system to verify the new system performs the operations correctly.
Performance Testing
Testing with the intent of determining how quickly a product handles a variety of
events. Automated test tools geared specifically to test and fine-tune performance
are used most often for this type of testing.
Pilot Testing
Testing that involves users just before the actual release to ensure that they
become familiar with the release contents and ultimately accept it. It is often
considered a Move-to-Production activity for ERP releases or a beta test for
commercial products. It typically involves many users, is conducted over a short
period of time, and is tightly controlled. (see beta testing)
Recovery/Error Testing
Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not
created any new problems. Also, this type of testing is done to ensure that no
degradation of baseline functionality has occurred.
Sanity Testing
Sanity testing will be performed whenever cursory testing is sufficient to prove the
application is functioning according to specifications. This level of testing is a subset
of regression testing. It will normally include a set of core tests of basic GUI
functionality to demonstrate connectivity to the database, application servers,
printers, etc.
Security Testing
Testing of database and network software in order to keep company data and
resources secure from mistaken/accidental users, hackers, and other malevolent
attackers.
Software Testing
The process of exercising software with the intent of ensuring that the software
system meets its requirements and user expectations and doesn't fail in an
unacceptable manner. The organization and management of individuals or groups
doing this work is not relevant. This term is often applied to commercial products
such as internet applications. (contrast with independent verification and validation)
Stress Testing
Testing with the intent of determining how well a product performs when a load is
placed on the system resources that nears and then exceeds capacity.
System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a
COTS (commercial off the shelf) system or any other system composed of disparate
parts where custom configurations and/or unique installations are the norm.
Unit Testing
Unit Testing is the first level of dynamic testing and is first the responsibility of the
developers and then of the testers. A unit is considered tested when the expected
test results are met or any differences are explainable and acceptable.
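The idea can be illustrated with a minimal sketch using Python's built-in unittest
framework; `apply_discount` is a hypothetical function under test, not something
from this document:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # The expected result is fixed in advance; the unit passes
        # only when the actual output matches it.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Each test exercises one unit in isolation, which is what distinguishes this level
from the integration and system levels described elsewhere in this glossary.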
Usability Testing
Testing for 'user-friendliness'. Clearly this is subjective and will depend on the
targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually
not appropriate as usability testers.
White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure
and language of the software, or at least its purpose.
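As an illustration, a white box tester reads the code's internal branches and
writes one test per path; `classify` below is a hypothetical example, not taken
from this document:

```python
def classify(age):
    """Hypothetical function whose internal branches guide the tests."""
    if age < 13:
        return "child"
    elif age < 20:
        return "teenager"
    return "adult"

# One test per branch, chosen by reading the code above (branch coverage):
assert classify(8) == "child"
assert classify(16) == "teenager"
assert classify(30) == "adult"
```

A black box tester, by contrast, would pick the same kind of inputs purely from a
specification, without seeing the `if`/`elif` structure.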
The Bug Life Cycle is simply the sequence of phases a bug passes through after it
is raised or reported:
• New or Opened
• Assigned
• Fixed
• Tested
• Closed
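The phases above can be sketched as a simple state machine; the `Bug` class and
its transition table are illustrative only:

```python
# Allowed transitions, mirroring the phase list above.
ALLOWED = {
    "New": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed": {"Tested"},
    "Tested": {"Closed"},
    "Closed": set(),
}

class Bug:
    """Illustrative bug record that enforces the life-cycle order."""
    def __init__(self, title):
        self.title = title
        self.state = "New"

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state

bug = Bug("Login button unresponsive")
for phase in ("Assigned", "Fixed", "Tested", "Closed"):
    bug.move_to(phase)
```

Real bug trackers add extra states (e.g. reopened or deferred), but the ordering
idea is the same.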
Error Guessing is a test case design technique in which the tester guesses what
faults might occur and designs tests to expose them.
Error Seeding is the process of intentionally adding known faults to a program in
order to monitor the rate of detection and removal, and to estimate the number of
faults remaining in the program.
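The estimation arithmetic behind error seeding can be sketched as follows (this is
the classic Mills-style estimate; the function name is our own): if S faults are
seeded and testing finds s of them along with n real faults, the total number of
real faults is estimated as n * S / s.

```python
def estimate_remaining_faults(seeded, seeded_found, real_found):
    """Mills-style estimate of real faults still left in the program."""
    if seeded_found == 0:
        raise ValueError("no seeded faults detected; estimate is undefined")
    estimated_total_real = real_found * seeded / seeded_found
    return estimated_total_real - real_found

# Seed 10 faults; testing finds 8 of them plus 20 real faults:
# estimated total real faults = 20 * 10 / 8 = 25, so about 5 remain.
print(estimate_remaining_faults(10, 8, 20))
```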
Defect: A flaw found in the product itself after it has been shipped to the customer.
Test Data is data run through a computer program to test the software. Test data
can be used to check compliance with effective controls in the software.
Negative testing - Testing the system using invalid data is called negative
testing, e.g. if a password must be a minimum of 8 characters, testing it with
6 characters is a negative test.
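The password example can be sketched directly; `is_valid_password` is a
hypothetical validator assuming the 8-character rule above:

```python
def is_valid_password(password):
    """Hypothetical rule from the example: at least 8 characters."""
    return len(password) >= 8

# Positive test: a valid 8-character password is accepted.
assert is_valid_password("s3cretpw")
# Negative test: a 6-character password must be rejected.
assert not is_valid_password("abc123")
```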
Load testing and performance testing are commonly regarded as positive testing,
whereas stress testing is regarded as negative testing.
For example, suppose an application can handle 25 simultaneous user logins. In
load testing we test the application with 25 users and check how it behaves at
that level; in performance testing we concentrate on the time taken to perform
each operation; and in stress testing we test with more than 25 users, keep
increasing the number, and check at what point the application breaks.
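The 25-user scenario can be sketched with threads standing in for concurrent
users; `fake_login` is a stub for the real login operation, and the timings are
illustrative only:

```python
import threading
import time

CAPACITY = 25  # the application's stated simultaneous-login limit

def fake_login(results, i):
    """Stub standing in for one user's login round trip."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real work
    results[i] = time.perf_counter() - start  # performance: time per login

def run_concurrent_logins(n_users):
    results = [None] * n_users
    threads = [threading.Thread(target=fake_login, args=(results, i))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

load_times = run_concurrent_logins(CAPACITY)        # load test: at the limit
stress_times = run_concurrent_logins(CAPACITY * 2)  # stress test: beyond it
```

Against a real system, the stress run would keep raising the user count until
logins start failing or response times become unacceptable.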
SDLC
• Requirement phase
• Designing phase (HLD, DLD (Program spec))
• Coding
• Testing
• Release
• Maintenance
STLC
• System Study
• Test planning
• Writing Test case or scripts
• Review the test case
• Executing test case
• Bug tracking
• Report the defect
Ad hoc testing is concerned with testing the application without following any
rules or test cases.
For ad hoc testing one should have strong knowledge of the application.
Structural testing is "white box" testing and is based on the algorithm or code.
Functional testing is "black box" (behavioral) testing, in which the tester
verifies the functional specification.
Re-test - Retesting means testing only a certain part of an application again,
without considering how the change affects other parts or the application as a whole.
UAT testing - UAT stands for 'User Acceptance Testing'. This testing is carried
out from the user's perspective and is usually done before the release.
What are the different types of software testing?
Black box testing – This type of testing doesn’t require any knowledge of the
internal design or coding. These Tests are based on the requirements and
functionality.
White box testing – This kind of testing is based on the knowledge of internal logic
of a particular application code. The Testing is done based on the coverage of code
statements, paths, and conditions.
Unit testing – the 'micro' scale of testing; this is mostly used to test the particular
functions or code modules. This is typically done by the programmer and not by
testers; it requires detailed knowledge of the internal program design and code. It
cannot be done easily unless the application has a well-designed architecture with
tight code; this type may require developing test driver modules or test harnesses.
Functional testing – This is a commonly used black-box testing approach geared
toward checking the functional requirements of an application; this type of
testing should be done by testers.
Regression testing – This is testing the whole application again after fixes or
modifications have been made to the software. It is mostly done toward the end of
the software development life cycle, and automated testing tools are most often
used for this type of testing.
System testing – This is black-box testing based on the overall requirements
specifications; it covers all combined parts of a system.
UAT (User Acceptance Testing) – This type of testing comes at the final stage
and is mostly done against the specifications of the end user or client.
Mutation testing – This is another method for determining if a set of test data or
test cases is useful, by purposely introducing various code changes or bugs and
retesting with the original test data or cases to determine whether the 'bugs' are
detected.
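A toy sketch of the idea: introduce a deliberate bug (a "mutant") and check
whether the existing test data detects it. All names here are illustrative:

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # deliberate bug: '+' mutated to '-'

test_data = [((2, 3), 5), ((0, 4), 4)]

def killed(fn):
    """A mutant is 'killed' when some test case detects the change."""
    return any(fn(*args) != expected for args, expected in test_data)

assert not killed(original)  # the real code passes all the test data
assert killed(mutant)        # the test data detects the seeded bug
```

If a mutant survives (is not killed), the test data is too weak to distinguish
the buggy code from the correct code, which is exactly what mutation testing
measures.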
Validation is done during actual testing, and it takes place after all the
verifications have been completed.
Software testing is the process used to help identify the correctness,
completeness, security, and quality of developed computer software. Put another
way, software testing is the process of executing a program or system with the
intent of finding errors.