Software Definitions - Testing Levels / Phases - Testing Roles - Testing Techniques - Categories of Testing Tools

Jagan Mohan Julooru


Software Definitions

Software Assurance: The planned and systematic set of activities that ensure that
software life cycle processes and products conform to requirements, standards, and
procedures [IEEE 610.12 IEEE Standard Glossary of Software Engineering
Terminology]. For NASA this includes the disciplines of Software Quality (functions of
Software Quality Engineering, Software Quality Assurance, Software Quality Control),
Software Safety, Software Reliability, Software Verification and Validation, and IV&V.

Software Quality: The discipline of software quality is a planned and systematic set of
activities to ensure quality is built into the software. It consists of software quality
assurance, software quality control, and software quality engineering. As an attribute,
software quality is (1) the degree to which a system, component, or process meets
specified requirements, and (2) the degree to which a system, component, or process meets
customer or user needs or expectations [IEEE 610.12 IEEE Standard Glossary of
Software Engineering Terminology].

Software Quality Assurance: The function of software quality that assures that the
standards, processes, and procedures are appropriate for the project and are correctly
implemented.

Software Quality Control: The function of software quality that checks that the project
follows its standards, processes, and procedures, and that the project produces the
required internal and external (deliverable) products.

Software Quality Engineering: The function of software quality that assures that quality
is built into the software by performing analyses, trade studies, and investigations on the
requirements, design, code and verification processes and results to assure that reliability,
maintainability, and other quality factors are met.

Software Reliability: The discipline of software assurance that 1) defines the
requirements for software controlled system fault/failure detection, isolation, and
recovery; 2) reviews the software development processes and products for software error
prevention and/or controlled change to reduced functionality states; and 3) defines the
process for measuring and analyzing defects and defines/derives the reliability and
maintainability factors.

Software Safety: The discipline of software assurance that is a systematic approach to
identifying, analyzing, tracking, mitigating and controlling software hazards and
hazardous functions (data and commands) to ensure safe operation within a system.

Verification: Confirmation by examination and provision of objective evidence that
specified requirements have been fulfilled [ISO/IEC 12207, Software life cycle
processes]. In other words, verification ensures that “you built it right”.

Validation: Confirmation by examination and provision of objective evidence that the
particular requirements for a specific intended use are fulfilled [ISO/IEC 12207, Software
life cycle processes]. In other words, validation ensures that “you built the right thing”.

Independent Verification and Validation (IV&V): Verification and validation
performed by an organization that is technically, managerially, and financially
independent. IV&V, as a part of Software Assurance, plays a role in the overall NASA
software risk mitigation strategy applied throughout the life cycle, to improve the safety
and quality of software.

Testing Levels / Phases

Testing levels or phases should be applied against the application under test when the
previous phase of testing is deemed to be complete, or "complete enough". Any defects
detected during any level or phase of testing need to be recorded and acted on
appropriately.

Design Review

"The objective of Design Reviews is to verify all documented design criteria before
development begins." The design deliverable or deliverables to be reviewed should be
complete within themselves. The environment of the review should be a professional
examination of the deliverable with the focus being the deliverable not the author (or
authors). The review must ensure each design deliverable for: completeness, correctness,
and fit (both within the business model, and system architecture).

Design reviews should be conducted by: system matter experts, testers, developers, and
system architects to ensure all aspects of the design are reviewed.

Unit Test

"The objective of unit test is to test every line of code in a component or module." The
unit of code to be tested can be tested independent of all other units. The environment of
the test should be isolated to the immediate development environment and have little, if
any, impact on other units being developed at the same time. The test data can be
fictitious and does not have to bear any relationship to .real world. business events. The
test data need only consist of what is required to ensure that the component and
component interfaces conform to the system architecture. The unit test must ensure each
component: compiles, executes, interfaces, and passes control from the unit under test to
the next component in the process according to the process model.

The developer in conjunction with a peer should conduct unit test to ensure the
component is stable enough to be released into the product stream.
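The sketch below illustrates this level of test, assuming a small hypothetical component (apply_discount) and Python's built-in unittest module; the function and its business rule are illustrative only and not part of any particular system. Note how the test data is fictitious and the component is exercised in isolation.

import unittest


def apply_discount(price, percent):
    """Hypothetical component under test: applies a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountUnitTest(unittest.TestCase):
    """Exercises every branch of the component in isolation, using fictitious data."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    unittest.main()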

Function Test

"The objective of function test is to measure the quality of the functional (business)
components of the system." Tests verify that the system behaves correctly from the user /
business perspective and functions according to the requirements, models, storyboards, or
any other design paradigm used to specify the application. The function test must
determine if each component or business event: performs in accordance with the
specifications, responds correctly to all conditions that may be presented by incoming
events / data, moves data correctly from one business event to the next (including data
stores), and that business events are initiated in the order required to meet the business
objectives of the system.

Function test should be conducted by an independent testing organization to ensure the
various components are stable and meet minimum quality criteria before proceeding to
System test.

System Test

"The objective of system test is to measure the effectiveness and efficiency of the system in
the "real-world" environment." System tests are based on business processes (workflows)
and performance criteria rather than processing conditions. The system test must
determine if the deployed system: satisfies the operational and technical performance
criteria, satisfies the business requirements of the System Owner / Users / Business
Analyst, integrates properly with operations (business processes, work procedures, user
guides), and that the business objectives for building the system were attained.

There are many aspects to System testing; the most common are:

Security Testing: The tester designs test case scenarios that attempt to subvert or
bypass security.

Stress Testing: The tester attempts to stress or load an aspect of the system to the
point of failure; the goal being to determine weak points in the system architecture.

Performance Testing: The tester designs test case scenarios to determine if the
system meets the stated performance criteria (e.g. a Login request shall be responded
to in 1 second or less under a typical daily load of 1000 requests per minute); a
minimal sketch of such a check appears after this list.

Install (Roll-out) Testing: The tester designs test case scenarios to determine if the
installation procedures lead to an invalid or incorrect installation.

Recovery Testing: The tester designs test case scenarios to determine if the system
meets the stated fail-over and recovery requirements.
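As referenced in the Performance Testing item above, the following is a minimal sketch of an automated check against the example criterion (responses in 1 second or less). The login_request function is a stand-in assumption; in practice it would issue a real request to the system under test.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def login_request():
    """Stand-in for the real login call (hypothetical); replace with an actual
    request to the system under test."""
    time.sleep(0.05)  # simulated server processing time
    return "OK"


def timed_login():
    """Returns the response time of a single login request, in seconds."""
    start = time.perf_counter()
    login_request()
    return time.perf_counter() - start


def run_load_test(total_requests=1000, concurrency=50):
    """Issues total_requests login calls with the given concurrency and
    returns the individual response times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(lambda _: timed_login(), range(total_requests)))


if __name__ == "__main__":
    timings = run_load_test()
    print(f"95th percentile: {statistics.quantiles(timings, n=20)[-1]:.3f}s")
    print(f"worst case:      {max(timings):.3f}s")
    # The stated criterion: every login answered in 1 second or less.
    assert max(timings) <= 1.0, "Login response time exceeded the 1 second criterion"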

System test should be conducted by an independent testing organization to ensure the
system is stable and meets minimum quality criteria before proceeding to User
Acceptance test.

User Acceptance Test

"The objective of User Acceptance test is for the user community to measure the
effectiveness and efficiency of the system in the "real-world" environment.". User
Acceptance test is based on User Acceptance criteria, which can include aspects of
Function and System test. The User Acceptance test must determine if the deployed
system: meets the end Users expectations, supports all operational requirements (both
recorded and non-recorded), and fulfills the business objectives (both recorded and non-
recorded) for the system.

User Acceptance test should be conducted by the end users of the system and monitored
by an independent testing organization. The Users must ensure the system is stable and
meets the minimum quality criteria before proceeding to system deployment (roll-out).

Testing Roles

As in any organization or organized endeavor, there are roles that must be fulfilled within
any testing organization. The requirement for any given role depends on the size,
complexity, goals, and maturity of the testing organization. These are roles, so it is quite
possible that one person could fulfill many roles within the testing organization.

Test Lead or Test Manager

The Role of Test Lead / Manager is to effectively lead the testing team. To fulfill this role
the Lead must understand the discipline of testing and how to effectively implement a
testing process while fulfilling the traditional leadership roles of a manager. In short, the
manager must both manage the team and implement or maintain an effective testing
process.

Test Architect

The Role of the Test Architect is to formulate an integrated test architecture that supports
the testing process and leverages the available testing infrastructure. To fulfill this role
the Test Architect must have a clear understanding of the short-term and long-term goals
of the organization, the resources (both hard and soft) available to the organization, and a
clear vision on how to most effectively deploy these assets to form an integrated test
architecture.

Test Designer or Tester

The Role of the Test Designer / Tester is to: design and document test cases, execute
tests, record test results, document defects, and perform test coverage analysis. To fulfill
this role the designer must be able to apply the most appropriate testing techniques to test
the application as efficiently as possible while meeting the test organization's testing
mandate.

Test Automation Engineer

The Role of the Test Automation Engineer is to create automated test case scripts that
perform the tests as designed by the Test Designer. To fulfill this role the Test
Automation Engineer must develop and maintain an effective test automation
infrastructure using the tools and techniques available to the testing organization. The
Test Automation Engineer must work in concert with the Test Designer to ensure the
appropriate automation solution is being deployed.

Test Methodologist or Methodology Specialist

The Role of the Test Methodologist is to provide the test organization with resources on
testing methodologies. To fulfill this role the Methodologist works with Quality
Assurance to facilitate continuous quality improvement within the testing methodology
and the testing organization as a whole. To this end the methodologist: evaluates the test
strategy, provides testing frameworks and templates, and ensures effective
implementation of the appropriate testing techniques.

Testing Techniques

Over time, the IT industry and the testing discipline have developed several techniques for
analyzing and testing applications.

Black-box Tests

Black-box tests are derived from an understanding of the purpose of the code; knowledge
of the actual internal program structure is not required when using this approach. The risk
involved with this type of approach is that "hidden" functions (functions unknown to the
tester) will not be tested and may not even be exercised.

White-box Tests or Glass-box tests

White-box tests are derived from an intimate understanding of the purpose of the code
and the code itself; this allows the tester to test "hidden" (undocumented) functionality
within the body of the code. The challenge with any white-box testing is to find testers
who are comfortable with reading and understanding code.
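The following is a minimal sketch contrasting the two approaches against a hypothetical shipping_cost routine (an assumption made for illustration). The black-box case is derived only from the documented purpose; the white-box case is derived from reading the code and exercises a hidden, undocumented branch the specification never mentions.

def shipping_cost(order_total, coupon=None):
    """Documented behaviour: flat $5 shipping on every order.
    Hidden behaviour: a legacy "FREESHIP" coupon waives the fee."""
    if coupon == "FREESHIP":  # undocumented branch, invisible to a black-box tester
        return 0.0
    return 5.0


# Black-box test: written from the stated purpose alone.
assert shipping_cost(order_total=120.0) == 5.0

# White-box test: written after reading the code, covering the hidden branch.
assert shipping_cost(order_total=120.0, coupon="FREESHIP") == 0.0

print("Both the documented path and the hidden branch were exercised")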

Regression tests

Regression testing is not a testing technique or test phase; it is the reuse of existing tests
to test previously implemented functionality--it is included here only for clarification.

Equivalence Partitioning

Equivalence testing leverages the concept of "classes" of input conditions. A "class" of
input could be "City Name", where testing one or several city names could be deemed
equivalent to testing all city names. In other words, each instance of a class in a test covers
a large set of other possible tests.
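As a minimal sketch, assuming a hypothetical is_valid_city_name validator, one representative value is chosen for each class of input rather than testing every possible city name; the classes and the validation rule are illustrative assumptions.

def is_valid_city_name(name):
    """Hypothetical validator: non-empty, letters, spaces and hyphens only."""
    return bool(name) and all(ch.isalpha() or ch in " -" for ch in name)


# One representative value per equivalence class.
equivalence_classes = {
    "ordinary name":   ("Toronto", True),
    "name with space": ("New York", True),
    "hyphenated name": ("Winston-Salem", True),
    "empty string":    ("", False),
    "contains digits": ("City123", False),
}

for label, (value, expected) in equivalence_classes.items():
    assert is_valid_city_name(value) == expected, f"Class '{label}' failed"
print("One test per class stood in for the whole class")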

Boundary-value Analysis

Boundary-value analysis is really a variant on Equivalence Partitioning, but in this case
the upper and lower ends of the class, and often values outside the valid range of the class,
are used as input to the test cases. For example, if the class is "Numeric Month of the
Year" then the boundary values could be 0, 1, 12, and 13.

Error Guessing

Error Guessing involves making an itemized list of the errors expected to occur in a
particular area of the system and then designing a set of test cases to check for these
expected errors. Error Guessing is more testing art than testing science but can be very
effective given a tester familiar with the history of the system.
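A minimal sketch of the technique, assuming a hypothetical parse_amount routine: the guessed errors are itemized first, then each is paired with the input expected to provoke it. The function and its rules are illustrative assumptions rather than a real API.

def parse_amount(text):
    """Hypothetical parser: accepts "1,234.56"-style strings and returns a float."""
    value = float(text.strip().replace(",", ""))
    if value < 0:
        raise ValueError("negative amounts are not allowed")
    return value


# Itemized list of errors a tester familiar with the system might guess at,
# each paired with the input expected to provoke it.
guessed_error_cases = [
    ("empty field", ""),
    ("whitespace only", "   "),
    ("negative amount", "-10.00"),
    ("non-numeric text", "ten dollars"),
]

for label, bad_input in guessed_error_cases:
    try:
        parse_amount(bad_input)
        print(f"{label}: NOT rejected (potential defect)")
    except ValueError:
        print(f"{label}: correctly rejected")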

Output Forcing

Output Forcing involves creating a set of test cases designed to produce a particular
output from the system. The focus here is on creating the desired output, not on the input
that initiated the system response.
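A minimal sketch of output forcing, assuming a hypothetical login routine with a three-strike lockout rule (an assumption for illustration): the desired output is fixed first, and the inputs are then constructed purely to force that output.

def login(password, failed_attempts):
    """Hypothetical login: locks the account after three failed attempts."""
    if failed_attempts >= 3:
        return "ACCOUNT_LOCKED"
    if password == "correct-horse":
        return "OK"
    return "INVALID_CREDENTIALS"


# The desired output comes first; the inputs exist only to produce it.
desired_output = "ACCOUNT_LOCKED"
forcing_inputs = {"password": "irrelevant", "failed_attempts": 3}

assert login(**forcing_inputs) == desired_output
print(f"Forced output {desired_output!r} with inputs {forcing_inputs}")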

Categories of Testing Tools

A number of different types of automated and manual testing tools are required to support
an automated testing framework.

Test Design Tools. Tools that are used to plan software testing activities. These tools are
used to create test artifacts that drive later testing activities.

Static Analysis Tools. Tools that analyze programs without executing them.
Inspections and walkthroughs are examples of static testing.

Dynamic Analysis Tools. Tools that involve executing the software in order to test it.

GUI Test Drivers and Capture/Replay Tools. Tools that use macro-recording
capabilities to automate testing of applications employing GUIs.

Load and Performance Tools. Tools that simulate different user load conditions for
automated stress and volume testing.

Non-GUI Test Drivers and Test Managers. Tools that automate test execution of
applications that do not allow tester interaction via a GUI.

Other Test Implementation Tools. Miscellaneous tools that assist test implementation.
We include the MS Office Suite of tools here.

Test Evaluation Tools. Tools that are used to evaluate the quality of a testing effort.
