

1. Write a scenario/practical example to explain the difference between White Box, Black Box and Gray Box testing.
Decos Software Development Private Limited, a company in Pune, has a software development team. When building software, there is a testing mechanism for the software that is built, carried out by the testers, customers or users, and developers.
The primary objective of testing is to increase confidence that the target system will deliver its functionality in a robust, scalable, interoperable and secure manner. Therefore the Decos software development company should apply the software testing techniques given in this assignment:
Black box testing
White box testing and
Gray box testing
The techniques listed above should be applied during the software development process. Each approach has its own benefits and drawbacks, and the correct approach for any particular organization depends on its objectives.
Black box testing:

The development team attempts to compromise the system by testing it the way external attackers would target it. The tester does not know the internal workings of the application. Black box testing has the benefit of closely simulating a motivated external attacker who has no knowledge of the system's operation and infrastructure. This approach is less time consuming than the other testing approaches, and the customer can test the software according to their own objectives.
White box testing:

In this approach the software development team at Decos has full information, or knowledge, about the target system. This approach usually reveals more vulnerabilities, and much faster, since the development team has transparent access to the key information and details of the system. Testers and developers perform the testing in this approach; it is an exhaustive and time-consuming type of testing.
Gray Box Testing

In the company, managers and some other testers would be given partial information about the target system. This approach provides cost-effective testing while focusing on the areas that are important to the customer's organization. Like the others it takes time, but this approach lies between the two approaches above for the company to apply.
2. How are alpha, beta and acceptance testing done in industry? Write in your own words.
Alpha testing: the first test of newly developed software. When the first round of bugs has been fixed, the product goes to actual users for testing. For custom software, the customer may be invited into the vendor's facilities for an alpha test to ensure that the client's vision has been interpreted properly by the developer.
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developer's site, but outside the development organization; it is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing:

Operational testing by potential and/or existing users/customers at an external site, not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Beta testing follows alpha testing. Vendors of packaged software often offer their customers the opportunity to beta test new releases or versions, and the beta testing of elaborate products such as operating systems can take months.
Acceptance Testing:

Formal testing with respect to user needs, requirements and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the users, customers or other authorized entities to decide whether or not to accept the system. This testing is performed by the client of the application to determine whether the application has been developed as per the requirements specified by the organization.
3. What is the need of the following models: a) Verification Model
and b) Validation Model?

Verification and validation are needed in system testing to ensure that the product is being built according to the requirement and design specifications.
The first need is to ensure that work products meet their specified requirements (verification); validation is needed to ensure the product actually meets the user's needs and that the specifications were correct in the first place.
The second is to demonstrate that the product fulfils its intended use when placed in its intended environment.
A) Verification model
Verification is like debugging: it is intended to ensure that the model does what it is intended to do. Models, especially simulation models, are often large computer programs. Therefore all techniques that can help develop, debug or maintain large computer programs are also useful for models. For example, many authors advocate modularity and top-down design, since these are general software engineering techniques. Modifications of such techniques to make them suitable for modelling, and modelling-specific techniques, are discussed below.
Anti-bugging: Anti-bugging consists of including additional checks and outputs in a model that may be used to catch bugs if they exist. These are features of the model which have no role in representing the system, or even necessarily in calculating performance measures; their only role is to check the behaviour of the model.
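As a minimal sketch of this idea (the queue model and its event encoding here are illustrative assumptions, not from the text), anti-bugging checks might look like this: the assertions play no part in computing the result and exist only to trap impossible states.

```python
def run_queue(events):
    """events: +1 for an arrival, -1 for a departure.
    Returns the maximum queue length observed."""
    queue_len = 0
    max_len = 0
    arrivals = 0
    served = 0
    for e in events:
        if e == -1 and queue_len == 0:
            continue  # server idle: a departure event has no effect
        queue_len += e
        if e == 1:
            arrivals += 1
        else:
            served += 1
        # Anti-bugging checks: they do not help compute max_len,
        # they only verify the model never enters an impossible state.
        assert queue_len >= 0, "negative queue length: model bug"
        assert served <= arrivals, "served more customers than arrived"
        max_len = max(max_len, queue_len)
    return max_len

print(run_queue([1, 1, -1, 1, -1, -1]))  # -> 2
```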
Structured walk-through/one-step analysis: Explaining the model to another person, or group of people, can make the modeller focus on different aspects of the model and therefore discover problems with its current implementation. Even if the listeners do not understand the details of the model, or the system, the developer may become aware of bugs simply by studying the model carefully and trying to explain how it works. Preparing documentation for a model can have a similar effect by making the modeller look at the model from a different perspective.
Simplified models: It is sometimes possible to reduce the model to its minimal possible behaviour. In a closed queueing network model we might consider the model with only a single customer, or in a simulation model we might instantiate only one entity of each type. Since one-step analysis can be extremely time consuming, it is often applied to a simplified model.
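For instance (a hypothetical sketch), a closed network reduced to a single customer has no queueing at all, so its cycle time is simply the sum of the service times and can be verified by hand:

```python
def cycle_time_single_customer(service_times, cycles=3):
    """One customer visits each station in turn; with a single
    customer there is never any waiting, so the total time is just
    cycles * sum(service_times)."""
    clock = 0.0
    for _ in range(cycles):
        for s in service_times:
            clock += s  # pure service time, no queueing delay
    return clock

print(cycle_time_single_customer([1.0, 2.0], cycles=3))  # -> 9.0
```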
Deterministic models: For simulation models, the presence of random variables can make it hard for the modeller to reason about the behaviour of a model and check that it is as expected or required. Replacing the random variables which govern delays or scheduling with deterministic values may help the modeller to see whether the model is behaving correctly.
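A sketch of the technique (the model and its parameters are illustrative assumptions): replacing an exponential service time with its mean makes the expected result computable by hand, so incorrect model logic becomes immediately visible.

```python
import random

def total_time(service_time, n_jobs=5):
    """Total completion time of n_jobs jobs served back to back.
    service_time is a zero-argument callable returning one duration."""
    clock = 0.0
    for _ in range(n_jobs):
        clock += service_time()
    return clock

# Stochastic version: exponential service times with mean 2.0 seconds.
stochastic = total_time(lambda: random.expovariate(1 / 2.0))

# Deterministic version: every service takes exactly the mean, 2.0.
# With randomness removed the answer is known in advance (5 * 2.0).
deterministic = total_time(lambda: 2.0)
print(deterministic)  # -> 10.0
```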
Tracing: Trace outputs can be extremely useful in isolating incorrect
behaviour in a model, although in general other techniques will be used to
identify the presence of a bug in the first place. Since tracing causes
considerable additional processing overhead it should be used sparingly in all
except the simplest models.
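A sketch of sparing trace use (the queue model is hypothetical): the trace is off by default, so its overhead is paid only while isolating a bug whose presence other techniques have already revealed.

```python
def run_queue(events, trace=None):
    """events: +1 arrival, -1 departure. If trace is a list, one
    line per event is appended; tracing is off by default because
    it adds considerable processing overhead."""
    queue_len = 0
    for t, e in enumerate(events):
        queue_len = max(0, queue_len + e)
        if trace is not None:
            trace.append(f"t={t} event={e:+d} queue_len={queue_len}")
    return queue_len

log = []
final_len = run_queue([1, 1, -1], trace=log)
print(final_len)  # -> 1
print(log[0])     # -> t=0 event=+1 queue_len=1
```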
Continuity testing: At an abstract level, all systems and models can be thought of as computing a function from input values to output values, and in most cases we expect that function to be continuous.
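For example (using the standard M/M/1 mean-queue-length formula N = rho / (1 - rho) as the "model" under test; an illustrative choice, not taken from the text), a small change in the arrival rate should produce a correspondingly small change in the output:

```python
def mean_in_system(arrival_rate, service_rate):
    """Mean number of customers in an M/M/1 queue: rho / (1 - rho)."""
    rho = arrival_rate / service_rate
    assert rho < 1, "formula only valid for a stable queue"
    return rho / (1 - rho)

base = mean_in_system(0.50, 1.0)  # rho = 0.5 -> N = 1.0
bump = mean_in_system(0.51, 1.0)  # 2% more load
# Continuity check: a small input change must not cause a wild jump.
assert abs(bump - base) < 0.1
print(base)  # -> 1.0
```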
Consistency testing: For most models and systems it is reasonable to assume
that similarly loaded systems will exhibit similar characteristics, even if the
arrangement of the workload varies. Consistency tests are used to check that a
model produces similar results for input parameter values that have similar
effects. For example, in a communication network, two sources with an arrival
rate of 100 packets per second each should cause approximately the same level
of traffic in the network as four sources with an arrival rate of 50 packets per
second each. If the model output shows a significant difference, either it should
be possible to explain the difference from more detailed knowledge of the
system, or the possibility of a modelling error should be investigated.
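The packet-source example above can be turned into an automated consistency test (a sketch; the Poisson-source simulation and its parameters are assumptions): both configurations offer 200 packets per second in total, so their outputs should agree closely.

```python
import random

def total_packets(sources, rate, duration=500.0, seed=42):
    """Total packets generated by `sources` independent Poisson
    sources of the given rate (packets/second) over `duration` seconds."""
    rng = random.Random(seed)
    total = 0
    for _ in range(sources):
        t = rng.expovariate(rate)  # time of first packet
        while t <= duration:
            total += 1
            t += rng.expovariate(rate)
    return total

two_fast = total_packets(sources=2, rate=100.0)  # 2 x 100 pkt/s
four_slow = total_packets(sources=4, rate=50.0)  # 4 x 50 pkt/s
# Consistency test: both configurations offer 200 pkt/s, so the
# totals should differ only by statistical noise.
assert abs(two_fast - four_slow) / two_fast < 0.05
```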

b) Validation Model

Validation is the task of demonstrating that the model is a reasonable representation of the actual system: that it reproduces system behaviour with enough fidelity to satisfy the analysis objectives. Whereas model verification techniques are general, the approach taken to model validation is likely to be much more specific to the model, and system, in question. For most models there are three separate aspects which should be considered during model validation:
Assumptions
Input parameter values and distributions
Output values and conclusions.
However, in practice it may be difficult to achieve such a full validation of the
model, especially if the system being modelled does not yet exist. In general,
initial validation attempts will concentrate on the output of the model, and
only if that validation suggests a problem will more detailed validation be
undertaken. Broadly speaking there are three approaches to model validation
and any combination of them may be applied as appropriate to the different
aspects of a particular model. These approaches are:
Expert intuition
Real system measurements
Theoretical results/analysis.
4. Briefly describe the life cycle of a defect and how we can track it. Explain this with the help of an example.

Life cycle of a defect or bug:

Log a new defect
When a tester logs any new bug, the mandatory fields are:
Build version, Submitted on, Product, Module, Severity, Synopsis and
Description (steps to reproduce).
To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: Customer name, Browser, Operating system, File attachments or screenshots.
The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority and Assigned To fields, then you can specify them. Otherwise the test manager will set the status and bug priority and assign the bug to the respective module owner.
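A sketch of such a bug record (the field names follow the lists above; the class itself is a hypothetical illustration): mandatory fields must be supplied, while optional and manager-set fields may stay blank.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Defect:
    # Mandatory fields: the tester must fill these in.
    build_version: str
    submitted_on: str
    product: str
    module: str
    severity: str
    synopsis: str
    description: str  # steps to reproduce
    # Optional fields from a manual submission template.
    customer_name: Optional[str] = None
    browser: Optional[str] = None
    operating_system: Optional[str] = None
    attachments: List[str] = field(default_factory=list)
    # Specified only with authority; otherwise the test manager
    # sets them and assigns the bug to the module owner.
    status: Optional[str] = None
    priority: Optional[str] = None
    assigned_to: Optional[str] = None

bug = Defect("1.4.2", "2024-01-15", "Portal", "Login",
             "High", "Login button unresponsive",
             "1. Open login page 2. Click Login")
print(bug.status)  # -> None
```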

An example of a simple defect life cycle

Rejected - Closed: This says something about the quality of the defects that are raised. Rejections occur most often when duplicates are registered or when the desired functionality is not well understood.
Tolerated - Closed: Defects that end in this status are known defects. When going to production, these are the known issues that the product will be shipped with. We need to keep them separated from the rejected and fixed defects.
Fixed - Closed: Defects that end in this status are unambiguously fixed defects that have been retested and found resolved.
Some roles have been defined regarding the defect flow:

Issuer: Anyone who registers a defect. This is not necessarily a tester.
Tester: Someone who is trained in testing, knows the functionality of the product to be delivered and has a brief technical insight into development, infrastructure and architecture. The tester can challenge not-accepted defects and is able to retest defects in their context.
Defect manager: Someone who has development insight and the decision power to accept, not accept and assign defects. This role often maps onto a development management role.
Developer: A wide role for anyone who can fix defects in the code, infrastructure, parametrization or even in the design documentation. A defect does not necessarily reside only in code; it can also be an inconsistency in the design documentation.
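The closing paths and roles above can be sketched as a small state machine (the intermediate statuses New, Accepted and Retest are assumptions inferred from the roles' responsibilities, not named in the text):

```python
# Allowed status transitions; any move not listed here is illegal.
TRANSITIONS = {
    "New":       {"Accepted", "Rejected"},  # defect manager decides
    "Accepted":  {"Fixed", "Tolerated"},    # developer fixes, or issue is tolerated
    "Fixed":     {"Retest"},                # tester retests the fix
    "Retest":    {"Closed", "Accepted"},    # retest passes, or bug is reopened
    "Rejected":  {"Closed"},                # e.g. duplicates, misunderstood features
    "Tolerated": {"Closed"},                # shipped as a known issue
    "Closed":    set(),
}

def advance(current, target):
    """Move a defect to a new status, rejecting illegal transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "New"
for step in ("Accepted", "Fixed", "Retest", "Closed"):
    state = advance(state, step)
print(state)  # -> Closed
```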

Defect severity is classified into the following basic levels:

Urgent - Defect needs a patch; testers or users are blocked and no workaround can be put in place. A patch is required within 2 working days.
High - Defect might need a patch, but a workaround can be put in place or a less substantial amount of functionality is blocked. An update is required within a week.
Medium - Defect needs to be resolved, but a workaround is possible. All such defects need to be resolved before go-live.
Low - Cosmetic defects. 70 percent of these defects need to be resolved before go-live.
Sometimes cosmetic defects increase in ranking, for example when they occur on documents that are sent out to customers.
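A sketch encoding the levels above as data (the 2-day/1-week deadlines and the 100%/70% go-live quotas come from the text; the helper function and its names are hypothetical):

```python
from collections import Counter

# Resolution targets per severity level; None = no fixed deadline.
SEVERITY_SLA = {
    "Urgent": {"fix_within_days": 2,    "golive_quota": 1.0},
    "High":   {"fix_within_days": 7,    "golive_quota": 1.0},
    "Medium": {"fix_within_days": None, "golive_quota": 1.0},
    "Low":    {"fix_within_days": None, "golive_quota": 0.7},
}

def golive_ready(defects):
    """defects: list of (severity, resolved) pairs.
    True when every severity level meets its go-live quota."""
    total, done = Counter(), Counter()
    for severity, resolved in defects:
        total[severity] += 1
        done[severity] += resolved
    return all(
        done[sev] >= SEVERITY_SLA[sev]["golive_quota"] * total[sev]
        for sev in total
    )

# One open cosmetic defect out of four Low defects: 75% >= 70%, so OK.
print(golive_ready([("Medium", True), ("Low", True), ("Low", False),
                    ("Low", True), ("Low", True)]))  # -> True
```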

The status flow and its description above depict the minimum requirements for a defect flow to be implemented.