
The term software engineering

Dennis [Dennis 1975]:


Software engineering is the application of principles, skills, and art to the design
and construction of programs and systems of programs.
Pomberger and Blaschek [Pomberger 1996]:
Software engineering is the practical application of scientific knowledge
for the economical production and use of high-quality software.
Software is:
Instructions (computer programs) that, when executed, provide the desired
function and performance.
What are software characteristics?
Software is logical, unlike hardware, which is physical (chips,
circuit boards, power supplies, etc.); hence its characteristics are
entirely different.
Software does not wear out as hardware components do from
dust, abuse, temperature, and other environmental factors.
A software component should be built such that it can be reused in
many different programs.
What is the 'Software Development Life Cycle'?
Software life cycle models describe phases of the software cycle and the order
in which those phases are executed. There are many models, and many
companies adopt their own, but all have very similar patterns. The general, basic
model is shown below:
General Life Cycle Model
Each phase produces deliverables required by the next phase in the life
cycle. Requirements are translated into design. Code is produced during
implementation, driven by the design. Testing verifies the
deliverables of the implementation phase against requirements.
The phases are:
Requirements: Business requirements are gathered in this phase.
Meetings with managers, stakeholders, and users are held in order to
determine the requirements. Who is going to use the system? How will they
use the system? What data should be input into the system? What data
should be output by the system? How should the user interface work? The
overall result describes the system as a whole and what it must do, not how it is
actually going to do it.
Design: The software system design is produced from the results of
the requirements phase. This is where the details of how the system will
work are produced. Design covers high-level design (what
programs are needed and how they will interact), low-level
design (how the individual programs will work), interface
design (what the interfaces will look like), and data design
(what data will be required). During these phases, the software's
overall structure is defined. Analysis and design are crucial in
the whole development cycle. Any glitch in the design phase can be
very expensive to solve in a later stage of the software
development.
Coding: Code is produced from the deliverables of the
design phase during implementation, and this is the longest phase
of the software development life cycle. For a developer, this is the
main focus of the life cycle because this is where the code is
produced. Different high-level programming languages like C, C++,
Pascal, and Java are used for coding. The right programming language is
chosen with respect to the type of application.
Implementation may overlap with both the design and testing
phases.
Testing: During testing, the implementation is tested against the
requirements to make sure that the product is actually solving the needs
addressed and gathered during the requirements phase. Normally
programs are written as a series of individual modules; the system is
tested to ensure that interfaces between modules work (integration
testing), the system works on the intended platform and with the
expected volume of data (volume testing), and that the system does
what the user requires (acceptance/beta testing).
Installation, operation, and maintenance: Software will
definitely undergo change once it is delivered to the customer. There
are many reasons for the change. Change could happen because of
some unexpected input values into the system. In addition, the
changes in the system could directly affect the software operations.
Waterfall Model:
This is the most common and classic of life cycle models, also
referred to as a linear-sequential life cycle model. It is very simple to
understand and use. In a waterfall model, each phase must be completed in
its entirety before the next phase can begin. At the end of each phase, a
review takes place to determine if the project is on the right path and
whether or not to continue or discard the project. Unlike the
general model, phases do not overlap in a waterfall model.
Advantages:
Simple and easy to use.
Easy to manage due to the rigidity of the model; each phase has
specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well
understood.
Disadvantages:
1. Time consuming.
2. Never backward; there is no returning to an earlier phase.
3. Difficulty responding to change while developing.
4. No early prototypes of the software are produced.
V-Model: Just like the waterfall model, the V-shaped life cycle is a
sequential path of execution of processes. Each phase must be
completed before the next phase begins. Testing is emphasized in this
model more than in the waterfall model, though. The testing
procedures are developed early in the life cycle, before any coding is
done, during each of the phases preceding implementation.
Requirements begin the life cycle model just like the
waterfall model. Before development is started, a system test plan is
created. The test plan focuses on meeting the functionality specified in
the requirements gathering.
The high-level design phase focuses on system
architecture and design. An integration test plan is created in this
phase as well, in order to test the ability of the pieces of the software
system to work together.
The low-level design phase is where the actual software
components are designed, and unit tests are created in this phase as
well.
The implementation phase is, again, where all coding
takes place. Once coding is complete, the path of execution continues
up the right side of the V, where the test plans developed earlier are
now put to use.
Advantages:
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success over the waterfall model due to the
development of test plans early on during the life cycle.
Works well for small projects where requirements are easily
understood.
Disadvantages:
Very rigid, like the waterfall model.
Little flexibility, and adjusting scope is difficult and expensive.
Software is developed during the implementation phase, so no
early prototypes of the software are produced.
Incremental Model: The incremental model is an intuitive approach
to the waterfall model. Multiple development cycles take place here,
making the life cycle a "multi-waterfall" cycle. Cycles are divided up into
smaller, more easily managed iterations. Each iteration passes through the
requirements, design, implementation, and testing phases.
A working version of software is produced during
the first iteration, so you have working software early on during the
software life cycle. Subsequent iterations build on the initial software
produced during the first iteration.
Advantages:
Generates working software quickly and early during the software
life cycle.
More flexible; less costly to change scope and requirements.
Easier to manage risk because risky pieces are identified and
handled during their iteration.
Each iteration is an easily managed milestone.
Disadvantages:
Each phase of an iteration is rigid, and phases do not overlap each other.
Problems may arise pertaining to system architecture because not
all requirements are gathered up front for the entire software life
cycle.
The Spiral Model:
The spiral model is similar to the incremental model, with more
emphasis placed on risk analysis. The spiral model has four phases:
Planning, Risk Analysis, Engineering, and Evaluation. A software project
repeatedly passes through these phases in iterations (called spirals in this
model). In the baseline spiral, starting in the planning phase, requirements
are gathered and risk is assessed. Each subsequent spiral builds on the
baseline spiral.
Requirements are gathered during the planning phase. In
the risk analysis phase, a process is undertaken to identify risk and
alternate solutions. A prototype is produced at the end of the risk analysis
phase.
Software is produced in the engineering phase, along
with testing at the end of the phase. The evaluation phase allows the
customer to evaluate the output of the project to date before the project
continues to the next spiral.
In the spiral model, the angular component represents
progress, and the radius of the spiral represents cost. The spiral model is
intended for large, expensive, and complicated projects.
Advantages:
High amount of risk analysis.
Good for large and mission-critical projects.
Software is produced early in the software life cycle.
Disadvantages:
Can be a costly model to use.
Risk analysis requires highly specific expertise.
The project's success is highly dependent on the risk analysis phase.
Doesn't work well for smaller projects.
What are the four components of the Software Development
Process?
a. Plan
b. Do
c. Check
d. Act
What are common problems in the software development
process?
1. Poor requirements: If requirements are unclear, incomplete, too general, or
not testable, there will be problems.
2. Unrealistic schedule: If too much work is crammed into too little time,
problems are inevitable.
3. Inadequate testing: No one will know whether or not the program is any good
until the customer complains or systems crash.
4. Feature creep: Requests to pile on new features after development is
underway are extremely common.
5. Miscommunication: If developers don't know what's needed or customers
have erroneous expectations, problems are guaranteed.
(or)
Reasons for defect injection:
1. Incorrect estimations: If effort is not estimated properly,
the defect injection rate will increase.
2. Vague requirements: If requirements are unclear,
incomplete, too general, or not testable, there will be
problems.
3. Changes after development: Requests to pile on new
features after development is underway are extremely
common.
4. Miscommunication: If developers don't know what's
needed or customers have erroneous expectations,
problems are guaranteed.
Life Cycle Testing means performing testing in parallel with systems development.
Life Cycle Testing: Role of Testers
Concept Phase
Evaluate the concept document.
Learn as much as possible about the product and project.
Analyze hardware/software requirements.
Strategic planning.
Requirement Phase
Analyze the requirements.
Verify the requirements using review methods.
Prepare the test plan.
Identify and develop requirement-based test cases.
Design Phase
Analyze design specifications.
Verify design specifications.
Identify and develop function-based test cases.
Begin performing usability tests.
Coding Phase
Analyze the code.
Verify the code.
Code coverage.
Unit test.
Integration & Test Phase
Integration test.
Function test.
System test.
Performance test.
Review user manuals.
Operation/Maintenance Phase
Monitor acceptance test.
Develop new validation tests for confirming problems.
SOFTWARE TESTING:
"The process of executing computer software in order to determine
whether the results it produces are correct" [Glass].
"The process of executing a program with the intent of finding errors" [Myers].
A destructive, yet creative process.
"Testing is the measure of software quality."
Testing is the process of exercising or evaluating a system or
system component by manual or automated means to verify that it
satisfies specified requirements, or to identify differences between
expected and actual results. (or) Software testing is an important
technique for assessing the quality of a software product. A purpose of
testing is
to cause failures in order to make faults visible, so that the faults can
be fixed and not be delivered in the code that goes to customers. Program
testing can be used to show the presence of bugs, but never to show their
absence.
1. Testing is the process of showing the presence of defects.
2. There is no absolute notion of "correctness".
3. Testing remains the most cost-effective approach to building
confidence within most software systems.
Test Plan:
A test plan is a systematic approach to testing a system such as a machine or software.
The plan typically contains a detailed understanding of what the eventual workflow will
be. (or)
The test plan defines the objectives and scope of the testing effort, and identifies the
methodology that your team will use to conduct tests. It also identifies the hardware, software,
and tools required for testing and the features and functions that will be tested. A well-rounded
test plan notes any risk factors that jeopardize testing and includes a testing schedule. (or)
A test plan is a document describing the scope, approach, resources, and
schedule of intended test activities. It identifies test items, the features to be
tested, the testing tasks, who will do each task, and any risks requiring
contingency plans. An important component of the test plan is the individual
test cases. Some form of test plan should be developed prior to any test.
TEST CASE: A test case is a set of test inputs, execution conditions, and
expected results developed to determine if a feature of an application is working
correctly, such as to exercise a particular program path or to verify
compliance with a specific requirement. Think diabolically! What are the
worst things someone could try to do to your program? Write tests for
these.
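As a minimal sketch of this definition, each test case below pairs concrete inputs with the expected result; the `divide` function under test is a hypothetical example, not from the original notes:

```python
def divide(a, b):
    """Hypothetical unit under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Each test case: (inputs, expected result) -- a concrete input
# plus the output the requirement says it must produce.
test_cases = [
    ((10, 2), 5.0),
    ((9, 3), 3.0),
    ((-8, 4), -2.0),
]

for (a, b), expected in test_cases:
    actual = divide(a, b)
    assert actual == expected, f"divide({a}, {b}) = {actual}, expected {expected}"
print("all test cases passed")
```

Thinking diabolically here means also writing a case for the worst input, such as `b = 0`, and checking that the error is raised rather than a wrong answer returned.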
Testing Process: The test development life cycle contains the following
components/steps:
System study; assess development plan and status.
Collect SFS/SRS.
Develop test plan.
Design test cases.
Test specifications: this document includes technical
details (software requirements) required prior to the
testing.
Execute tests.
Evaluate tests.
Evaluate results.
Acceptance test.
Document results (defect reports, summary reports).
Recreating the problem is essentially important in
testing, so that problems that are identified can be repeated and corrected.
Testing Strategy: The test strategy provides the road map describing the
steps to be undertaken while testing, and the effort, time, and resources
required for the testing.
The test strategy should incorporate test planning, test case design, test
execution, resultant data collection, and data analysis.
Monkey Testing:
Testing the application randomly, like hitting keys irregularly and trying to
break down the system; there are no specific test cases or scenarios for monkey testing.
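A minimal sketch of the idea: feed random junk input and check only that the system survives, since there are no scripted expected results. The `parse_command` function and its command set are hypothetical illustrations:

```python
import random
import string

def parse_command(text):
    """Hypothetical unit under test: accepts a few known commands."""
    commands = {"start": "started", "stop": "stopped", "status": "ok"}
    return commands.get(text.strip().lower(), "unknown command")

random.seed(42)  # seeded so a crash-triggering input can be reproduced
for _ in range(1000):
    junk = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 30)))
    try:
        result = parse_command(junk)  # we only assert it does not crash
        assert isinstance(result, str)
    except Exception as exc:          # any crash is a monkey-test finding
        print(f"crashed on {junk!r}: {exc}")
        raise
print("survived 1000 random inputs")
```

Seeding the random generator is the one concession to rigor: it lets a failure be recreated, which, as noted above, is essential for correcting identified problems.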
Verification refers to a set of activities, involving reviews and
meetings, that evaluate documents, plans, code, requirements, and specifications to
ensure that the software correctly implements a specific function, imposed at the start
of that phase. This can be done with checklists, issues lists, walkthroughs, and
inspection meetings.
For example, in the software for the Monopoly game, we can
verify that two players cannot own the same house.
Validation is the process of evaluating a system or
component during or at the end of the development process to
determine whether it satisfies specified requirements. (or)
Validation refers to a different set of activities which ensures that the software that has
been built is traceable to customer requirements.
For example, we validate that when a player lands on "Free Parking,"
they get all the money that was collected. Validation always involves
comparison against requirements.
In the language of V&V, black-box testing is often used for
validation (are we building the right software?) and white-box testing is
often used for verification (are we building the software right?).

Quality Assurance (QA): The system implemented by
an organization which assures outside bodies that the data generated is
of proven and known quality and meets the needs of the end user. This
assurance relies heavily on documentation of processes, procedures,
capabilities, and monitoring of such.
Helps establish processes.
Identifies weaknesses in processes and improves them.
QA is the responsibility of the entire team.
Quality Control (QC): A mechanism to ensure that the
required quality characteristics exist in the finished product. (or)
It is a system of routine technical activities to measure and control the quality of
the product as it is being developed.
Quality control activities may be fully automated, entirely manual, or a
combination of automated tools and human interaction.
QC implements the process.
QC identifies defects for the primary purpose of correcting defects.
QC is the responsibility of the tester.

White-Box Testing
White-box testing is a verification technique software
engineers can use to examine if their code works as expected. (or)
White-box test design allows one to peek inside the "box", and it focuses
specifically on using internal knowledge of the software to guide the selection
of test data.
The tests written based on the white-box testing strategy incorporate
coverage of the code written: branches, paths, statements, internal
logic of the code, etc. Test data required for such testing is less exhaustive
than that of black-box testing.
White-box testing is also known as structural testing, clear-box testing, and glass-box
testing [Beizer]. The connotations of "clear box" and "glass box" appropriately
indicate that you have full visibility of the internal workings of the software product,
specifically, the logic and the structure of the code.
Using white-box testing techniques, a software engineer can
design test cases that:
(1) exercise independent paths within a module or unit;
(2) exercise logical decisions on both their true and false sides;
(3) execute loops at their boundaries and within their operational bounds; and
(4) exercise internal data structures to ensure their validity.

White Box Testing Techniques
(a) Basis Path Testing: In most software units, there is a
potentially (near) infinite number of different paths through the code,
so complete path coverage is impractical.

Cyclomatic Complexity:
Cyclomatic complexity is a software metric that provides a
quantitative measure of the logical complexity of a program. When
used in the context of the basis path testing method, the value
computed for cyclomatic complexity defines the number of
independent paths in the basis set of a program and provides us an
upper bound for the number of tests that must be conducted to
ensure that all statements have been executed at least once.
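As a sketch of the metric in action (the `classify` function is a hypothetical example): a function with two decision points has cyclomatic complexity V(G) = 2 + 1 = 3, so a basis set of three test cases suffices to execute every statement at least once:

```python
def classify(score):
    """Hypothetical unit: two `if` decisions -> V(G) = 2 + 1 = 3."""
    if score < 0:      # decision 1
        return "invalid"
    if score >= 60:    # decision 2
        return "pass"
    return "fail"

# One test case per independent basis path:
assert classify(-5) == "invalid"  # decision 1 taken (true branch)
assert classify(75) == "pass"     # decision 1 false, decision 2 true
assert classify(40) == "fail"     # both decisions false
print("all 3 basis paths covered")
```

The complexity value is an upper bound on the needed tests, not a target in itself; each added decision point raises V(G) by one and adds one more basis path to cover.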
White-box testing is used for the following types:

1. Unit testing: This is done to ensure that a particular module in a system is
working properly. It is typically done by the programmer, as it requires detailed knowledge
of the internal program design and code. A unit is a software component that cannot be
subdivided into other components (IEEE, 1990). Unit testing is important for ensuring
the code is solid before it is integrated with other code. Approximately 65% of all bugs
can be caught in unit testing. It may require developing test driver modules or test
harnesses.
Testing stubs are frequently used in unit testing. A stub
simulates a module which is being called by the program. This is very
important because most of the called modules are not ready at this stage.
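A minimal unit-test sketch using Python's built-in `unittest` as the test harness; the `word_count` function under test is a hypothetical example:

```python
import unittest

def word_count(text):
    """Hypothetical unit under test: counts whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

The test class here plays the role of the test harness: it drives the unit with fixed inputs and checks each return value, which is exactly the kind of scaffolding described below.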
2. Integration testing: Testing in which software components, hardware
components, or both are combined and tested to evaluate the interaction between them
(IEEE, 1990). Test cases are written which explicitly examine the interfaces between
the various units.
There are two common ways to conduct integration testing:
Non-incremental Integration Testing
Incremental Integration Testing
Non-incremental Integration Testing (big bang or umbrella): All
the software units are assembled into the entire program. This assembly is
then tested as a whole from the beginning, usually resulting in a chaotic
situation, as the causes of defects are not easily isolated and corrected.
Incremental Integration Testing: The program is constructed and
tested in small increments by adding a minimum number of components at
each interval. Therefore, the errors are easier to isolate and correct, and the
interfaces are more likely to be tested completely.
There are two common approaches to conduct incremental integration
testing:
Top-Down Incremental Integration Testing
Bottom-Up Incremental Integration Testing
(a) Top-Down Incremental Integration Testing: The top-down
approach to integration testing requires the highest-level modules be tested and
integrated first. Modules are integrated from the main module (main
program) to the subordinate modules in either depth-first or breadth-first
manner. Integration testing starts with the highest-level control module, or
"main program", with all its subordinates replaced by stubs.
(b) Bottom-Up Incremental Integration Testing: The bottom-up
approach requires the lowest-level units be tested and integrated first. The
lowest-level sub-modules are integrated and tested, then the successively
superior-level components are added and tested, transiting the hierarchy
from the bottom upwards.
Scaffolding is defined as computer programs and data files built
to support software development and testing but not intended to be
included in the final product. Scaffolding code is code that simulates the
functions of components that don't exist yet and allows the program to
execute. Scaffolding code involves the creation of stubs and test
drivers.
A test driver allows you to call a function and display its return
values. Test drivers are defined as a software module used to invoke a
module under test and, often, provide test inputs, control and monitor
execution, and report test results [11]. Test drivers simulate the calling
components (e.g., hard-coded method calls).

A stub returns a value that is sufficient for testing.
Stubs are modules that simulate components that
aren't written yet, formally defined as a computer program statement
substituting for the body of a software module that is or will be defined
elsewhere [11]. For example, you might write a skeleton of a method
with just the method signature and a hard-coded but valid return value.
Example: For unit testing of a "Sales Order Printing" program, a
"driver" program will have code which creates Sales Order
records using hard-coded data and then calls the "Sales Order Printing"
program. Suppose this printing program uses another unit which
calculates sales discounts by some complex calculations. The call to
this unit will then be replaced by a "stub", which will simply return fixed
discount data.
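The Sales Order example above can be sketched as follows; all names here are hypothetical illustrations of the driver/stub roles, not a real API:

```python
# Stub: stands in for the unfinished discount-calculation unit
# and simply returns fixed discount data.
def calculate_discount_stub(order_total):
    return 5.0  # fixed 5% discount, enough for the printing test

# Unit under test: "Sales Order Printing" (simplified to return a string).
def print_sales_order(customer, total, discount_fn):
    discount = discount_fn(total)
    net = total * (1 - discount / 100)
    return f"Order for {customer}: total={total:.2f}, net={net:.2f}"

# Driver: creates Sales Order records from hard-coded data,
# calls the unit under test, and reports the results.
def driver():
    orders = [("Alice", 200.0), ("Bob", 80.0)]
    for customer, total in orders:
        print(print_sales_order(customer, total, calculate_discount_stub))

driver()
```

Passing the discount function as a parameter is one simple way to let the stub substitute for the real unit; once the real discount calculator exists, the driver swaps it in without changing the unit under test.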
Mock objects are temporary substitutes for domain code that emulate the
real code. For example, if the program is to interface with a database, you
might not want to wait for the database to be fully designed and created
before you write and test a partial program. You can create a mock object
of the database that the program can use temporarily. The interface of the
mock object and the real object would be the same.
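A sketch of that idea using Python's standard `unittest.mock`; the `UserService` class and its database interface are hypothetical:

```python
from unittest.mock import Mock

class UserService:
    """Hypothetical domain code that depends on a database object."""
    def __init__(self, db):
        self.db = db

    def greeting(self, user_id):
        name = self.db.fetch_name(user_id)  # real DB may not exist yet
        return f"Hello, {name}!"

# The mock stands in for the unfinished database; it exposes the
# same interface the real object will eventually have.
mock_db = Mock()
mock_db.fetch_name.return_value = "Alice"

service = UserService(mock_db)
assert service.greeting(42) == "Hello, Alice!"
mock_db.fetch_name.assert_called_once_with(42)  # verify the interaction
print("mock-based test passed")
```

Unlike a plain stub, the mock also records how it was called, so the test can verify the interaction (here, that `fetch_name` was called exactly once with the right id), not just the return value.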
BLACK BOX TESTING
Also known as behavioral, functional, opaque-box, closed-box,
or concrete-box testing. The test engineer need not know the internal
workings of the "black box". It focuses on the functionality part of the
module. Black-box testing is testing against the specification. Test data
needs to be exhaustive in such testing. (or) The test designer selects
valid and invalid input and determines the correct output. There is no
knowledge of the test object's internal structure. While this method can
uncover unimplemented parts of the specification, one cannot be sure that
all existent paths are tested.
Testing strategy: The basis of the black-box testing strategy lies in the
selection of appropriate data as per functionality and testing it against the
functional specifications, in order to check for normal and abnormal
behavior of the system.
Strategies for Black Box Testing

Ideally, we would like to test every possible thing that
can be done with our program. But, as we said, writing and executing test
cases is expensive. We want to make sure that we definitely write test
cases for the kinds of things that the customer will do most often, or even
fairly often. Our objective is to find as many defects as possible in as few
test cases as possible. To accomplish this objective, we use some
strategies that will be discussed in this subsection. We want to avoid
writing redundant test cases that won't tell us anything new (because they
have similar conditions to other test cases we already wrote). Each test
case should probe a different mode of failure. We also want to design the
simplest test cases that could possibly reveal this mode of failure; test
cases themselves can be error-prone if we don't keep this in mind.
Black Box Testing Techniques:
1. Error Guessing.
2. Equivalence Partitioning: To keep down our testing costs, we don't
want to write several test cases that test the same aspect of our program.
A good test case uncovers a different class of errors (e.g., incorrect
processing of all character data) than has been uncovered by prior test
cases. Equivalence partitioning is a strategy that can be used to reduce
the number of test cases that need to be developed. It
divides the input domain of a program into classes. For each
of these equivalence classes, the set of data should be treated the same by
the module under test and should produce the same answer. Test
cases should be designed so the inputs lie within these equivalence
classes. Once you have identified these partitions, you choose test cases
from each partition. To start, choose a typical value somewhere in the
middle of (or well into) each of these ranges. The goals are:
1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.
An additional effect of applying this technique is that you also find
the so-called "dirty" test cases.
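A sketch of the idea (the `ticket_price` function and its age rules are hypothetical): the age domain splits into equivalence classes, and one typical value from the middle of each class is enough:

```python
def ticket_price(age):
    """Hypothetical unit: child (<13) pays 5, adult (13-64) pays 10,
    senior (65+) pays 7; negative ages are invalid."""
    if age < 0:
        raise ValueError("invalid age")
    if age < 13:
        return 5
    if age < 65:
        return 10
    return 7

# One representative test case per equivalence class,
# chosen from well inside each partition:
assert ticket_price(6) == 5     # class: child
assert ticket_price(35) == 10   # class: adult
assert ticket_price(80) == 7    # class: senior
print("one representative per partition passed")
```

The invalid-age partition (negative numbers) is one of the "dirty" classes this technique surfaces: it deserves its own representative test case even though no valid price comes out of it.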
3. Boundary Value Analysis: Boris Beizer, well-known author of
testing books, advises: "Bugs lurk in corners and congregate at
boundaries." Programmers often make mistakes on the boundaries of
the equivalence classes of the input domain. As a result, we need to focus
testing at these boundaries. This type of testing is called Boundary Value
Analysis (BVA) and guides you to create test cases at the "edge" of the
equivalence classes. A boundary value is defined as a data value that
corresponds to a minimum or maximum.

When creating BVA test cases, consider the following:
1. If input conditions have a range from a to b (such as a=100 to b=300),
create test
cases:
immediately below a (99)
at a (100)
immediately above a (101)
immediately below b (299)
at b (300)
immediately above b (301)
2. If input conditions specify a number of values that are allowed, test
these limits. For example, suppose input conditions specify that only one train is
allowed to start in each direction at each station. In testing, try to add a
second train to the same station/same direction. If (somehow) three trains
could start at one station/direction, try to add two trains (pass), three
trains (pass), and four trains (fail).
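The range rule above (a=100, b=300) can be sketched directly; the `in_range` validator is a hypothetical unit under test:

```python
def in_range(x):
    """Hypothetical unit: accepts values in the range 100..300 inclusive."""
    return 100 <= x <= 300

# BVA test cases at the edges of the a=100..300 range:
boundary_cases = [
    (99, False),   # immediately below a
    (100, True),   # at a
    (101, True),   # immediately above a
    (299, True),   # immediately below b
    (300, True),   # at b
    (301, False),  # immediately above b
]
for value, expected in boundary_cases:
    assert in_range(value) == expected, f"boundary failure at {value}"
print("all 6 boundary cases passed")
```

Note how the six cases cluster around the two corners of the range; a typical off-by-one bug (writing `<` instead of `<=`) would fail exactly at the `(300, True)` case and nowhere in the middle of the range.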
Decision Table Testing:
Decision tables are used to record complex business
rules that must be implemented in the program, and therefore tested. A
sample decision table is shown below. In the table, the conditions
represent possible input conditions. The actions are the events that should
trigger, depending upon the makeup of the input conditions. Each column
in the table is a unique combination of input conditions (and is called a
rule) that results in triggering the action(s) associated with the rule. Each
rule (or column) should become a test case.
If a player (A) lands on property owned by another player (B), A must
pay rent to B. If A does not have enough money to pay B, A is out of the
game.
Decision table:

                                  Rule 1   Rule 2   Rule 3
Conditions
A lands on B's property            Yes      Yes      No
A has enough money to pay rent     Yes      No       --
Actions
A stays in game                    Yes      No       Yes
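The three rules in the table translate directly into test cases; the `stays_in_game` function is a hypothetical encoding of the business rule, not code from the original notes:

```python
def stays_in_game(lands_on_b_property, can_pay_rent):
    """Hypothetical rule: A is out only if A lands on B's property
    and cannot pay the rent."""
    if lands_on_b_property and not can_pay_rent:
        return False
    return True

# One test case per rule (column) of the decision table:
assert stays_in_game(True, True) is True     # Rule 1
assert stays_in_game(True, False) is False   # Rule 2
assert stays_in_game(False, True) is True    # Rule 3 (rent condition is moot)
print("all 3 decision-table rules verified")
```

The "--" entry in Rule 3 means the rent condition is irrelevant when A does not land on B's property, so that column needs only one test case regardless of the second input.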
Black Box Testing Types:
(1) Integration Testing: This involves testing of combined parts of
an application to determine if they function together correctly. The
parts can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially
relevant to client/server and distributed systems.
(2) Functional Testing:
Function testing is used to find out differences between the program
and its external specifications. It is testing that ignores the internal
mechanism or structure of a system or component and focuses on the
outputs generated in response to selected inputs and execution
conditions. This stage will include validation testing.
(3) Smoke Testing (sanity testing):
Smoke testing is the initial level of testing effort to determine if
the new software version is performing well enough for its major level
of testing effort. The smoke test scenarios should emphasize breadth
more than depth. All components should be touched, and every major
feature should be tested briefly. This is normally done when a change
in the program is made and the tester wants to ensure that this change
has not impacted the program at some other location. A subset of the
regression test cases can be set aside as smoke tests. A smoke test is a
group of test cases that establish that the system is stable and all major
functionality is present and works under normal conditions. The
purpose of smoke tests is to demonstrate stability, not to find bugs in
the system. (or)
Sanity testing is performed to validate the stability
of the build (the software application under test). Under
sanity testing we check whether the application is readily
accessible, and we check the basic functionalities. Once
these are verified, we go ahead with the test case
execution. We start when the build is released.
(4) Exploratory Testing (ad hoc):
Exploratory tests are categorized under black box tests and are
aimed at testing in conditions when sufficient time is not available for
testing or proper documentation is not available. This type of testing is
done without any formal test plan or test case creation. With ad hoc
testing, people just start trying anything they can think of without any
rational road map through the customer requirements. It requires highly
skilled resources who can smell errors. (or)
A type of testing where we explore the software, and write and execute the test
scripts simultaneously.
Exploratory testing is a type of testing where the tester does not have
specifically planned test cases, but he/she does the testing more with a
point of view to explore the software features and tries to break the software in
order to find out unknown bugs.
Exploratory testing is a learn-and-work type of testing activity where a
tester can at least learn more and understand the software, even if he/she
was not able to reveal any potential bug.
The only limit to the extent to which you can perform exploratory testing
is your imagination and creativity; the more ways you can think of to
explore and understand the software, the more test cases you will be able to write
and execute simultaneously.
For smaller groups or projects, an ad hoc process is more
appropriate.
Advantages of Exploratory Testing:
It helps testers learn new strategies and expand the horizon of
their imagination, which helps them understand and execute
more and more test cases and finally improves their productivity.
Exploratory testing helps the tester confirm that he/she
understands the application and its functionality properly and has
no confusion about the working of even the smallest part of it, hence
covering the most important part of requirement understanding.
As in exploratory testing we write and execute the test
cases simultaneously, it helps in collecting result-oriented test
scripts and shedding the load of unnecessary test cases which do not
yield any result.
(5) Regression Testing:
Rerunning test cases which a program has
previously executed correctly, in order to detect errors spawned by
changes or corrections made during software development and
maintenance. It ensures that changes have not propagated unintended
side effects. (or)
Regression testing is selective retesting of a system or component to verify
that modifications have not caused unintended effects and that the system or component
still complies with its specified requirements.
This is a more detailed kind of testing, which starts once the
application passes the smoke testing.
Most of the time the testing team is asked to check last-minute
changes in the code just before making a release to the client; in this
situation the testing team needs to check only the affected areas.
Regression tests are a subset of the original set of test cases. The purpose
of running the regression test cases is to make a "spot check" to examine
whether the new code works properly and has not damaged any
previously working functionality by propagating unintended side effects.
Most often, it is impractical to re-run all the test cases
when changes are made. Since regression tests are run throughout the
development cycle, there can be white-box regression tests at the unit and
integration levels and black-box tests at the integration, function, system,
and acceptance test levels.
The Regression test suite contains different classes of test cases:
(a)Retesting: Tests that focus on the software components that have
been changed. (or)
Retesting
In the Test Execution, if we come across any bugs, they
will be reported to the Development team. Once the bugs
are fixed in the subsequent build, the bugs that were fixed
will be RETESTED. This is Retesting.
REGRESSION TESTING
In regression testing, instead of validating the
bugs that were fixed, we validate the impact of those bugs
on the dependent functionalities.
(b)Regional Regression Testing: Tests that focus on the changed
components together with the dependent functionalities they impact.
(c)Full Regression Testing: Tests that will exercise all software
functions.
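The "subset of previously passing cases" idea above can be sketched as a minimal regression suite. The function under test (`apply_discount`) and its recorded cases are hypothetical stand-ins, not from the original text:

```python
# Minimal regression-suite sketch. apply_discount and its recorded
# cases are hypothetical; a real suite would re-run the project's
# previously passing test cases after every change.

def apply_discount(price, percent):
    """Code under test: price reduced by percent, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# (input, expected) pairs that passed on earlier builds.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 50), 10.0),
]

def run_regression():
    """Re-run every recorded case; return the ones that now fail."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures
```

An empty list from `run_regression()` means the change has not broken previously working behavior; any entry points at a regression.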
(5) System Testing:
System testing concentrates on testing the complete system with a
variety of techniques and methods. This is done to compare the
program against its original objectives.
(or)
System testing involves putting the new program in
many different environments to ensure the program works in typical
customer environments with various versions and types of operating
systems.
Various types of system testing are:
(a)Compatibility Testing (portability testing): Testing whether the
system is compatible with other systems with which it should
communicate.
When you develop applications on one platform, you need to check if
the application works on other operating systems as well. This is the
main goal of Compatibility Testing.
(b)Recovery Testing:
Testing aimed at verifying the system's ability to recover from varying
degrees of failure. Testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems. (or)
Recovery testing is basically done in order to check how fast and how
well the application can recover from any type of crash or hardware
failure etc. The type or extent of recovery is specified in the requirement
specifications.
(c)Security Testing:
Security testing attempts to verify that protection mechanisms built
into a system will, in fact, protect it from improper penetration.
During Security testing, password cracking, unauthorized entry into
the software, and network security are all taken into consideration. It is
done to check the security objectives of a program. This testing might
not be required in some cases.
(6) Usability Testing (UI testing):
Usability is the degree to which a user can easily learn and use a
product to achieve a goal. A simpler description is testing the
software from a user's point of view. This is done to check whether the
user interfaces are proper and the output of the program is relevant.
Are the communication device(s) designed in a manner such that the
information is displayed in an understandable fashion, enabling the
operator to correctly interact with the system?
(7) Performance Testing:
This is done to validate whether the output of a particular program is
given within a particular time period. In general, we want to measure
the Response Time, Throughput, and Utilization of the Web site
while simulating attempts by virtual users to simultaneously access
the site. One of the main objectives of performance testing is to
maintain a Web site with low response time, high throughput, and low
utilization. (or)
Testing conducted to evaluate the compliance of a system
or component with specified performance requirements. To continue the
above example, a performance requirement might state that the price
lookup must complete in less than 1 second. Performance testing
evaluates whether the system can look up prices in less than 1 second
(even if there are 30 cash registers running simultaneously).
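A requirement like "lookup must complete in less than 1 second" can be checked mechanically. In this sketch, `look_up_price` is a made-up stand-in for the real operation under test:

```python
# Hedged sketch of verifying a response-time requirement.
# look_up_price is a hypothetical stand-in; a real performance test
# would exercise the actual system, ideally under concurrent load.
import time

PRICES = {"sku-1": 2.50, "sku-2": 9.99}

def look_up_price(item_id):
    return PRICES.get(item_id, 0.0)

def measure_seconds(fn, *args):
    """Return how long one call to fn takes, in seconds."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

elapsed = measure_seconds(look_up_price, "sku-1")
meets_requirement = elapsed < 1.0   # the stated performance requirement
```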
(a)Response Time
Response Time is the delay experienced when a request is made to
the server and the server's response to the client is received. It is
usually measured in units of time, such as seconds or milliseconds.
(b)Throughput
Throughput refers to the number of client requests processed within a
certain unit of time. Typically, the unit of measurement is requests
per second or pages per second. From a marketing perspective,
throughput may also be measured in terms of visitors per day or page
views per day.
(c)Utilization
Utilization refers to the usage level of different system resources,
such as the server's CPU(s), memory, network bandwidth, and so
forth. It is usually measured as a percentage of the maximum
available level of the specific resource. Utilization versus user load for
a Web server typically produces a curve, as shown in the figure.
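The three metrics relate through simple ratios; every number below is invented for illustration:

```python
# Back-of-envelope sketch of the three metrics just defined.
# All figures are made up for illustration.
requests_processed = 1200          # client requests in the window
window_seconds = 60.0              # measurement window

# Throughput: requests processed per unit time.
throughput = requests_processed / window_seconds        # 20 req/s

# Utilization: fraction of a resource's maximum that was used.
cpu_busy_seconds = 45.0
cpu_utilization = cpu_busy_seconds / window_seconds     # 0.75 = 75%
```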
Types of performance testing:
Load testing: Load testing is defined as testing to
determine whether the system is capable of handling the
anticipated number of users or not.
The objective of load testing is to check whether the system can
perform well for the specified load. The system may be capable of
accommodating more than 5000 concurrent users. But validating
that is not under the scope of load testing. No attempt is made to
determine how many more concurrent users the system is capable of
servicing.
Stress testing: Unlike load testing, where testing is conducted
for a specified number of users, stress testing is conducted for
a number of concurrent users beyond the specified limit. The
objective is to identify the maximum number of users the
system can handle before breaking down or degrading
drastically. Since the aim is to put more stress on the system,
the think time of the user is ignored and the system is exposed to
excess load.
Let us take the same example of an online shopping application to
illustrate the objective of stress testing. It determines the maximum
number of concurrent users an online system can service, which can
be beyond 5000 users (the specified limit). However, there is a possibility
that the maximum load that can be handled by the system may be found
to be the same as the anticipated limit.
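The load/stress distinction can be shown with a toy capacity model; `TRUE_CAPACITY` and `can_serve` are invented for illustration, not a real test harness:

```python
# Toy sketch: load testing validates only the specified user count,
# while stress testing pushes past it to find the breaking point.
# TRUE_CAPACITY is a made-up figure unknown to the load test.
SPECIFIED_LIMIT = 5000
TRUE_CAPACITY = 6500

def can_serve(concurrent_users):
    """Stand-in for 'the system behaves acceptably at this load'."""
    return concurrent_users <= TRUE_CAPACITY

# Load test: pass/fail at the specified load only.
load_test_passed = can_serve(SPECIFIED_LIMIT)

def find_breaking_point(step=500):
    """Stress test: raise the load until the system degrades."""
    users = SPECIFIED_LIMIT
    while can_serve(users):
        users += step
    return users   # first load level at which the system failed

breaking_point = find_breaking_point()
```

Note that the load test passes without ever learning the breaking point; only the stress test discovers it.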
Volume Testing: Volume Testing, as its name implies, is
testing that purposely subjects a system (both hardware and
software) to a series of tests where the volume of data being
processed is the subject of the test. Such systems can be
transaction processing systems capturing real-time sales or
could be database updates and/or data retrieval.
Volume Testing is done to find weaknesses in the system with respect to its
handling of large amounts of data during short time periods. This is an
expensive type of testing and is done only when it is essential.
Scalability testing
We perform scalability testing to determine how effectively your Web
site will expand to accommodate an increasing load. Scalability
testing allows you to plan Web site capacity improvements as your
business grows and to anticipate problems that could cost you
revenue down the line. Scalability testing also reveals when your site
cannot maintain good performance at higher usage levels, even with
increased capacity.
(8) Installation Testing
Installation testing is often the most under-tested area in testing. This
type of testing is performed to ensure that all installed features and
options function properly. It is also performed to verify that all
necessary components of the application are, indeed, installed. The
uninstallation of the product also needs to be tested to ensure that all
data, executables, and DLL files are removed.
(9) Alpha Testing
Alpha testing happens at the development site just before the roll-out
of the application to the customer. Alpha tests are conducted
replicating the live environment where the application would be
installed and running. It is the software prototype stage when the software
is first available for running. Here the software has the core functionalities
in it, but complete functionality is not aimed at. It would be able to
accept inputs and give outputs. The test is conducted at the
developer's site only.
In a software development cycle, depending on the functionalities, the
number of alpha phases required is laid down in the project plan
itself.
During this, the testing is not a thorough one, since only the prototype
of the software is available. Basic installation and uninstallation tests and
the completed core functionalities are tested.
(10) Acceptance testing (UAT or Beta testing) - also called beta
testing, application testing, and end user testing - is a phase of
software development in which the software is tested in the "real
world" by the intended audience. UAT can be done by in-house
testing in which volunteers use the software by making the test
version available for downloading and free trial over the Web. The
experiences of the early users are forwarded back to the developers,
who make final changes before releasing the software commercially.
It is the formal means by which we ensure that the new system or
process does actually meet the essential user requirements.
Acceptance testing allows customers to ensure that
the system meets their business requirements. In fact, depending on
how your other tests were performed, this final test may not always
be necessary. If the customers and users participated in system tests,
such as requirements testing and usability testing, they may not need
to perform a formal acceptance test. However, this additional test will
probably be required for the customer to give final approval for the
system.
The actual acceptance test follows the general
approach of the Acceptance Test Plan. After the test is completed, the
customer either accepts the system or identifies further changes that
are required. After these subsequent changes are completed, either
the entire test is performed again or just those portions in question
are retested.
Results of these tests will allow both the customers
and the developers to be confident that the system will work as
intended.
11) Robustness testing: Test cases are chosen outside the domain to test
robustness to unexpected, erroneous input.
12) Defensive testing: Includes tests under both normal and
abnormal conditions.
13) Soak testing (Endurance testing):
It is performed by running a high volume of data for a
prolonged period. This is to check the behavior of a system on a busy
day. Normally this kind of testing is a must for banking products.
14) End-to-End Testing: This is the testing of a complete application
environment in a situation that simulates actual use, such as interacting
with other applications if applicable.
However, methods with very straightforward functionality, for example getter and
setter methods, don't need unit tests unless they do their getting and setting in some
"interesting" way. A good guideline to follow is to write a unit test whenever you
feel the need to comment some behavior in the code. If you're like many
programmers who aren't fond of commenting code, unit tests are a way of
documenting your code behavior. Avoid using domain objects in unit tests.
While writing unit tests, I have used the following guidelines to determine if the unit test
being written is actually a functional test:
If a unit test crosses class boundaries, it might be a functional test.
If a unit test is becoming very complicated, it might be a functional test.
If a unit test is fragile (that is, it is a valid test but it has to change continually to
handle different user permutations), it might be a functional test.
If a unit test is harder to write than the code it is testing, it might be a
functional test.
Conclusion
Unit tests are written from the developer's perspective and focus on particular methods
of the class under test. Use these guidelines when writing unit tests:
Write the unit test before writing code for the class it tests.
Capture code comments in unit tests.
Test all the public methods that perform an "interesting" function (that is, not
getters and setters, unless they do their getting and setting in some unique way).
Put each test case in the same package as the class it's testing to gain access to
package and protected members.
Avoid using domain-specific objects in unit tests.
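A sketch of those guidelines using a hypothetical `Account` class: the rule-enforcing `withdraw` method gets tests, the trivial state does not. (This document's examples are language-neutral; Python's `unittest` is used here for concreteness.)

```python
# Unit-test sketch for the guidelines above. Account is hypothetical;
# only its "interesting" public method (withdraw enforces a rule) is
# tested, not the trivial getting/setting of balance.
import unittest

class Account:
    def __init__(self, balance=0):
        self.balance = balance   # trivial state: no dedicated test

    def withdraw(self, amount):
        """Interesting behavior: enforces the insufficient-funds rule."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class AccountTest(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        self.assertEqual(Account(100).withdraw(30), 70)

    def test_withdraw_beyond_balance_is_rejected(self):
        with self.assertRaises(ValueError):
            Account(10).withdraw(20)
```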
Functional tests are written from the user's perspective and focus on
system behavior. After there is a suite of functional tests for the system, the
members of the development team responsible for functional testing should
bombard the system with variations of the initial tests.
A test case suite is simply a table of contents for the individual test cases.
Organizing the suite of test cases by priority, functional area, actor,
business object, or release can help identify parts of the system that need
additional test cases.
We can test the application for Negative testing by giving invalid
inputs. For example, to test the PIN entry for an ATM that only accepts a
4-digit PIN, try giving a 3-digit or 5-digit PIN, all zeroes, and so on.
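The ATM example can be written as a small negative-test sketch; `validate_pin` is a hypothetical validator, not real ATM code:

```python
# Negative-testing sketch for the ATM PIN example. validate_pin is a
# hypothetical validator; the point is feeding deliberately invalid
# inputs and checking that every one of them is rejected.
def validate_pin(pin):
    """Accept only a 4-digit numeric PIN that is not all zeroes."""
    return pin.isdigit() and len(pin) == 4 and pin != "0000"

# Invalid inputs: too short, too long, all zeroes, non-numeric, empty.
INVALID_PINS = ["123", "12345", "0000", "12a4", ""]

def wrongly_accepted():
    """Return any invalid PIN the validator accepted; should be empty."""
    return [pin for pin in INVALID_PINS if validate_pin(pin)]
```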
What is meant by Framework? Is the framework related only to
AUTOMATION, or is it applicable to MANUAL testing too?
a) A framework is nothing but how to execute the test
case process.
b) That is, clearly what steps I will follow for the
given test cases.
(or)
Framework means what type of actions or process we are
following before going to test the applications; in
simple words we can say "Architecture". It is
applicable to both, but in terms of terminology
we use it only for automation.
Whether we write one test case for one use case?
Some use cases are very simple, so that only one test case
is required to cover the functionality; many times use cases
are very complicated, so that a lot of test cases are
required to cover the whole functionality.
Example: use case - Applicant logs in to the screen.
This use case needs a minimum of 10 test case ids to cover
the whole functionality.
How to Report Bugs Effectively
In a nutshell, the aim of a bug report is to enable the
programmer to see the program failing in front of them. You can
either show them in person, or give them careful and detailed
instructions on how to make it fail. If they can make it fail, they will
try to gather extra information until they know the cause. If they
can't make it fail, they will have to ask you to gather that information
for them.
In bug reports, try to make very clear what are actual facts ("I
was at the computer and this happened") and what are speculations ("I
think the problem might be this"). Leave out speculations if you want to,
but don't leave out facts.
They don't know what's happened, and they can't get close
enough to watch it happening for themselves, so they are searching for
clues that might give it away. Error messages, incomprehensible strings
of numbers, and even unexplained delays are all just as important as
fingerprints at the scene of a crime. Keep them!
If you manage to get out of the problem, whether by
closing down the affected program or by rebooting the computer, a good
thing to do is to try to make it happen again. Programmers like problems
that they can reproduce more than once. Happy programmers fix bugs
faster and more efficiently.
You describe the symptoms, the actual discomforts
and aches and pains and rashes and fevers, and you let the doctor do the
diagnosis of what the problem is and what to do about it. Otherwise the
doctor dismisses you as a hypochondriac or crackpot, and quite rightly so.
It's the same with programmers. Providing your
own diagnosis might be helpful sometimes, but always state the
symptoms. The diagnosis is an optional extra, and not an alternative to
giving the symptoms. Equally, sending a modification to the code to fix
the problem is a useful addition to a bug report but not an adequate
substitute for one.
If a programmer asks you for extra information,
don't make it up! Somebody reported a bug to me once, and I asked him
to try a command that I knew wouldn't work. The reason I asked him to
try it was that I wanted to know which of two different error messages it
would give. Knowing which error message came back would give a vital
clue. But he didn't actually try it - he just mailed me back and said "No,
that won't work". It took me some time to persuade him to try it for real.
Say "intermittent fault" to any programmer and
watch their face fall. The easy problems are the ones where performing a
simple sequence of actions will cause the failure to occur. The
programmer can then repeat those actions under closely observed test
conditions and watch what happens in great detail. Too many problems
simply don't work that way: there will be programs that fail once a week,
or fail once in a blue moon, or never fail when you try them in front of
the programmer but always fail when you have a deadline coming up.
Most intermittent faults are not truly intermittent. Most of them have
some logic somewhere. Some might occur when the machine is running
out of memory, some might occur when another program tries to modify
a critical file at the wrong moment, and some might occur only in the first
half of every hour! (I've actually seen one of these.)
Also, if you can reproduce the bug but the programmer can't, it could
very well be that their computer and your computer are different in some
way and this difference is causing the problem. I had a program once
whose window curled up into a little ball in the top left corner of the
screen, and sat there and sulked. But it only did it on 800x600 screens; it
was fine on my 1024x768 monitor.
Be specific. If you can do the same thing two different ways, state
which one you used. "I selected Load" might mean "I clicked on
Load" or "I pressed Alt-L". Say which you did. Sometimes it
matters.
Be verbose. Give more information rather than less. If you say too
much, the programmer can ignore some of it. If you say too little,
they have to come back and ask more questions. One bug report I
received was a single sentence; every time I asked for more
information, the reporter would reply with another single
sentence. It took me several weeks to get a useful amount of
information, because it turned up one short sentence at a time.
Be careful of pronouns. Don't use words like "it", or references
like "the window", when it's unclear what they mean. Consider
this: "I started FooApp. It put up a warning window. I tried to
close it and it crashed." It isn't clear what the user tried to close.
Did they try to close the warning window, or the whole of
FooApp? It makes a difference. Instead, you could say "I started
FooApp, which put up a warning window. I tried to close the
warning window, and FooApp crashed." This is longer and more
repetitive, but also clearer and less easy to misunderstand.
Read what you wrote. Read the report back to yourself, and see if
you think it's clear. If you have listed a sequence of actions which
should produce the failure, try following them yourself, to see if
you missed a step.
Summary
The first aim of a bug report is to let the programmer see the
failure with their own eyes. If you can't be with them to make it
fail in front of them, give them detailed instructions so that they
can make it fail for themselves.
In case the first aim doesn't succeed, and the programmer can't see
it failing themselves, the second aim of a bug report is to describe
what went wrong. Describe everything in detail. State what you
saw, and also state what you expected to see. Write down the error
messages, especially if they have numbers in them.
When your computer does something unexpected, freeze. Do
nothing until you're calm, and don't do anything that you think
might be dangerous.
By all means try to diagnose the fault yourself if you think you
can, but if you do, you should still report the symptoms as well.
Be ready to provide extra information if the programmer needs it.
If they didn't need it, they wouldn't be asking for it. They aren't
being deliberately awkward. Have version numbers at your
fingertips, because they will probably be needed.
Write clearly. Say what you mean, and make sure it can't be
misinterpreted.
Above all, be precise. Programmers like precision.
Don't write titles like these:
1. "Can't install" - Why can't you install? What happens when
you try to install?
2. "Severe Performance Problems" - ...and they occur when
you do what?
3. "back button does not work" - Ever? At all?
Good bug titles:
1. "1.0 upgrade installation fails if Mozilla M18 package
present" - Explains the problem and the context.
2. "RPM 4 installer crashes if launched on Red Hat 6.2 (RPM
3) system" - Explains what happens, and the context.
What are the different types of Bugs we normally see in any
Project? Include the severity as well.
1. User Interface Defects ------------------------------ Low
2. Boundary Related Defects ---------------------------- Medium
3. Error Handling Defects ------------------------------ Medium
4. Calculation Defects --------------------------------- High
5. Improper Service Levels (Control flow defects) ------ High
6. Interpreting Data Defects --------------------------- High
7. Race Conditions (Compatibility and Intersystem defects) - High
8. Load Conditions (Memory Leakages under load) -------- High
9. Hardware Failures ----------------------------------- High
(or)
1) GUI Related (spelling mistakes, non-uniformity of text
boxes, colors) -- Low -- P3
2) Functionality related -- Medium -- P2
3) System/application crashes -- High -- P1
Generally, high priority defects are those on the basis of which a
build may be rejected.
BUG LIFE CYCLE:
The Life Cycle of a bug in general context is:
So let me explain it in terms of a tester's perspective:
A tester finds a new defect/bug and logs it using a defect tracking tool.
1. Its status is 'NEW' and it is assigned to the respective Dev team (Team lead
or Manager).
2. The team lead assigns it to a team member, so the status is
'ASSIGNED TO'.
3. The developer works on the bug, fixes it, and re-assigns it to the tester for
testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is fixed, he changes the status
to 'VERIFIED'.
5. If the tester has the authority (depends on the company), he can, after
verifying, change the status to 'FIXED'. If not, the test lead can verify it
and change the status to 'FIXED'.
6. If the defect is not fixed, he re-assigns the defect back to the Dev team
for re-fixing.
So this is the life cycle of a bug.
Bug status description:
These are the various stages of the bug life cycle. The status caption may vary
depending on the bug tracking system you are using.
1. New: When QA files a new bug.
2. Deferred: If the bug is not related to the current build, or cannot be fixed
in this release, or the bug is not important enough to fix immediately, then the project
manager can set the bug status to deferred.
3. Assigned: The 'Assigned to' field is set by the project lead or manager, who
assigns the bug to a developer.
4. Resolved/Fixed: When the developer makes the necessary code changes and
verifies the changes, he/she can set the bug status to 'Fixed' and the
bug is passed to the testing team.
5. Could not reproduce: If the developer is not able to reproduce the bug by
the steps given in the bug report by QA, the developer can mark the bug as
'CNR'. QA then needs to check whether the bug can be reproduced and can assign it to the
developer with detailed reproducing steps.
6. Need more information: If the developer is not clear about the bug
reproducing steps provided by QA, he/she can
mark it as 'Need more information'. In this case QA needs to add
detailed reproducing steps and assign the bug back to dev for a fix.
7. Reopen: If QA is not satisfied with the fix, and the bug is still
reproducible even after the fix, QA can mark it as 'Reopen' so that the
developer can take appropriate action.
8. Closed: If the bug is verified by the QA team, the fix is OK, and the
problem is solved, then QA can mark the bug as 'Closed'.
9. Rejected/Invalid: Sometimes the developer or team lead can mark the
bug as Rejected or Invalid if the system is working according to the
specifications and the bug is just due to some misinterpretation.
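The status flow described above can be sketched as an allowed-transition table. The status names follow the list, though real bug trackers differ in details:

```python
# Sketch of the bug life cycle as an allowed-transition table.
# Status names follow the list above; real bug trackers vary.
TRANSITIONS = {
    "New": {"Assigned", "Deferred", "Rejected"},
    "Assigned": {"Fixed", "Could not reproduce", "Need more information"},
    "Could not reproduce": {"Assigned"},   # QA adds steps, reassigns
    "Need more information": {"Assigned"},
    "Fixed": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
}

def is_allowed(current, target):
    """True if the tracker permits moving a bug from current to target."""
    return target in TRANSITIONS.get(current, set())
```

Terminal states such as "Closed" simply have no outgoing entries, so every transition out of them is rejected.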
LIFE CYCLE OF SOFTWARE TESTING:
Requirement gathering: includes collecting requirements,
functional specifications, and other necessary documents.
Obtain end dates for completion of testing.
Identify resources and their responsibilities, required standards
and processes.
Identify the application's high-risk areas, prioritize testing, and
determine the scope and assumptions of testing.
Determine the test strategy and types of testing to be used:
Smoke, System, Performance etc.
Determine hardware and software requirements.
Determine test data requirements.
Divide the testing into various tasks.
Schedule the testing process.
Create test cases: a test case is a document that describes an
input, action, or event and an expected response, to determine
if a feature of an application is working correctly.
Review the test cases.
Finalize the test environment and test data. Make sure the test
environment is separated from the development environment.
Get and install software deliveries (basically the developed
application).
Perform tests using the test cases available.
Analyze and report results.
Defects need to be reported and assigned to programmers
who can fix them.
Track reported defects till closure.
Perform another cycle/s if needed: after the defects are fixed,
the program needs to be re-tested, both for the fix itself and to
confirm that fixing the defect has not injected some other defect.
Update test cases, test environment, and test data through the
life cycle as per the change in requirements, if any.
What is an inspection?
A: An inspection is a formal meeting, more formalized than a walk-through, and typically
consists of 3-8 people including a moderator, reader (the author of whatever is being
reviewed) and a recorder (to make notes in the document). The subject of the inspection
is typically a document, such as a requirements document or a test plan. The purpose of
an inspection is to find problems and see what is missing, not to fix anything. The result
of the meeting should be documented in a written report.
What makes a good test engineer?
A: A person is a good test engineer if he
Has a "test to break" attitude,
Takes the point of view of the customer,
Has a strong desire for quality,
Has an attention to detail. He's also
Tactful and diplomatic and
Has good communication skills, both oral and written. And he
Has previous software development experience, too.
Good test engineers have a "test to break" attitude; they take the
point of view of the customer, have a strong desire for quality and an attention to detail.
AND
In case of GOOD QUALITY ASSURANCE ENGINEERS (good QA engineers), they need
to understand the entire software development process and how it fits into the business
approach and the goals of the organization. Communication skills and the ability to
understand various sides of issues are important.
What makes a good QA/Test Manager?
A: QA/Test Managers are familiar with the software development process; able to
maintain the enthusiasm of their team and promote a positive atmosphere; able to promote
teamwork to increase productivity; and able to promote cooperation between Software and
Test/QA Engineers.
How do you know when to stop testing?
A: This can be difficult to determine. Many modern software applications are so complex,
and run in such an interdependent environment, that complete testing can never be
done. Common factors in deciding when to stop are:
Deadlines, e.g. release deadlines, testing deadlines;
Test cases completed with a certain percentage passed;
Test budget has been depleted;
Coverage of code, functionality, or requirements reaches a specified point;
Bug rate falls below a certain level; or
Beta or alpha testing period ends.
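Those stopping factors can be folded into one hedged check; every threshold below is an invented example value, not from the source:

```python
# Sketch of a stop-testing decision combining the factors above.
# All thresholds are illustrative example values.
def should_stop_testing(deadline_reached, pass_rate, coverage, bugs_per_day):
    """True when any stopping criterion is met."""
    return (
        deadline_reached                              # release/testing deadline hit
        or (pass_rate >= 0.95 and coverage >= 0.80)   # cases passed + coverage goal
        or bugs_per_day < 1                           # bug rate fell below threshold
    )
```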
What if there isn't enough time for thorough testing?
A: Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk
analysis is appropriate to most software development projects. Use risk analysis to
determine where testing should be focused. This requires judgment skills, common
sense and experience. The checklist should include answers to the following questions:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance
expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
Reliability of a software system is defined as the probability that this
system fulfills a function (determined by the specifications) for a
specified number of input trials under specified input conditions in a
specified time interval (assuming that hardware and input are free of
errors).
A software system is robust if the consequences of an error in its
operation, in the input, or in the hardware, in relation to a given
application, are inversely proportional to the probability of the
occurrence of this error in the given application.
Frequent errors (e.g. erroneous commands, typing
errors) must be handled with particular care.
Less frequent errors (e.g. power failure) can be handled
more laxly, but still must not lead to irreversible
consequences.
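Under that definition, reliability over specified input trials can be estimated as a simple success frequency; the trial counts below are invented for illustration:

```python
# Frequency estimate of reliability per the definition above:
# the probability that the system fulfills its function over a
# specified number of input trials. Trial counts are illustrative.
def estimated_reliability(successful_trials, total_trials):
    if total_trials <= 0:
        raise ValueError("need at least one trial")
    return successful_trials / total_trials

# e.g. 998 correct runs out of 1000 specified input trials
reliability = estimated_reliability(998, 1000)   # 0.998
```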