
1. Advantages of the path coverage metric of software testing:

Path coverage requires extremely thorough testing.

Disadvantages of path coverage metric of software testing:

1) Since loops introduce an unbounded number of paths, this metric considers only a limited number of looping possibilities.

2) The number of paths is exponential in the number of branches. For example, a function containing 10 if-statements has 1024 paths to test; adding just one more if-statement doubles the count to 2048.

3) Many paths are impossible to exercise due to relationships among the data.
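
A minimal Java sketch of both problems (the function is invented for illustration):

    // Two independent decisions give 2^2 = 4 paths through this method.
    int classify(int x) {
        int result = 0;
        if (x > 10) {       // decision 1
            result += 1;
        }
        if (x < 5) {        // decision 2
            result += 2;
        }
        // The path that takes BOTH true branches is infeasible: no x
        // satisfies x > 10 and x < 5 at once, so 100% path coverage
        // cannot be reached here (disadvantage 3).
        return result;
    }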

2. Decision coverage has the main advantage of simplicity and is free from many of the problems of statement coverage.

The disadvantage of decision coverage is that this metric ignores branches within Boolean expressions that arise from short-circuit operators.
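
A minimal Java sketch of such a hidden branch (hypothetical code):

    boolean accept(int[] data, int i) {
        // Two tests - accept(null, 0) and accept(new int[]{1}, 0) - make the
        // whole condition evaluate both false and true, giving 100% decision
        // coverage. Yet because && short-circuits, the sub-expression
        // data[i] > 0 is never evaluated as false, so that branch stays untested.
        if (data != null && data[i] > 0) {
            return true;
        }
        return false;
    }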

3. Ans: Drawbacks of the statement coverage metric of software testing:

1) It is insensitive to some control structures.

2) It does not report whether loops reach their termination condition - only whether the loop body was executed. With C, C++, and Java, this limitation affects loops that contain break statements.

3) It is completely insensitive to the logical operators (|| and &&).

4) It cannot distinguish consecutive switch labels.
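
A minimal Java sketch of drawback 1 (an invented function):

    int abs(int x) {
        int result = x;
        if (x < 0) {        // if-statement with no else branch
            result = -x;
        }
        return result;
    }
    // A single test with x = -5 executes every statement (100% statement
    // coverage) yet never exercises the case where the condition is false,
    // so a fault on that untaken branch would go unnoticed.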

4. Advantages of the statement coverage metric of software testing:

1) The main advantage of the statement coverage metric is that it can be applied directly to object code and does not require processing the source code. Performance profilers commonly use this metric.

2) To the extent that bugs are evenly distributed through the code, the percentage of executable statements covered reflects the percentage of faults discovered.
5. Automation Testing versus Manual Testing Guidelines:
I met with my team’s automation experts a few weeks back to get their input on when to automate and when to test manually. The general rule of thumb has always been to use common sense: if you’re only going to run the test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying “use common sense” when you need to come up with a deterministic set of guidelines on how and when to automate?
Pros of Automation
• If you have to run a set of tests repeatedly, automation is a huge win for you
• It gives you the ability to run automation against code that frequently changes, to catch regressions in a timely manner
• It gives you the ability to run automation in mainstream scenarios, to catch regressions in a timely manner (see What is a Nightly)
• It aids in testing a large test matrix (different languages on different OS platforms); automated tests can be run at the same time on different machines, whereas manual tests would have to be run sequentially


Cons of Automation
• It costs more to automate: writing the test cases and writing or configuring the automation framework you’re using costs more initially than running the test manually
• You can’t automate visual references; for example, if you can’t tell the font color via code or the automation tool, it is a manual test
Pros of Manual
• If the test case only runs twice per coding milestone, it most likely should be a manual test; that costs less than automating it
• It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs
Cons of Manual
• Running tests manually can be very time consuming
• Each time there is a new build, the tester must rerun all required tests - which after a while
would become very mundane and tiresome.
Other deciding factors:
• What you automate depends on the tools you use. If the tools have any limitations, those tests
are manual.
• Is the return on investment worth automating? Is what you get out of automation worth the cost
of setting up and supporting the test cases, the automation framework, and the system that runs
the test cases?
6. A Test Scenario is a user workflow in the application.

Example: Checking mail in Gmail is a scenario where the user logs in, checks the mail in the inbox, and then logs off. This scenario can have two different test cases: one for login and the other for the inbox.

So a test scenario can consist of different test cases.
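
A minimal, purely illustrative sketch of this structure (hypothetical method names, no real Gmail API):

    // One scenario composed of two test cases; in a real suite these would
    // be automated tests run in sequence as the "Checking Mail" workflow.
    class CheckingMailScenario {
        void testLogin() { /* Test case 1: log in with valid credentials */ }
        void testInbox() { /* Test case 2: open the inbox and read a mail */ }

        void runScenario() {    // login -> check inbox -> log off
            testLogin();
            testInbox();
            // logging off closes the workflow
        }
    }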

7. Ans: The simple answer to the question ‘Can automated testing replace all manual testing?’ is ‘No.’

Automated functional tests can be used for regression testing (which is a small part of the overall
testing effort). If an organization is running the same manual regression tests repeatedly, then the
automated tests can replace some of that effort, but they also add the effort to maintain the tests,
which is sometimes more than the work required to just run the tests manually. When I say some
of the effort, I mean that test failures from an automated test run still must be analyzed manually.
Also, any part of the process of provisioning and setting up the machine to run the tests, kicking
off the test run, and babysitting it along the way that isn’t automated will still require manual
attention.

8. Ans: The basic assumptions behind coverage analysis tell us about the strengths and limitations of this testing technique. Some fundamental assumptions are listed below.

* Bugs relate to control flow, and you can expose bugs by varying the control flow [Beizer1990 p.60]. For example, a programmer wrote "if (c)" rather than "if (!c)".
* You can look for failures without knowing what failures might occur, and all tests are reliable, in that successful test runs imply program correctness [Morell1990]. The tester understands what a correct version of the program would do and can identify differences from the correct behaviour.
* Other assumptions include achievable specifications, no errors of omission, and no unreachable code.

Clearly, these assumptions do not always hold. Coverage analysis exposes some plausible bugs but does not come close to exposing all classes of bugs.

Coverage analysis provides more benefit when applied to an application that makes a lot of decisions rather than to data-centric applications, such as a database application.
9. Ans: For test automation the entry criteria are:
* Availability of a stable application under test (around 80% of test cases passing).
* Availability of the automation test tool with the required add-ins and patches.
* Availability of a stable and controlled test environment.
* Automation test strategy sign-off:
- scope (type of tests)
- functionalities (features to be automated)
- assumptions
* SIT or UAT sign-off.
* Signed-off manual test cases to be provided.
* Availability of a stable test bed.

10. Ans: As a Lead Automation Engineer, what questions would you ask yourself and your manager while deciding whether to automate the tests?

The best approach would be to raise the following questions:

1) Automating this test and running it once will cost more than simply running it manually once. How much more?

2) An automated test has a finite lifetime, during which it must recoup that additional cost. Is this test likely to die sooner or later? What events are likely to end it?

3) During its lifetime, how likely is this test to find additional bugs (beyond whatever bugs it found the first time it ran)? How does this uncertain benefit balance against the cost of automation?

4) What is the return on investment?
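
A purely illustrative calculation for question 4, with invented numbers: if automating a test costs 10 hours up front and each automated run saves 1 hour over a manual run, the automation only pays for itself after 10 / 1 = 10 runs; a test likely to die before its tenth run has a negative return on investment.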

11. Ans: Keyword-driven testing:

This requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table, as well as in step-by-step instructions for each test. In this method, the entire process is data-driven, including the functionality.
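
A hypothetical fragment of such a table (the keywords, objects, and data are invented for illustration):

    Key Word      Object        Input / Expected
    ----------    ----------    ------------------
    Launch        LoginWindow
    EnterText     UserName      jsmith
    EnterText     Password      secret
    Click         LoginButton
    VerifyText    StatusBar     Welcome, jsmith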
The merits of Keyword-Driven Testing are as follows:
- The Detail Test Plan can be written in spreadsheet format containing all input and verification data.
- If "utility" scripts can be created by someone proficient in the automated tool’s scripting language prior to the Detail Test Plan being written, then the tester can use the automated test tool immediately via the "spreadsheet-input" method, without needing to learn the scripting language.
- The tester need only learn the "Key Words" required and the specific format to use within the Test Plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.

Demerits of Keyword-Driven Testing:

- Development of "customized" (application-specific) functions and utilities requires proficiency in the tool’s scripting language. (Note that this is also true for any method.)
- If the application requires more than a few "customized" utilities, the tester will need to learn a number of "Key Words" and special formats. This can be time-consuming and may have an initial impact on Test Plan development. Once the testers get used to this, however, the time required to produce a test case improves greatly.

12. Ans: The architecture of the “Test Plan Driven” method appears similar to that of the “Functional Decomposition” method, but in fact they are substantially different. The outline below describes the flow (a minimal code sketch follows it):

* Driver Script
- Performs initialization, if required;
- Calls the application-specific “Controller” script, passing to it the file names of the test cases (which have been saved from the spreadsheets as tab-delimited files).

* The “Controller” Script
- Reads and processes the file name received from the Driver;
- Matches on “Key Words” contained in the input file;
- Builds a parameter list from the records that follow;
- Calls the “Utility” scripts associated with the “Key Words”, passing the created parameter list.

* Utility Scripts
- Process the input parameter list received from the “Controller” script;
- Perform specific tasks (e.g. press a key or button, enter data, verify data), calling “User Defined Functions” if required;
- Report any errors to a test report for the test case;
- Return to the “Controller” script.

* User Defined Functions
- General and application-specific functions may be called by any of the above script types in order to perform specific tasks.
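
A minimal sketch of the Controller's dispatch loop in Java, assuming a tab-delimited test-case file; the keywords and utility routines are invented for illustration, not taken from any specific tool:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class Controller {

        public static void main(String[] args) throws IOException {
            // The Driver passes the file name of a tab-delimited test case.
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.isEmpty()) continue;         // skip blank records
                    String[] record = line.split("\t");   // key word + parameters
                    dispatch(record);
                }
            }
        }

        // Match on the "Key Word" and call the associated utility routine,
        // passing the remaining fields as its parameter list.
        static void dispatch(String[] record) {
            switch (record[0]) {
                case "EnterText":  enterText(record[1], record[2]);  break;
                case "Click":      click(record[1]);                 break;
                case "VerifyText": verifyText(record[1], record[2]); break;
                default:           System.out.println("Unknown key word: " + record[0]);
            }
        }

        // Stand-ins for the "Utility" scripts.
        static void enterText(String field, String value)     { System.out.println("type " + value + " into " + field); }
        static void click(String control)                     { System.out.println("click " + control); }
        static void verifyText(String field, String expected) { System.out.println("verify " + field + " shows " + expected); }
    }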
Advantages

This method has all of the advantages of the “Functional Decomposition” method, as well as the following:

* The Detail Test Plan can be written in spreadsheet format containing all input and verification data. The tester therefore only needs to write this once, rather than, for example, writing it in Word and then creating input and verification files as the “Functional Decomposition” method requires.
* The Test Plan does not necessarily have to be written in MS Excel. Any format can be used from which either “tab-delimited” or “comma-delimited” files can be saved (e.g. an Access database).
* If “utility” scripts can be created by someone proficient in the automated tool’s scripting language prior to the Detail Test Plan being written, then the tester can use the automated test tool immediately via the "spreadsheet-input" method, without needing to learn the scripting language. The tester need only learn the “Key Words” required and the specific format to use within the Test Plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.
* If detailed test cases already exist in some other format, it is not difficult to translate them into the “spreadsheet” format.
* After a number of “generic” utility scripts have been created for testing one application, most of them can usually be re-used to test another application. This allows the organization to get its automated testing “up and running” (for most applications) within a few days rather than weeks.

Disadvantages

* Development of “customized” (application-specific) functions and utilities requires proficiency in the tool’s scripting language. Note that this is also true of the “Functional Decomposition” method and, frankly, of any method used, including “Record/Playback”.
* If the application requires more than a few “customized” utilities, the tester will need to learn a number of “Key Words” and special formats. This can be time-consuming and may have an initial impact on Test Plan development. Once the testers get used to this, however, the time required to produce a test case improves greatly.

13. Ans: In comparison testing, we compare the old application with the new application and see whether the new application is working better than the old one.

Comparison testing means comparing your software with a better one, or with your competitor's.

In comparison testing we basically compare the performance of the software.

For example, if you have to do comparison testing of a PDF converter (a desktop-based application), you would compare your software with your competitor's on the basis of:

1. Speed of converting a PDF file into Word.

2. Quality of the converted file.

14. Parallel Testing

Parallel (audit) testing is a type of testing where the tester reconciles the output of the new system against the output of the current system, in order to verify that the new system operates correctly.

OR

Comparing our product/application build with other products existing in the market. Parallel testing is also known as comparative testing or competitive testing.

Testing a newly developed system and comparing the results with the already existing system to check for any discrepancy between them.

15. Automated Testing:


Test automation is the use of software to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting up of test preconditions, and other test control and
test reporting functions. Commonly, test automation involves automating a manual process
already in place that uses a formalized testing process.

Test automation can be expensive, and it is usually employed in combination with manual exploratory testing. It can be made cost-effective in the longer term, though, especially in regression testing. One way to generate test cases automatically is model-based testing, where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so.

What to automate, when to automate, or even whether one really needs automation are crucial
decisions which the testing (or development) team has to take.

Selecting the correct features of the product for automation largely decides the success of the
automation. Unstable features or the features which are undergoing changes should be avoided.

16. Data Flow Diagram (DFD)


Data Flow Diagram is a graphical representation of the "flow" of data through an information
system. A data flow diagram can also be used for the visualization of data processing. It is
common practice for a designer to draw a context-level DFD first which shows the interaction
between the system and outside entities.

The process model is typically used in structured analysis and design methods. Also called a data
flow diagram (DFD), it shows the flow of information through a system. Each process transforms
inputs into outputs.

The model generally starts with a context diagram showing the system as a single process connected to external entities outside the system boundary. This process explodes to a lower-level DFD that divides the system into smaller parts and balances the flow of information between parent and child diagrams. Many diagram levels may be needed to express a complex system. Primitive processes, those that don't explode to a child diagram, are usually described in a connected textual specification.

17. Traceability Matrix

Traceability means that we would like to be able to trace back and forth how and where any work product fulfils the directions of the preceding product. The matrix deals with the where; the how we have to work out ourselves once we know the where.

A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement.

More things can be included in a traceability matrix than are described here. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many.
Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
Traceability ensures completeness: that all lower-level requirements come from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements.
Traceability is also used to manage change and provides the basis for test planning.
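
A small illustrative matrix with invented identifiers (the first row shows a one-to-many relationship):

    Requirement    Design item    Test cases
    -----------    -----------    --------------
    REQ-001        DES-010        TC-101, TC-102
    REQ-002        DES-011        TC-103
    REQ-003        DES-012        TC-104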

18. Defect Leakage

Defect leakage refers to defects found or reproduced by the client or user that the tester was unable to find.
Defect leakage is the number of bugs found in the field that were not found internally. There are a few ways to express this:

* total number of leaked defects (a simple count)
* defects per customer: the number of leaked defects divided by the number of customers running that release
* % found in the field: the number of leaked defects divided by the total number of defects found in that release

In theory, this can be measured at any stage: the number of defects leaked from development into QA, the number leaked from QA into beta certification, etc. I've mostly used it for customers in the field, though.
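
An illustrative calculation with invented numbers: if a release had 95 defects found internally and 5 more reported from the field, the leaked total is 5; with 50 customers running that release, defects per customer is 5 / 50 = 0.1; and the % found in the field is 5 / (95 + 5) = 5%.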

19. Configuration Management


Configuration Management is a discipline applying technical and administrative direction and
surveillance to: identify and document the functional and physical characteristics of a
configuration item, control changes to those characteristics, record and report change processing
and implementation status, and verify compliance with specified requirements.

Configuration management (CM) is the detailed recording and updating of information that describes an enterprise's computer systems and networks, including all hardware and software components. Such information typically includes the versions and updates that have been applied to installed software packages and the locations and network addresses of hardware devices. Special configuration management software is available. When a system needs a hardware or software upgrade, a computer technician can access the configuration management program and database to see what is currently installed. The technician can then make a more informed decision about the upgrade needed.

An advantage of a configuration management application is that the entire collection of systems can be reviewed to make sure any changes made to one system do not adversely affect any of the other systems.
20. Statement coverage in Software Testing - has each line of the source code been executed?

Statement coverage is one of the ways of measuring code coverage. It describes the degree to which the software code of a program has been tested.

All the statements in the code must be executed and tested.

Statement coverage means we need to give proper test cases statement by statement. For the example referred to (not reproduced here), 3 statement-coverage test cases and 2 branch-coverage test cases are needed.

Branch coverage requires both the true and the false outcome of every if-statement.

path coverage = branch coverage + 1, in the sense that a function with D binary decisions has D + 1 linearly independent (basis) paths.
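
A hedged Java illustration (a hypothetical function, not the original example referred to above):

    int grade(int score) {
        int result = 0;          // statement
        if (score >= 50) {       // one decision, two branches
            result = 1;          // statement
        }
        return result;           // statement
    }
    // A single test (score = 60) reaches every statement, but branch
    // coverage also needs a test such as score = 40 so the decision is
    // taken both ways. With 1 decision the function has 1 + 1 = 2 basis
    // paths, matching the "branch coverage + 1" rule of thumb.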

21. Multiple condition coverage metric of software testing

Multiple condition coverage reports whether every possible combination of values of the Boolean sub-expressions occurs. 100% multiple condition coverage implies 100% condition determination coverage.

A drawback of this metric is that it becomes tedious to work out the minimum number of test cases required, especially for very complex Boolean expressions.

Another drawback is that the number of test cases required can vary widely among conditions of similar complexity.
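
An illustrative case: the expression (a && b) || c contains three Boolean sub-expressions, so multiple condition coverage asks for up to 2^3 = 8 combinations of their values. With short-circuit evaluation, however, some combinations can never arise (when a is false, b is not evaluated at all), which is exactly what makes working out the minimum feasible test set tedious.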

22. Difference between code coverage analysis & test coverage analysis

Both these terms are similar. Code coverage analysis is sometimes called test coverage analysis.
The academic world generally uses the term "test coverage" whereas the practitioners use the
term "code coverage".

23. Structural testing & functional testing


Structural testing examines how the program works, taking into account possible pitfalls in the
structure and logic.

Functional testing examines what the program accomplishes, without regard to how it works
internally.

24. Probe Testing

It is almost the same as exploratory testing. It is a creative, intuitive process: everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method to organize and direct exploratory testing. It allows us to provide meaningful reports to management while preserving the creativity that makes exploratory testing work.

25. Purpose of Automation Testing Tools


The real use and purpose of automated test tools is to automate regression testing.

This means that we must have or must develop a database of detailed test cases that are
repeatable, and this suite of tests is run every time there is a change to the application to ensure
that the change does not produce unintended consequences.
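
A minimal sketch of one such repeatable test, written here with JUnit 5; the Calculator class is hypothetical, invented for illustration:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Re-run on every build: if a change elsewhere breaks addition, this
    // regression test catches the unintended consequence.
    class CalculatorRegressionTest {
        @Test
        void additionStillWorks() {
            assertEquals(4, new Calculator().add(2, 2));
        }
    }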

26. Difference between Retesting and Regression Testing

Regression testing is the testing done on every build change: we retest already-tested functionality for any new bug introduced by the change.

Retesting is testing the same functionality again; this may be due to a bug fix, or due to a change in implementation technology.

27. Best sequence of coverage goals as an implementation strategy:

1) Invoke at least one function in 90% of the source files (or classes).
2) Invoke 90% of the functions.
3) Attain 90% condition/decision coverage in each function.
4) Attain 100% condition/decision coverage.

28. Why does software need to be tested? Why is software testing necessary?


Testing is the most important way of assuring (or controlling) the quality of software. Good
practices throughout the development process contribute to the quality of the final product, but
only testing can demonstrate that quality has been achieved and identify the problems and the
risks that remain.

1. To improve the quality of the software.


2. To improve the reliability of the software.
3. 50% of maintenance cost is attributed to fixing bugs.
4. To avoid users finding the bugs.
5. To keep reliability in your product.
6. To keep quality in your product.
7. To stay in business.

To fix a bug found during a test implies rework, which might go far back into the development cycle - possibly all the way back to the initial requirements. All that rework needs to be re-tested. The cost of fixing bugs increases exponentially at each stage of development. Rework and re-testing are time-consuming and expensive, and fixes are error-prone - studies have shown that each fix has a 50% chance of creating a new defect.
Source: http://www.live-pr.com/en/the-need-for-software-testing-r1048223284.htm

29. Why does software have bugs?

# miscommunication or no communication - as to the specifics of what an application should or shouldn't do (the application's requirements).
# software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tier distributed systems, applications utilizing multiple local and remote web services, data communications, enormous relational databases, security complexities, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
# programming errors - programmers, like anyone else, can make mistakes.
# changing requirements (whether documented or undocumented) - the end-user may not
understand the effects of changes, or may understand and request them anyway - redesign,
rescheduling of engineers, effects on other projects, work already completed that may have to be
redone or thrown out, hardware requirements that may be affected, etc.
# time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
# poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no incentive
for programmers to document their code or write clear, understandable, maintainable code. In
fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job
security if nobody else can understand it ('if it was hard to write, it should be hard to read').
# software development tools - visual tools, class libraries, compilers, scripting tools, etc. often
introduce their own bugs or are poorly documented, resulting in added bugs.

30. Who in the company is responsible for quality?


Every team member on a project is responsible for the quality of the product being built.
But the testing team is responsible for assessing the quality of the product developed.

31. Should we test every possible combination/scenario for a program?

No. Exhaustive testing is not possible; it is also a time-consuming process and hence not recommended, nor is it mandatory. There are several points to be considered here: available delivery time, personnel availability, the complexity of the application under test, available resources, budget, etc. But we should cover as much as possible, satisfying the primary functionality that is a must from the customer's point of view.
32. How will you describe testing activities?
Testing activities start from the elaboration phase. The various testing activities are: preparing the test plan, preparing test cases, executing the test cases, logging the bugs, validating the bugs and taking appropriate action on them, automating the test cases, and viewing and reporting the test results.

The testing activities are:


- Effort estimation and project initiation
- system study
- test plan
- designing test cases
- automation (if required)
- executing the test cases
- reporting bugs
- regression testing
- analysis
- summary reports

33. Do you have a favorite QA book? Why?


I have a couple of favorites, but my all-time number one is

Software Testing in the Real World, by Edward Kit

I have bought and handed out multiple copies of this book, I think it is so good.

I love the easy-to-read style, the practical application, and the very usable appendices.

34. How do we perform regression testing?


Regression testing is testing the application's existing functionality. Whenever a release of the software is made, some new functionality is introduced. In our normal test cycle we would test the application against the new functionality, but some testing should also be done to ensure that this new functionality has not affected the old functionality. This testing is known as regression testing.

While performing regression testing we do not need to create new test cases.

So each time you modify the application's code (a class, method, or function), you should perform regression testing to ensure that your changes have not introduced errors.
Regression testing is performed in three scenarios:
1. When defects are fixed
2. When there is a change request
3. When new features are added

The developer fixes the defects and sends the build to the tester. The tester tests the new functionality alongside the existing application and verifies that the new functionality does not create any problem in the existing application.

35. What are the roles of glass-box and black-box testing tools?


White-box or glass-box testing relies on analyzing the code itself and the internal logic of the
software. White-box testing is often, but not always, the purview of programmers. It uses
techniques which range from highly technical or technology specific testing through to things like
code inspections.
Although white-box techniques can be used at any stage in a software product's life cycle they
tend to be found in Unit testing activities.

Glass-box testing, also called white-box testing, refers to testing with detailed knowledge of the modules' internals. These tools therefore concentrate more on the algorithms and data structures used in the development of modules, and they perform testing on individual modules more often than on the whole application.

Glass-box testing is based on the internal design of an application's code. Tests are based on path coverage, branch coverage, and statement coverage. It is also known as white-box testing.

White-box test cases can check that:

1) All independent paths within a module are executed at least once

2) All loops are executed

3) All logical decisions are exercised

4) Internal data structures are exercised to ensure their validity

36. What is the difference between QA and Testing?


Quality Assurance is the process of preventing defects from entering software through 'best practices'. It is not to be confused with testing!

Testing is the process of critically evaluating software to find flaws and fix them, and to determine its current state of readiness for release.

37. What is the difference between Smoke Testing and Sanity Testing?


Answer: [From Wiki]
In software development, the sanity test (a form of software testing which offers "quick, broad,
and shallow testing"[1]) determines whether it is reasonable to proceed with further testing.

Software sanity tests are commonly conflated with smoke tests [2]. A smoke test determines
whether it is possible to continue testing, as opposed to whether it is reasonable[citation needed].
A software smoke test determines whether the program launches and whether its interfaces are
accessible and responsive (for example, the responsiveness of a web page or an input button). If
the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test
exercises the smallest subset of application functions needed to determine whether the
application logic is generally functional and correct (for example, an interest rate calculation for a
financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing.
Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly
determining whether an application is too flawed to merit any rigorous testing. Many companies
run sanity tests on a weekly build as part of their development process [3]

The Hello world program is often used as a sanity test for a development environment. If Hello World fails to compile, the basic environment (or the compile process the user is attempting) has a configuration problem; if it works, the problem may be a bug in the specific application being compiled.

What is Ramp Testing? - Continuously raising an input signal until the system breaks down.

What is Depth Testing? - A test that exercises a feature of a product in full detail.

What is Quality Policy? - The overall intentions and direction of an organization as regards
quality as formally expressed by top management.

What is Race Condition? - A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
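
A minimal runnable Java demonstration (invented for illustration):

    // Two threads increment a shared counter with no synchronization.
    // The read-modify-write of counter++ is not atomic, so updates can be
    // lost and the printed total is usually less than 2000000.
    public class RaceDemo {
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++;                 // unsynchronized write
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Expected 2000000, got " + counter);
        }
    }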

What is Emulator? - A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system.

What is Dependency Testing? - Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
What is Documentation testing? - The aim of this testing is to help in preparation of the cover
documentation (User guide, Installation guide, etc.) in as simple, precise and true way as
possible.

What is Code style testing? - This type of testing involves checking the code for accordance with development standards: the rules for using code comments; the naming of variables, classes, and functions; the maximum line length; the order of separation symbols; tabbing terms onto a new line, etc. There are special tools for automating code style testing.

What is scripted testing? - Scripted testing means that test cases are developed before test execution, and certain results (and/or system reactions) are expected. These test cases can be designed by one (usually more experienced) specialist and performed by another tester.
