
UNIT 1 INTRODUCTION

SOFTWARE TESTING: investigation conducted to provide stakeholders with information about the quality of the product under test.
PERSPECTIVE OF TESTING: 1. debugging-oriented testing, 2. demonstration-oriented testing, 3. destruction-oriented testing, 4. evaluation-oriented testing, 5. prevention-oriented testing.
DEFINITION OF TESTING: execution of a work product with the intent to find defects.
WHY TESTING IS NECESSARY: errors, faults and failures occur while developing a software project; testing identifies these faults.


APPROACHES TO TESTING:
1.Big bang approach of testing-[system testing or final testing; testing after the development phase is completed; shows defects pertaining to any phase of development]
2.Total quality management approach-[defect removal cost]
3.Total quality management as against big bang approach-[TQM cost triangle]
4.TQM in cost perspective-[green money/cost of prevention, blue money/cost of appraisal, red money/cost of failure]
TESTING DURING DEVELOPMENT LIFE CYCLE:
1.Requirement testing-[gathering customer requirements; categorized into technical, legal and system requirements; characteristics-clarity, completeness, testability, measurability]
2.Design testing-[high-level design by the system architect, low-level design by the developer; characteristics-clarity, completeness, testability, measurability]
3.Code testing-[includes code files, tables, stored procedures etc.; done by using stubs/drivers; characteristics-clarity, completeness, testability, measurability]
4.Test case-[document that describes the input, the action or event, and the expected response; used to determine whether an application feature is working correctly or not (a minimal sketch follows at the end of this unit)]
5.Test scenario-[user stories; defines the behaviour of an application]
TEST POLICY: defined by senior management; covers all aspects of testing and decides the framework; for a project organization it may be defined by the client.
TEST PLANNING: plan testing efforts adequately with the assumption that defects are present; if defects are not found, it is a failure of the testing activity; a successful tester is not one who appreciates the development but one who finds defects in the project; testing is not a formality to be completed at the end of the development cycle.
CATEGORIES OF DEFECT: 1.on the basis of requirement/design specification, 2.types of defects, 3.root causes of defects, 4.defect, error or mistake in software.
RISK ANALYSIS:
1.Advantages of automated testing-[very fast in calculation, consistent behaviour, works as per logic]
2.Disadvantages of automated testing-[mistakes are repeated, no learning capability, cannot learn from past experience, judgement is missing]
3.Risk-[potential loss to an organization; an unplanned event that occurs when the user is not prepared]
4.Software product risks-[product risk, business impact risk, customer-related risk, process risk, technological risk]
5.Different risks introduced by a software system-[improper use of technology, cascading effects of errors, illogical processing of transactions, inability to translate user needs]
6.Three implementation risks-[insufficient schedule, incapable development process, incapable testing process]
7.Risk handling-[defined by senior management; typical ways-accept the risk as it is, bypass the risk, reduce the probability of the risk, improve the ability to detect the risk]
8.Types of actions for risk control management-[preventive control-login password, combo-box control; corrective control-spelling corrections; detective control-application gives an error message]
CONFIGURATION MANAGEMENT: identifying and defining the items in a system and controlling the changes to those items throughout their life cycle.
1.Best practices of SCM
2.Baselining-[configuration items that have been formally reviewed and agreed upon]
3.CM activities-[change management-keep track of changes to the software from customers and developers; version management-keep track of multiple versions of system components; system building-process of assembling program components; release management-preparing software for external release]
4.Version-[an instance of a system that is functionally distinct]
5.Variant-[an instance of a system that is functionally identical to another]
6.Release-[an instance of a system distributed to users outside the development team]
7.Software configuration items-[product concept specification, software project plans, software requirements specifications, software design descriptions, source code, SCM procedures]
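The test case entry above describes a document with an input, an action or event, and an expected response. Below is a minimal sketch of that structure in Python; the field names and the divide() function under test are illustrative assumptions, not part of the notes.

```python
# Minimal sketch of a test case as data: input, action/event, expected response.
# The dataclass fields and the divide() function are illustrative assumptions;
# a real test case would target the application feature being tested.
from dataclasses import dataclass
from typing import Any, Callable


def divide(a: float, b: float) -> float:
    """Toy function standing in for the feature under test."""
    return a / b


@dataclass
class TestCase:
    name: str
    action: Callable[..., Any]   # the action or event to perform
    inputs: tuple                # the input data
    expected: Any                # the expected response


def run(case: TestCase) -> bool:
    actual = case.action(*case.inputs)
    passed = actual == case.expected
    print(f"{case.name}: {'PASS' if passed else 'FAIL'} "
          f"(expected {case.expected}, got {actual})")
    return passed


if __name__ == "__main__":
    cases = [
        TestCase("divide_whole_numbers", divide, (10, 2), 5.0),
        TestCase("divide_by_one", divide, (7, 1), 7.0),
    ]
    results = [run(c) for c in cases]   # run every case, do not stop at first failure
```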

UNIT 2 TESTING TECHNIQUES


LEVELS OF TESTING:
1.Proposal testing-[a proposal is made to the customer on the basis of a request for proposal, information or quotation; reviewed by different panels-technical review and commercial review]
2.Requirement testing-[gathering customer requirements; categorized into technical, legal and system requirements; characteristics-clarity, completeness, testability, measurability]
3.Design testing-[high-level design by the system architect, low-level design by the developer; characteristics-clarity, completeness, testability, measurability]
4.Code review-[includes code files, tables, stored procedures etc.; done by using stubs/drivers; characteristics-clarity, completeness, testability, measurability]
5.Unit testing-[testing of individual software components in isolation]
6.Module testing-[many units are combined together to form a module; needs drivers/stubs for its execution]
7.Integration testing-[may start at module level; if self-executable it is tested by testers, if it needs drivers/stubs it is tested by developers; two types-top-down, where the top of the component hierarchy is tested first, and bottom-up, where the bottom of the component hierarchy is tested first]
8.Big bang testing-[tested after development is over; also called system testing or final testing; adv-cost can be saved, no stubs/drivers required, very fast; dis adv-hard to debug, defect locations are not found easily]
9.Sandwich testing-[divides testing into two parts and follows both parts starting from both ends, i.e. the top-down approach and the bottom-up approach either simultaneously or one after another]
10.Critical path first-[also called skeleton development; the critical path of the system is tested first]
11.Subsystem testing-[a collection of units, sub-modules and modules integrated to form subsystems; mainly concentrates on integration and interface errors]
12.System testing-[testing done on the system before it is delivered to the customer; based on the overall requirements; covers all combined parts of the system]
ACCEPTANCE TESTING: final stage of testing
1.Characteristics-[final opportunity to examine the software; conducted in the production environment; determines whether the developed system meets the predefined criteria of acceptance or not]
2.Acceptance testing criteria-[criteria for acceptance or rejection of the software; a risk reduction mechanism; helps buyer and seller get satisfaction by reducing possible risk; risk to the buyer-accepting a bad product; risk to the seller-a good product is rejected because it does not satisfy the acceptance criteria]
3.Alpha testing-[test done by the client at the developer's site; adv-problems are solved immediately, the customer sees the system working correctly; dis adv-the customer's data may not represent actual data, the lab environment may not represent the real-life environment]
4.Beta testing-[test done by the end users at the client's site; adv-identifies the gap between the actual environment and the requirements, the application works in an environment with live data; dis adv-users are not trained in handling the system, changes in the user environment may not be captured as the organization may keep adding new systems]

5.Criticality of requirements-[criticality from the usage point of view, criticality from the acceptance point of view]
6.Adv of acceptance testing-[gives confidence to the users; does not make any system assumptions]
7.Dis adv-[done with predefined test cases, which limits the scope of acceptance testing; does not address reusability]
SPECIAL TESTS:
1.Complexity testing-[verification technique where the complexity of the system design and coding is verified through reviews, walkthroughs and inspections]
2.Graphical user interface testing-[the GUI is the most important part of the application along with functionality-font size, colours, spelling etc.; every application should follow Microsoft rules, e.g. controls should be in capital letters, etc.]
3.Compatibility testing-[testing whether the software is compatible with other elements of a system; examples-browsers, operating systems and hardware]
4.Security testing-[checks the levels of security and protects the application; authorized personnel can access only the functions available to their security level; process-make a list of all possible users of the system, define the probability of a security breach]
5.Recovery testing-[confirms that the program recovers from expected or unexpected events without loss of data or functionality; events include shortage of disk space, unexpected loss of communication, or power-cut conditions; example-machine recovery]
6.Installation testing-[intended to find out how the application can be installed by using the installation guide or installation CD]
7.Error handling testing-[while working in the system, users may enter wrong data or select wrong options; the application is expected to help the user through error messages; types-preventive messages, auto-corrective messages, suggestive messages, detective messages]
8.Smoke testing-[testing the basic features of an application; only positive testing is done; done by the test manager or a senior tester]
9.Sanity testing-[a deeper level of smoke testing; more in-depth than smoke testing; tests the major features of an application]
10.Parallel testing-[done by comparing the existing system with the newly designed system; used to compare results from two systems run in parallel to find differences; e.g. check that a new version of the application performs correctly alongside the existing one, compare inputs and outputs, etc.]
11.Execution testing-[performed to ensure that the system achieves the desired level of proficiency; may be conducted as part of alpha and beta testing; determines whether the system meets the design objectives as defined in the requirements and can be used at the right point of time]
12.Regression testing-[determines whether changed components have introduced any errors into unchanged components; retesting the unchanged parts of the application; test cases are re-executed to check that the previous functionality of the application still works and that the new changes have not introduced any new bugs (see the sketch at the end of this unit)]
13.Adhoc testing-[also known as monkey, exploratory or random testing; performed without any formal test plan, test case, test scenario or test data; the tester tests the system on the basis of error guessing and experience with similar applications in the past]
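Smoke and regression testing above come down to choosing which subset of test cases is re-executed. A minimal sketch of how this is often organized with pytest markers; the login() stub and the marker names are assumptions for illustration, not part of the notes.

```python
# test_login.py -- a sketch of tagging tests for smoke vs. regression runs.
# The login() stub and the marker names "smoke"/"regression" are illustrative
# assumptions; custom markers should be registered in pytest.ini to avoid warnings.
import pytest


def login(user: str, password: str) -> bool:
    """Stand-in for the application feature under test."""
    return bool(user) and bool(password)


@pytest.mark.smoke
def test_login_basic_positive_path():
    # Smoke test: only the basic, positive behaviour of a core feature.
    assert login("alice", "secret") is True


@pytest.mark.regression
def test_login_rejects_blank_password():
    # Regression test: re-executed after changes to confirm existing behaviour still holds.
    assert login("alice", "") is False
```

With the markers registered, `pytest -m smoke` runs only the basic-feature checks, while a full run (or `pytest -m regression`) re-executes the wider suite after a change to catch newly introduced bugs.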

UNIT 3 TECHNIQUES FOR AUTOMATING TEST EXECUTION


THE V-MODEL: each development activity has a corresponding test activity (diagram-page 44 in notes)
1.Development activities-requirements, function, design, code
2.Test activities-acceptance test, system test, integration test, unit test; designing acceptance test cases will find defects in the requirements, designing system test cases will find defects in the functional specification, designing integration test cases will find defects in the design, and designing unit test cases will find defects in the code
TOOL SUPPORT FOR LIFE CYCLE TESTING: (diagram-page 45 in notes)
1.Test design tools help to derive test inputs or test data
2.Logical design tools work from the logic of a specification
3.Physical design tools manipulate existing data or generate test data
4.Test management tools include tools to assist in test planning and in keeping track of what tests have been run
5.Static analysis tools analyze code without executing it
6.Coverage tools assess how much of the software under test has been exercised by a set of tests
7.Debugging tools are not really testing tools; they enable the developer to step through the code, executing one instruction at a time and looking at the contents of data locations
8.Dynamic analysis tools assess the system while the software is running
9.Performance testing tools measure the time taken for various events
10.Load testing tools generate system traffic
THE PROMISE OF TEST AUTOMATION: 1.run existing (regression) tests on a new version of a program, 2.run more tests more often, 3.perform tests which would be difficult or impossible to do manually, 4.better use of resources, 5.consistency and repeatability of tests, 6.reuse of tests, 7.earlier time to market, 8.increased confidence
COMMON PROBLEMS OF TEST AUTOMATION: 1.unrealistic expectations, 2.poor testing practice, 3.false sense of security, 4.technical problems, 5.organizational issues, 6.maintenance of automated tests
THE LIMITATIONS OF AUTOMATING SOFTWARE TESTING: 1.does not replace manual testing, 2.manual tests find more defects than automated tests, 3.greater reliance on the quality of the tests, 4.test automation does not improve effectiveness, 5.tools have no imagination, 6.test automation may limit software development
SCRIPT PRE-PROCESSING: term used to describe different script manipulation techniques; less error prone
1.Script pre-processing functions:
Beautifier-checks the layout and format of scripts; makes them easier to read and understand for different people
General substitution-the idea is to hide some of the complication and intricate detail of scripts by substituting simple but meaningful alternatives; the two main purposes of substitution are to replace cryptic character sequences and to replace long and complicated sequences with simpler and more meaningful ones
Static analysis-takes a more critical look at scripts or tables, checking for actual and possible scripting defects, for example a misspelled or incomplete instruction; its only goal is the detection of defects
2.Implementing pre-processing-using standard text manipulation tools, a substitution table, or an interactive editor; the command file is run immediately before the scripts are compiled
SCRIPTING TECHNIQUES: (see example-page 53 in notes)
1.Linear scripts-[what you end up with when you record a test case performed manually;
it contains all the keystrokes, including function keys, arrow keys and alphanumeric keys, and may also include comparisons; adv-no planning is required, just record any manual task, so you can quickly start automating; dis adv-labour intensive, and the script fails if something happens when it is replayed that did not happen when it was recorded]
2.Structured scripting-[parallel to structured programming; special instructions are used to control the execution of the script; there are three basic control structures-sequence, selection (making a decision) and iteration (the ability to repeat a sequence of one or more instructions as many times as required); adv-more robust, can be made modular by calling other scripts; dis adv-the script becomes a more complex program, and the test data is still 'hardwired' into the script]
3.Shared scripts-[scripts that are used (or shared) by more than one test case; allows one script to be 'called' by another; produce one script that performs some task that has to be repeated for different tests, then whenever that task has to be done we simply call this script at the appropriate point in each test case; adv-we do not have to spend time scripting the task repeatedly, and we have only one script to change if something concerning the repeated task changes; dis adv-it can be hard to find an appropriate script]
4.Data-driven scripts-[store test inputs in a separate (data) file rather than in the script itself, leaving the control information (e.g. menu navigation) in the script; adv-the same script can be used to run different tests; dis adv-the data files must be well managed]
5.Keyword-driven scripts-[a limitation of the data-driven technique is that the navigation and actions performed have to be the same for each test case, and the logical 'knowledge' of what the tests are is built into both the data file and the control script, so they need to be synchronized; instead, a set of keywords is used to indicate the tasks to be performed; the control script reads each keyword in a test file in turn and calls the associated supporting script (see the sketch after this list); adv-low maintenance cost, non-programming testers can write tests, many tests can be implemented without increasing the number of scripts]
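A minimal sketch of the keyword-driven technique described above: a generic control script reads keyword lines from a test file and dispatches each one to a small supporting function. The keywords (enter, add, check), the CSV test-file format and the calculator-style actions are assumptions invented for the example, not part of the notes.

```python
# Minimal sketch of a keyword-driven test interpreter.
# Each line of the test file holds: keyword, arguments..., so the logical test
# lives in data while this control script stays generic.
# The keywords and the file format are illustrative assumptions.
import csv

state = {"value": 0.0}          # state of the (toy) application under test


def kw_enter(arg: str) -> None:
    state["value"] = float(arg)


def kw_add(arg: str) -> None:
    state["value"] += float(arg)


def kw_check(arg: str) -> None:
    expected = float(arg)
    assert state["value"] == expected, f"expected {expected}, got {state['value']}"


# mapping of keyword -> supporting script (function)
KEYWORDS = {"enter": kw_enter, "add": kw_add, "check": kw_check}


def run_test_file(path: str) -> None:
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or row[0].startswith("#"):
                continue                      # skip blank lines and comments
            keyword, *args = [cell.strip() for cell in row]
            KEYWORDS[keyword](*args)          # dispatch to the supporting script
    print(f"{path}: PASS")


if __name__ == "__main__":
    # Example test file contents (add_test.csv):
    #   enter, 2
    #   add, 3
    #   check, 5
    run_test_file("add_test.csv")
```

The same control script can run any number of logical tests simply by pointing it at different test files, which is where the low maintenance cost claimed above comes from.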

UNIT 4 TOOLS TO AUTOMATE TESTING



THE TOOL SELECTION PROJECT: it must be funded, resourced and staffed adequately, with a project manager and other people involved, and it needs to be given adequate priority.
THE TOOL SELECTION TEAM: a team makes the tool selection decision-the tool selection team leader and the other tool selection team members.
IDENTIFYING REQUIREMENTS: some requirements will be related to the testing problems; other requirements include the technical and non-technical constraints on tool selection. Some examples of problems are: 1.manual testing problems (e.g. too time consuming, boring, error prone); 2.no time for regression testing when small changes are made to the software; 3.set-up of test data or test cases is error prone; 4.inadequate test documentation; 5.not knowing how much of the software has been tested.

IDENTIFYING CONSTRAINTS:
1.Environmental constraints (hardware and software)
2.Commercial supplier constraints-a good relationship with your vendor can help you to progress your test automation in the direction you want it to go; factors to take into consideration when evaluating the tool vendor's organization: Is the supplier a bona fide company? How mature are the company and the product? etc.
3.Cost constraints-cost is the most visible constraint on tool selection; cost factors include: purchase or lease price (one-off, annual, or other anniversary renewal), cost basis (per seat, per computer, etc.), cost of training in the use of the tool (from the tool vendor), any additional hardware needed (e.g. PCs, additional disk space or memory), any additional software needed (e.g. updates to operating systems or NetWare)
4.Political constraints-you may be required to buy the same tool that your parent company uses
5.Quality constraints-include both functional and non-functional aspects; some suggestions you might like to consider: How many users can use the tool at once? Can test scripts be shared? What skill level is required to use the tool effectively? What programming skills are needed to write test scripts? etc.

IDENTIFYING TOOL AVAILABILITY IN THE MARKET:


1.Feature evaluation-if you decide to buy, the next step is to familiarize yourself with the general capabilities of the test automation tools: which features are the most important ones to meet your needs and objectives in your current situation?
2.Mandatory features-used to rule out any tool that does not fulfil your essential conditions
3.'Don't care' features-the 'don't care' category is used for features that are of no interest to you
4.Features will change from one category to another-your feature list will change as you progress in evaluating tools; the tool vendors are sure to point out features they can supply but which you did not specifically request, and other tool users may recommend a feature as essential because of their experience
5.Types of features-rated on a seven-point scale (see the scoring sketch below)
EVALUATING THE CANDIDATE TOOLS: collect current information about the tools
1.Questions to ask tool vendors-What does this tool do? What are the benefits of using this tool? How can this tool solve my problems? etc.
2.Consult tool evaluation reports-consult one or more of the publications that have evaluated testing tools
3.Contact reference sites-How long have you been using this tool? Are you basically happy with it? How many copies/licenses do you have? How many users can be supported? What hardware and software platforms are you using? How did you evaluate and decide on this tool? etc.
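A minimal sketch of the feature-evaluation arithmetic described above: mandatory features rule a tool out, 'don't care' features are simply ignored, and desirable features are rated on the seven-point scale and weighted. The tool names, feature names, weights and ratings are invented for illustration.

```python
# Minimal sketch of scoring candidate tools against a feature list.
# Tool names, features, weights and ratings are invented; mandatory features
# rule a tool out, desirable ones are rated 0-7 and weighted by importance.
MANDATORY = ["runs_on_our_platform", "captures_gui_output"]

# weight of each desirable feature (its importance to us)
DESIRABLE_WEIGHTS = {"scripting_language": 3, "reporting": 2, "multi_user": 1}

candidates = {
    "ToolA": {"runs_on_our_platform": True, "captures_gui_output": True,
              "scripting_language": 6, "reporting": 4, "multi_user": 7},
    "ToolB": {"runs_on_our_platform": True, "captures_gui_output": False,
              "scripting_language": 7, "reporting": 7, "multi_user": 7},
}

for name, feats in candidates.items():
    if not all(feats.get(m) for m in MANDATORY):
        print(f"{name}: ruled out (fails a mandatory feature)")
        continue
    score = sum(w * feats.get(f, 0) for f, w in DESIRABLE_WEIGHTS.items())
    print(f"{name}: weighted score {score}")
```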

DECISION MAKING:
1.Assess against the business case-before making a recommendation, assess the business case: will the potential savings from this tool give a good return on investment, taking into account the purchase/lease price, training costs and ongoing internal tool costs? (a sketch of this calculation follows at the end of this unit)
2.When to stop evaluating-if there is only one clear candidate tool you could stop evaluating, but it may be better to continue with the evaluation process until all of the people involved are happy with the final decision; otherwise there may be problems later
3.Completing the evaluation and selection process-collect information from the winning vendor and the losing vendors
4.Build or buy?-if no tools meet your requirements within your constraints, build your own; a tool you build yourself can be made most suitable for your own needs, but it must also be well supported in terms of documentation, help and training.
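Assessing the business case above is essentially a small return-on-investment calculation; the sketch below uses entirely invented figures to show the shape of it.

```python
# Minimal sketch of the business-case arithmetic for a test execution tool.
# All figures are invented for illustration only.
purchase_price = 12000        # one-off licence cost
training_cost = 3000          # vendor training
ongoing_cost_per_year = 2000  # internal support, script maintenance

manual_hours_saved_per_year = 400
cost_per_tester_hour = 40

yearly_saving = manual_hours_saved_per_year * cost_per_tester_hour
first_year_cost = purchase_price + training_cost + ongoing_cost_per_year

print(f"First-year cost : {first_year_cost}")
print(f"Yearly saving   : {yearly_saving}")
print(f"Payback (years) : {first_year_cost / yearly_saving:.1f}")
```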

UNIT 5 AUTOMATED COMPARISON



DYNAMIC COMPARISON: performed while a test case is executing; test execution tools normally include comparator features that check things as they appear on the screen; outputs that are not visible on the screen can also be compared dynamically, such as GUI attributes
1.Tool support and implementation-instructions must be inserted into the test script; these instructions tell the tool what is to be compared, when, and what it is to be compared against; the tool can be instructed to capture a particular part of a screen or window and save the current instance of it as the expected result
2.Increased complexity, higher maintenance costs-it makes the test scripts more complex
POST-EXECUTION COMPARISON: (diagram-page 101 in notes) performed after the test case has executed; the script is a straight recording of the inputs; the log file contains just the basic information, including start and end times, who ran it, etc.; the expected outcomes are held in a separate file with the same name as the output file with which it has to be compared, but in a different directory (we could instead have used a naming convention such as Expcountries2.dcm to distinguish our expected results file from the actual results file); there are two separate processes-the first executes the test case while the second performs the comparison
COMPLEX COMPARISON: complex comparison (also known as intelligent comparison) enables us to compare actual and expected outcomes with known differences between them; a complex comparison will ignore certain types of difference (usually those that we expect to see or that are not important to us) and highlight others. A few examples of where complex comparisons are required: a different unique identity code or number generated each time; output in a different order (a new version of the software may sort on a different key); different values within an acceptable range (e.g. temperature sensing);
different text formats (e.g. font, colour, size).
1.Simple masking-invoice example (see example-page 105 in notes)
2.Implementing complex comparisons-implemented by masking
TEST SENSITIVITY:
1.Sensitive and robust tests-if we compare the whole screen after each step in a test case, then more or less any change in the output to the screen will cause a mismatch and the test case will fail; if we compare only the last message output to the screen at the end of a test case, then a mismatch occurs only if this message differs, and any other changes in the output to the screen will not cause the test case to fail
2.Trade-offs between sensitive and robust tests-robust tests: less susceptibility to changes, less implementation effort, more missed defects, less storage space; sensitive tests: more susceptibility to changes, higher implementation effort, fewer missed defects, more storage space
3.Redundancy-if we run a set of sensitive test cases there is a good chance that a number of them will fail for the same reason; in this situation each of the failed test cases is highlighting the same defect
COMPARING DIFFERENT TYPES OF OUTCOMES:
1.Disk-based outcomes-include databases, files, and directories/folders
2.Comparing text files-text files are perhaps the easiest type of data to compare
3.Comparing non-textual forms of data-comparing anything other than standard text files usually requires a specialized comparator, either one that can handle character-based screen images that use escape sequences and graphical characters, or one specifically designed to handle a variety of graphics data formats
4.Comparing databases and binary files-comparators that handle databases and/or binary files tend to be fairly specialized and concentrate on just one particular type of database or binary file format
5.Character-based applications-it is fairly straightforward to specify the location of the characters you want to compare, since the screen is already an array of individual characters each addressed by a row and column number
6.GUI applications-applications that have a GUI can display a much greater range of output styles than a character-based application; most of these output styles have become the norm for the majority of users as applications have standardized on items such as windows, icons, menu bars, dialog boxes, buttons, and check boxes, to name but a few; whatever is displayed, it is converted into a bitmap
COMPARISON FILTERS: a filter is an editing or translating step that is performed on both an expected outcome file and the corresponding actual outcome file; more than one filtering task can be performed on any expected/actual test outcome before comparison; using filters before comparing the files means that the actual comparison can use a much simpler comparator tool; effectively, a filter removes the legitimate differences from the expected and actual outcomes, thereby ensuring that a simple comparison tool will highlight only unexpected differences (see the sketch below)
1.Advantages and disadvantages of filters-availability of text manipulation tools, reuse, easier implementation and debugging, simple comparison tools can perform complex comparisons
TESTWARE ARCHITECTURE: testware is the term we use to describe all of the artifacts required for testing, including documentation, scripts, data, and expected outcomes; architecture is the arrangement of all of these artifacts; much testware is common to both automated and manual testing
1.Terminology-testware comprises all artifacts used and produced by testing; the artifacts used by testing we call the test materials, and these include the inputs, scripts, data, documentation (including specifications), and expected outcomes; the artifacts produced by testing we call the test results; there are two types of test results, the products and the by-products
2.Key issues:
Scale-test materials: inputs, scripts, data, documentation, expected outcomes; test results: products (actual outcomes) and by-products (log, status, difference report)
Reuse-a good set of automated scripts will involve a lot of script reuse; reuse is something that has to be designed into scripts
Multiple versions-the real value of an automated test suite should be realized when a new version of the software is to be tested
Platform and environment independence-this fourth testware architecture issue applies only where the same software has to be tested across different environments or hardware platforms; ideally, all tests would be platform independent such that the same files, scripts, etc. can be used on every platform or in every environment
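A minimal sketch of a comparison filter combined with a simple post-execution comparison, as described above: legitimate differences (a run timestamp, a generated order number) are masked in both the expected and actual outcome files, so that a plain line-by-line comparison reports only unexpected differences. The regular-expression patterns, the masking tokens and the expected/actual directory layout are assumptions made up for the example.

```python
# Minimal sketch of a comparison filter plus a simple post-execution comparison.
# Legitimate differences (run date/time, generated order numbers) are masked in
# both the expected and actual outcome files, so a plain line-by-line diff only
# reports unexpected differences. Patterns and file names are illustrative.
import difflib
import re

FILTERS = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<DATETIME>"),  # timestamps
    (re.compile(r"Order No: \d+"), "Order No: <ID>"),                    # generated ids
]


def apply_filters(text: str) -> str:
    for pattern, replacement in FILTERS:
        text = pattern.sub(replacement, text)
    return text


def compare(expected_path: str, actual_path: str) -> bool:
    with open(expected_path) as f:
        expected = apply_filters(f.read()).splitlines()
    with open(actual_path) as f:
        actual = apply_filters(f.read()).splitlines()
    diff = list(difflib.unified_diff(expected, actual,
                                     fromfile=expected_path, tofile=actual_path,
                                     lineterm=""))
    if diff:
        print("\n".join(diff))      # unexpected differences only
        return False
    print("outcomes match")
    return True


if __name__ == "__main__":
    # expected outcome kept in a separate directory with the same file name,
    # following the naming convention mentioned in the notes
    compare("expected/countries2.dcm", "actual/countries2.dcm")
```

Keeping the filtering separate from the comparison mirrors the two-process structure of post-execution comparison: the first process runs the test and produces the actual outcome, and the filtered comparison is done afterwards by a much simpler comparator.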

PRE- AND POST-PROCESSING:
1.Pre-processing-for most test cases there are prerequisites that must be in place before execution can begin; these should be defined as part of each test case and implemented for each test case before it is performed; example-setting up a database
2.Post-processing-immediately after a test case has been executed, the test result artifacts are handled, comprising the products (actual outcome) and by-products (e.g. tool log file) of test execution; example-tidying up these artifacts
3.Why use these terms-the terms pre- and post-processing are a convenient way of describing a big chunk of testing; consider the following characteristics of pre- and post-processing tasks: there are lots of them, they come in packs, many of them are the same, and they can be automated easily
4.Why automate pre- and post-processing-pre- and post-processing tasks cry out to be automated; performing them manually is both error prone and time consuming; a key difference between manual and automated testing is in the automation of the pre- and post-processing tasks and in the sequence of the tasks: if tests are run manually, the analysis of results is usually done immediately after the comparison of actual outcome to expected outcome, and the tester will spend time analyzing why there are differences and whether it is the software that is wrong or the test itself; with automated testing, all analysis of differences is postponed until after the tests have run
5.Pre-processing tasks-create, check, reorganize, convert (a sketch follows at the end of this unit)
6.Post-processing tasks-delete, check, reorganize, convert
ATTRIBUTES OF TEST MAINTENANCE: the first point of view is what appears to be a 'good idea' on the surface but does not take into account any effect on maintenance costs; the second point of view looks at the potential problems caused by the attribute; a third section under each attribute looks at possible solutions to reduce the maintenance problems
1.Number of test cases-Good idea? With each testing effort we add more and more tests to the test suite. Problem: the more tests there are in the test suite, the more tests there will be to maintain
2.Quantity of test data-Good idea? Use lots of data in the tests to help ensure thorough testing. Problem: the more test data there is, the more maintenance effort is needed
3.Format of test data-Good idea? Use any format of data applicable to the system under test as test input and test output. Problem: the more specialized the format of data used, the more likely it is that specialized tool support will be needed to update the tests
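A minimal sketch of automating the pre- and post-processing tasks listed above (create/check before the test case runs, reorganize/delete afterwards); the directory names, the database file and the run_test() placeholder are assumptions invented for the example.

```python
# Minimal sketch of automated pre- and post-processing around a test case.
# Directory names, the database file and run_test() are illustrative assumptions.
import shutil
from pathlib import Path


def pre_process(test_dir: Path) -> None:
    # create: start from a known baseline copy of the test database
    shutil.copy(test_dir / "baseline.db", test_dir / "work.db")
    # check: prerequisites are in place before execution begins
    assert (test_dir / "work.db").exists(), "test database was not created"


def run_test(test_dir: Path) -> None:
    # placeholder for executing the automated test case itself
    (test_dir / "actual_outcome.txt").write_text("result\n")


def post_process(test_dir: Path, results_dir: Path) -> None:
    results_dir.mkdir(parents=True, exist_ok=True)
    # reorganize: move the products/by-products of execution to the results area
    shutil.move(str(test_dir / "actual_outcome.txt"),
                results_dir / "actual_outcome.txt")
    # delete: clean up working files so the next test starts from a known state
    (test_dir / "work.db").unlink(missing_ok=True)


if __name__ == "__main__":
    test_dir, results_dir = Path("testcase_01"), Path("results/testcase_01")
    pre_process(test_dir)
    run_test(test_dir)
    post_process(test_dir, results_dir)
```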

NOTE: headings are shown like 'PERSPECTIVE OF TESTING', side headings like 'Sensitive and robust tests', and sub-headings like 'test materials'.
