
Domain Testing: Divide and Conquer

by Sowmya Padmanabhan Bachelor of Engineering in Computer Engineering SAKEC, University of Mumbai, India 2001

A thesis submitted to Florida Institute of Technology in partial fulfillment of the requirements for the degree of

Master of Science in Computer Sciences

Melbourne, Florida May 2004

Copyright by Sowmya Padmanabhan 2004. All Rights Reserved. This research has been partially funded by the National Science Foundation grant EIA-0113539 ITR/SY+PE: "Improving the Education of Software Testers." Any opinions, findings or conclusions expressed in this thesis are those of the author and do not necessarily reflect the views of the National Science Foundation.

The author grants permission to make single copies

_________________________________

We the undersigned committee hereby approve the attached thesis

Domain Testing: Divide and Conquer

by Sowmya Padmanabhan Bachelor of Engineering in Computer Engineering SAKEC, University of Mumbai, India 2001

Cem Kaner, J.D., Ph.D. Professor Computer Sciences Principal Advisor

William D. Shoaff, Ph.D. Assistant Professor and Head Computer Sciences Committee Member

Robert Fronk, Ph.D. Professor and Head Science Education Committee Member

Abstract
Domain Testing: Divide and Conquer by Sowmya Padmanabhan Principal Advisor: Dr. Cem Kaner
Domain testing is a well-known software testing technique. The central idea of my thesis work is to develop and validate instructional materials that train people well in domain testing. The domain testing approach presented in my thesis is a procedural black-box testing approach. I present thorough literature reviews of domain testing and of instructional design and evaluation strategies. This is followed by the designs of the domain testing training material and the experiments conducted on 23 learners. Finally, I present the results of the experiments and draw inferences. I found that the instructional material, with its share of pluses and minuses, was effective in increasing learners' competence levels in domain testing to the degree that they were able to successfully complete the tasks corresponding to the instructional goals. However, in the opinion of the evaluators of the final performance tests, the learners performed poorly on the performance tests, and their performance is not comparable to the standard of a tester in a job interview who has one year's experience and considers herself reasonably good at domain testing.


Table of Contents Chapter 1: Introduction............................................. 1


1.01 Problem Description..........................................................1 1.02 Background ........................................................................1
1.02.01 Definitions ..................................................................... 1 1.02.02 White-Box Testing Approach.......................................... 3 1.02.03 Black-Box Testing Approach .......................................... 3 1.02.04 Complete Testing/Exhaustive Testing.............................. 4 1.02.05 Domain Testing.............................................................. 4 1.02.06 Similarities among Different Interpretations of Domain Testing..6 1.02.07 Differences between Different Interpretations of Domain Testing .......................................................................... 6

1.03 Domain Testing Approach Presented in Training Material .......................................................................................9


1.03.01 Organization of the Thesis............................................... 9

Chapter 2: Domain Testing: A Literature Review 11


2.01 Partitioning of Input Domain ........................................ 11
2.01.01 White or Black ............................................................. 12
2.01.01.01 White-Box Testing Approach.............................................. 13
2.01.01.01.01 Path Analysis Approach.....................................................13 2.01.01.01.02 Mutation Testing Approach ...............................................15

2.01.01.02 Black-Box Testing Approach/ Specification-Based Approach................................................................................................16


2.01.01.02.01 Functional Testing Approach.............................18

2.01.01.03 Combination of Black-Box and White-Box Testing Approaches ................................................................................................ 19

2.01.02 Driving Factor.............................................................. 20


2.01.02.01 Confidence-Based Approach............................................... 20 2.01.02.02 Risk-Based Approach ........................................................... 21

2.01.03 Linear or Non-Linear .................................... 23 2.01.04 Overlapping or Disjoint Subdomains ............. 23 2.01.05 Size of Subdomains: Equally Sized or Unequally Sized?....................................................................................... 24

2.02 Selecting Representatives from Subdomains............... 24


2.02.01 Random Selection......................................................... 25 2.02.02 Proportional Partition Testing........................................ 25 2.02.03 Risk-Based Selection .................................................... 28
2.02.03.01 Boundary Value Analysis..................................................... 28 2.02.03.02 Special Value Testing ........................................................... 31 2.02.03.03 Robustness Testing................................................................ 31 2.02.03.04 Worst Case Testing ............................................................... 31

2.02.04 Which Test Case Selection Method Should We Use? ..... 32

2.03 Testing Multiple Variables in Combination ................ 33


2.03.01 Cause-Effect Graphs..................................................... 33 2.03.02 Pairwise / Orthogonal Arrays Testing ............................ 34 2.03.03 Combinatorial Testing Using Input-Output Analysis ...... 35 2.03.04 All Pairs Combination................................................... 36 2.03.05 Weak Robust Equivalence-Class Testing ....................... 36 2.03.06 When to Use What Combination Technique? ................. 37

Chapter 3: Instructional Design and Evaluation: A Literature Review................................................. 39


3.01 Definitions....................................................................... 39 3.02 Learning Outcomes ........................................................ 40
3.02.01 Intellectual Skills .......................................................... 41 3.02.02 Cognitive Strategies ...................................................... 43 3.02.03 Verbal Information ....................................................... 43 3.02.04 Motor Skills ................................................................. 44 3.02.05 Attitudes....................................................................... 44 3.02.06 Taxonomy of Learning Levels ....................................... 45

3.03 Instructional Design ....................................................... 47


3.03.01 Identify Instructional Goals ........................................... 47 3.03.02 Conduct Instructional Analysis/Task Analysis................ 47 3.03.03 Identify Entry Behaviors and Learner Characteristics..... 48 3.03.04 Identify Performance Objectives.................................... 49 3.03.05 Develop Criterion-Referenced Test Items ...................... 49 3.03.06 Design Instructional Strategy......................................... 50
3.03.06.01 Gain Attention ....................................................................... 50 3.03.06.02 Informing the Learner of the Objective .............................. 50 3.03.06.03 Stimulating Recall of Prerequisite Learned Capabilities . 51 3.03.06.04 Presenting the Stimulus Material ........................................ 51 3.03.06.05 Providing Learning Guidance.............................................. 51 3.03.06.06 Eliciting the Performance..................................................... 51 3.03.06.07 Providing Feedback............................................................... 51 3.03.06.08 Assessing Performance......................................................... 52

3.03.06.09 Enhancing Retention and Transfer...................................... 52

3.03.07 Develop Instructional Materials ..................................... 52 3.03.08 Conduct Formative Evaluation ...................................... 53 3.03.09 Revision of Instructional Materials ................................ 54 3.03.10 Conduct Summative Evaluation..................................... 55

3.04 Evaluation of Instruction ............................................... 55


3.04.01 Different Evaluation Approaches................................... 56 3.04.02 Collecting Quantitative Information for Evaluation........ 57
3.04.02.01 Knowledge and Skills Assessment...................................... 57 3.04.02.02 Attitude/Behavior Assessment............................................. 60

Chapter 4: Instructional Design and Evaluation Strategy for Domain Testing Training ................... 62
4.01 Purpose of Developing the Instructional Material....... 62 4.02 Domain Testing Approach Used in the Training ........ 62 4.03 Overview of the Instructional Materials ...................... 62 4.04 Instructional Strategy ..................................................... 63 4.05 Evaluation Strategy ........................................................ 67
4.05.01 Evaluation Materials ..................................................... 67 4.05.02 Mapping Assessment Items to Instructional Objectives .. 68

Chapter 5: Experiment Design ................................ 69


5.01 Overview of the Experiment.......................................... 69 5.02 Instructional Review Board........................................... 70 5.03 Finding Test Subjects..................................................... 70

5.04 Facilities Used for the Experiment................................ 71 5.05 Before the Experiment ................................................... 72 5.06 Structure of Each Training Period ................................ 72 5.07 Experimental Error ......................................................... 74

Chapter 6: Results and Their Analyses .................. 76


6.01 Experiment Pretest and Posttest Results ...................... 76
6.01.01 Paper-Based Pretest and Posttest: Results and Their Interpretation............................................................................ 76

6.02 Performance Test Results .............................. 99
6.03 Questionnaire: Confidence, Attitude and Opinion of Learners .................................................. 100

Chapter 7: Conclusion............................................ 113


7.01 Domain Testing Training: Where Does It Stand? ... 113
7.01.01 Pluses......................................................................... 113 7.01.02 Minuses...................................................................... 115 7.01.03 Final Remarks ............................................................ 116

Bibliography ............................................................ 118 Appendices............................................................... 131


List of Charts
Chart 6.01 Paper-Based Pretest and Posttest Final Percentage Score Comparison .............. 97
Chart 6.02 Final Scores Percentage Increase .............. 98
Chart 6.03 Averages .............. 98
Chart 6.04 Averages: Final Day (5) Questionnaire Responses .............. 110


List of Tables
Table 6.01 Question 1 Paper-Based Pretest and Posttest Score Comparison ...... 78
Table 6.02 Question 2 Paper-Based Pretest and Posttest Score Comparison ...... 80
Table 6.03 Question 3 Paper-Based Pretest and Posttest Score Comparison ...... 82
Table 6.04 Question 4 Paper-Based Pretest and Posttest Score Comparison ...... 84
Table 6.05 Question 5 Paper-Based Pretest and Posttest Score Comparison ...... 86
Table 6.06 Question 6 Paper-Based Pretest and Posttest Score Comparison ...... 88
Table 6.07 Question 7 Paper-Based Pretest and Posttest Score Comparison ...... 90
Table 6.08 Question 8 Paper-Based Pretest and Posttest Score Comparison ...... 92
Table 6.09 Question 9 Paper-Based Pretest and Posttest Score Comparison ...... 94
Table 6.10 Final Scores - Pretest and Posttest Score Comparison ...... 95
Table 6.11 Day One Questionnaire Responses ...... 100
Table 6.12 Day Two Questionnaire Responses ...... 102
Table 6.13 Day Three Questionnaire Responses ...... 104
Table 6.14 Day Four Questionnaire Responses ...... 106

Table 6.15 Day Five Questionnaire Responses ...... 108


Acknowledgements
Dr. Cem Kaner, my principal advisor, who helped me with every aspect of my thesis work. Sabrina Fay for helping me with the instructional design of the training material for domain testing. Dr. Cem Kaner, James Bach and Pat McGee for spending their valuable time in evaluating the performance tests. Pat McGee for helping me out with some questions I had about all-pairs combination and the test cases developed by equivalence class analysis using multidimensional analysis. Amit Paspunattu for helping me with the literature search. Dr. Robert Fronk and Dr. William D. Shoaff, my committee members, for their valuable suggestions on improving my experiment design. All my learners for undergoing the training in domain testing and helping me evaluate the effectiveness of the instructional material and instruction on domain testing. The National Science Foundation and Rational/IBM for funding my thesis work. All those who directly or indirectly have helped me with my thesis.


Dedication
This thesis is dedicated to Almighty God for giving me the strength to successfully complete it, for uplifting me during the rough times and for cheering me during the good times. I am thankful to my loving and supportive family and to Sumit Malhotra, my best friend, without whose moral support the completion of this thesis would have been impossible.


Chapter 1: Introduction
1.01 Problem Description
The central idea of my thesis work is to develop and validate instructional materials that train people well in domain testing. The lack of systematic and effective materials for training novice testers in domain testing, a well-established software testing technique, was the driving force behind the conception of the idea of my thesis work. The National Science Foundation (NSF) has primarily funded this thesis research.

1.02 Background
This section gives a brief introduction to some basics in software testing and domain testing in an attempt to increase understanding of the detailed literature review of domain testing presented in the next chapter.

1.02.01 Definitions
Computer Program: A combination of computer instructions and data definitions that enable computer hardware to perform computational or control functions (IEEE Std. 610.12, 1990, p. 19).

Software: IEEE Std. 610.12 (1990) defined software as Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system (p.66).

Software Testing: Testing is the process of executing a program with the intent of finding errors (Myers, 1979, p. 5).

The purpose of testing is to determine whether a program contains any errors (Goodenough & Gerhart, 1975, p. 156).

Testing--A verification method that applies a controlled set of conditions and stimuli for the purpose of finding errors (IEEE Computer Society, 2004, 1).

Software testing in all its guises is a study of the software input space, the domain over which a program under test is supposed to operate. The whole question of "how should testing be done?" is a matter of input selection (Hamlet, 2000, p. 71).

Test Case: IEEE Std. 610.12 (1990) defined a test case as: (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) (IEEE Std. 829-1983) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. (p. 74)

Bug/Error/Fault: We test to find bugs, also called errors. IEEE Std. 610.12 (1990) defined a bug, error or fault as: (1) The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. For example, a difference of 30 meters between a computed result and the correct result. (2) An incorrect step, process, or data definition. For example, an incorrect instruction in a computer program. (3) An incorrect result. For example, a computed result of 12 when the correct result is 10. (4) A human action that produces an incorrect result. For example, an incorrect action on the part of a programmer or operator. (p. 31)

There are two general approaches to testing: white-box testing and black-box testing.

1.02.02 White-Box Testing Approach


In this testing approach, the program under test is treated as a white box or a glass box, something you can see through (Kaner, Falk & Nguyen, 1999; Myers, 1979). According to Myers (1979), this approach can also be called a logic-driven testing approach in which the tester does testing based on the internal structure of the program, usually ignoring the specifications. This approach is also known as structural testing or glass-box testing. IEEE Std. 610.12 (1990) defined structural, glass-box or white-box testing as Testing that takes into account the internal mechanism of a system or component (p. 71).

1.02.03 Black-Box Testing Approach


In this testing approach, the program under test is treated as a black box, something you cannot see through (Kaner et al., 1999; Myers, 1979). A black-box tester is unconcerned with the internals of the program (Howden, 1980a; Myers, 1979; Podgurski & Yang, 1993) and tests the program using the specifications of the program (Myers, 1979). Black-box testing has been defined by IEEE Std. 610.12 (1990) as (1) Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions (p. 35). Specification can be defined as A document that specifies, in a complete, precise, verifiable manner, the characteristics of a system or component, and, often, the procedures for determining whether these provisions have been satisfied (IEEE Std. 610.12, 1990, p. 69). According to Myers (1979), the black-box testing approach can also be called a data-driven or input/output-driven testing approach since black-box testers

feed the program a set of input values and observe the output to see if it is in accordance with what is expected in the specification document.
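To make the input/output-driven idea concrete, here is a minimal sketch; the discount rule, the amounts, and the function name are hypothetical illustrations, not taken from the thesis. The tester derives input/expected-output pairs from the specification alone and never inspects the implementation.

```python
# Hypothetical example: black-box testing a discount function against its
# specification, "orders of $100 or more receive a 10% discount."
# Amounts are whole dollars; the tester sees only inputs and outputs.

def discounted_total(order_total):
    """Implementation under test; its internals are irrelevant to the tester."""
    return order_total - order_total // 10 if order_total >= 100 else order_total

def run_black_box_tests():
    # (input, expected output) pairs derived from the specification alone
    spec_cases = [(50, 50), (99, 99), (100, 90), (200, 180)]
    return [(x, discounted_total(x), expected)
            for x, expected in spec_cases if discounted_total(x) != expected]

print(run_black_box_tests())  # an empty list means no observed failures
```

If the implementation diverged from the specification for any of these inputs, the list would report the input, the actual output, and the expected output, all without any knowledge of the program's internals.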

1.02.04 Complete Testing/Exhaustive Testing


One of the problems with software testing is that it is practically impossible to achieve complete or exhaustive testing (Burnstein, 2003; Collard, personal communication, July 22, 2003; Goodenough & Gerhart, 1975; Huang, 1975; Myers, 1979; Kaner, 2002a; Kaner et al., 1999). According to Kaner (2002a), exhaustive or complete testing is not possible because: 1. The domain of possible inputs is too large. 2. There are too many combinations of data to test. 3. There are too many possible paths through the program to test. 4. There are user interface errors, configuration and compatibility failures, and dozens of other dimensions of analysis. (p. 2)

Huang (1975) gave a simple yet very powerful example of why complete testing is just impossible. He described a program that has just two input variables, x and y, and one output variable z. If we were to assume that the variables are integers and the maximum value for x or y is 2^32, then each has 2^32 possible input values. This in turn means that the total possible combinations of input values for x and y is 2^32 x 2^32. Huang (1975) said, "Now suppose this program is relatively small, and on the average it takes one millisecond to execute the program once. Then it will take more than 50 billion years for us to complete the test!" (p. 289). This is called combinatorial explosion.
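Huang's arithmetic is easy to reproduce. The sketch below assumes 2^32 possible values per variable and one millisecond per execution, as in his example; it only confirms the order of magnitude, but however the constants are chosen, exhaustive testing of even this tiny two-input program takes astronomically long.

```python
# Back-of-the-envelope check of the combinatorial explosion in Huang's
# two-variable example: 2**32 values for x times 2**32 values for y.

combinations = 2**32 * 2**32              # 2**64 possible (x, y) input pairs
seconds = combinations / 1000             # at one execution per millisecond
years = seconds / (365 * 24 * 3600)

print(f"{combinations} combinations, roughly {years:.1e} years of testing")
```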

1.02.05 Domain Testing


Domain testing, one of the widely used software testing methodologies, was designed to alleviate the impossibility of complete testing and thereby enable

effective testing with reduced effort. Kaner and Bach (2003) described the fundamental goal or question in domain testing, saying, "This confronts the problem that there are too many test cases for anyone to run. This is a stratified sampling strategy that provides a rationale for selecting a few test cases from a huge population" (part 2, slide 3). Kaner, Bach and Pettichord (2002) define the domain of a variable as "a (mathematical) set that includes all possible values of a variable of a function" (p. 36). According to Hamlet and Taylor (1990), "Input partitioning is the natural solution to the two fundamental testing problems of systematic method and test volume" (p. 1402). Generally speaking, the tasks in domain testing consist of partitioning the input domain, the set of all possible input values, into a finite number of subsets or equivalence classes and then choosing a few representatives from each class as the candidates for testing (Goodenough & Gerhart, 1975; Jorgensen, 2002; Kaner et al., 1999; Myers, 1979). All members of an equivalence class are equivalent to each other in the sense that they are all sensitive to the same type of error (Goodenough & Gerhart, 1975; Howden, 1976; Jeng & Weyuker, 1989; Kaner & Bach, 2003; Kaner et al., 2002; Kaner et al., 1999; Myers, 1979; Richardson & Clarke, 1981; Weyuker & Ostrand, 1980). However, one member of a class might be more likely to expose such an error, or may also be able to expose a different error. In either of these cases, we would treat that member as a better (more powerful) member of the equivalence class. When one is available, we select the best representative (most powerful member) of the equivalence class (Kaner & Bach, 2003, part 2). Although most of the literature describes partitioning of the input domain, similar analysis can be applied to the output domain as well (Kaner et al., 2002; Kaner et al., 1999; Myers, 1979; Whittaker & Jorgensen, 2002).
For programs having multiple variables, the next task would be to combine the test cases in order to perform combination testing.

In sum, domain testing is a divide-and-conquer method of testing in which researchers systematically reduce an enormously large test data set to a few subsets and further select a few representatives from each.
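As a minimal illustration of these tasks, consider a hypothetical field that accepts integers from 1 to 100 (the field, its range, and the machine limits below are assumptions for the sketch, not drawn from the thesis). The input domain is partitioned into valid and invalid equivalence classes, and a couple of representatives are drawn from each class:

```python
# Sketch: partition the input domain of a hypothetical field that accepts
# integers from 1 to 100, then pick representatives from each class.

LOW, HIGH = 1, 100                      # assumed valid range (hypothetical)
INT_MIN, INT_MAX = -(2**31), 2**31 - 1  # assumed machine limits (hypothetical)

partitions = {
    "too small (invalid)": range(INT_MIN, LOW),       # ... up to 0
    "valid":               range(LOW, HIGH + 1),      # 1 .. 100
    "too large (invalid)": range(HIGH + 1, INT_MAX),  # 101 and up
}

def representatives(rng):
    # Treat the edges of each class as its "best representatives".
    return [rng[0], rng[-1]]

test_cases = {name: representatives(rng) for name, rng in partitions.items()}
print(test_cases)
```

Six test cases stand in for over four billion possible inputs, which is the divide-and-conquer reduction the text describes.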

1.02.06 Similarities among Different Interpretations of Domain Testing


There are differing interpretations of the domain testing methodology, but a few things are common to all of them. They all agree that input domain of any average program is enormously large and it is impossible to test for all data points. They advocate partitioning the input domain into a finite number of subsets based on some criterion. They also agree that all the members of any such subset are equivalent to each other with respect to the criterion that was used to classify them in their respective subset. All interpretations concur that one or more representatives must be chosen from each such subset.

1.02.07 Differences between Different Interpretations of Domain Testing


One of the differences lies in the criterion that is used for partitioning the input domain. Some researchers have used path analysis as the criterion for partitioning (Boyer, Elspas & Levitt, 1975; Clarke, Hassell & Richardson, 1982; DeMillo, Lipton & Sayward, 1978; Duran & Ntafos, 1981; Ferguson & Korel, 1996; Goodenough & Gerhart, 1975; Hajnal & Forgács, 1998; Hamlet, 2000; Howden, 1976; Howden, 1986; Huang, 1975; Jeng & Weyuker, 1989; Jeng & Weyuker, 1994; Koh & Liu, 1994; Podgurski & Yang, 1993; Weyuker & Jeng, 1991; White & Cohen, 1980; Zeil, Afifi & White, 1992a; Zeil, Afifi & White, 1992b; Zeil & White, 1981; White & Sahay, 1985). Some others have described mutation testing as a form of domain testing. Path analysis and mutation testing approaches are basically white-box approaches and are defined and described under sections 2.01.01.01.01 and 2.01.01.01.02, respectively.

Other researchers have used the features/functions and input/output properties described in the specification as the criterion for partitioning (Beizer, 1995; Chen, Poon & Tse, 2003; Hamlet, 1996; Howden, 1980; Hutcheson, 2003; Jorgensen, 2002; Kaner, 2002a; Kaner et al., 1999; Kaner et al., 2002; Kaner & Bach, 2003; Mayrhauser, Mraz & Walls, 1994; Myers, 1979; Ostrand & Balcer, 1988; Schroeder & Korel, 2000; Podgurski & Yang, 1993; Reid, 1997; Richardson, O'Malley & Tittle, 1989; Weiss & Weyuker, 1988). This is a black-box testing approach. Some have described a specific form of the black-box domain testing approach called functional testing (Hamlet, Manson & Woit, 2001; Howden, 1981; Howden, 1986; Howden, 1989; Podgurski & Yang, 1993; Vagoun, 1996; Zeil et al., 1992b). This has been defined under section 2.01.01.02.01. In the literature, Howden's works on functional testing have been described as laying the foundation of black-box testing. There are some others who have suggested incorporating black-box and white-box domain testing approaches into a combined approach that takes advantage of both the specification and the internal structure of a program (Binder, 1999; Chen, Poon, Tang & Yu, 2000; Goodenough & Gerhart, 1975; Hamlet & Taylor, 1990; Howden, 1980a; Howden, 1980b; Howden, 1982; Richardson & Clarke, 1981; Weyuker & Ostrand, 1980; White, 1984). Another difference is that some researchers have proposed that partitioning of the input domain should result in non-overlapping or disjoint subdomains or partitions (Howden, 1976; Jorgensen, 2002; Myers, 1979; White & Cohen, 1980). However, others allowed overlapping subdomains in their analyses, as they noted that in practice it may be impossible to get perfectly disjoint subdomains (Jeng & Weyuker, 1989; Kaner et al., 1999; Weyuker & Jeng, 1991). Yet another difference comes from the way best representatives are selected from an equivalence class.
Some have asserted that since all members of an equivalence class are equivalent to each other, any arbitrary member can be chosen at random to represent its respective equivalence class (Howden, 1976; Ntafos, 1998; Podgurski & Yang, 1993; Weyuker & Jeng, 1991; Weyuker & Ostrand,

1980). But others have described strategies such as boundary value analysis for selecting best representatives from each equivalence class (Binder, 1999; Hamlet, 2000; Howden, 1980b; Hutcheson, 2003; Jorgensen, 2002; Kaner & Bach, 2003; Kaner et al., 2002; Kaner et al., 1999; Myers, 1979; White & Cohen, 1980). In the boundary value analysis strategy, test data at the boundaries and just beyond the boundaries is selected. In addition, some researchers have described methods such as special value and worst case testing to supplement boundary value analysis (Jorgensen, 2002). Some have also described proportional partition testing, a partition testing strategy in which the test data selection is random but the number of test cases selected from a subdomain depends on the probability of failure of inputs in the subdomain (Chan, Mak, Chen & Shen, 1997; Chan, Mak, Chen & Shen, 1998; Chen, Wong & Yu, 1999; Chen & Yu, 1994; Chen & Yu, 1996b; Chen & Yu, 1996c; Leung & Chen, 2000; Ntafos, 1998; Ntafos, 2001). In other words, if we were to assume that each input in a subdomain is equally likely to occur, then the number of test cases selected would depend on the size of the subdomain.

There is also a difference in how linear and non-linear domains and continuous and discrete domain spaces are analyzed. Some researchers have described domain testing only for linear, continuous-space domains (Beizer, 1995; Clarke, Hassell & Richardson, 1982; White & Cohen, 1980). Others have extended their description of the domain testing methodology to non-linear and discrete domain spaces as well (Jeng & Weyuker, 1994; Kaner et al., 1999; Zeil et al., 1992b). While most authors have not described forming equally sized subdomains, some have suggested partitioning the input domain into equally sized subdomains (Chan et al., 1998; Weyuker & Jeng, 1991). Finally, there is the difference in the driving force behind testing.
Some researchers have described domain testing strategy as a method of gaining confidence in the program or for proving the correctness of the program (Goodenough & Gerhart, 1975; Howden, 1976; Howden, 1979; Howden, 1981; Howden, 1986; White & Cohen, 1980). Others have described it as a risk-based or failure-based testing approach (Beizer, 1995; Collard, personal communication,

July 22, 2003; Frankl, Hamlet, Littlewood & Strigini, 1998; Gerrard & Thompson, 2002; Hamlet & Taylor, 1990; Hamlet, 2000; Kaner, 2002b; Kaner & Bach, 2003; Podgurski & Yang, 1993; Whittaker & Jorgensen, 2000). In the latter approach, partitions or subdomains are formed anticipating failures, whereas in the former approach the goal is to prove that the program is working well. This means that if the program works properly for a representative of a partition, then it is assumed to give confidence of correctness for all the remaining members in the partition. Testing approaches that concentrate on achieving code coverage, such as the path analysis approach mentioned earlier, fall under this category.
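For a concrete picture of the boundary value analysis mentioned above, here is a small sketch (the 1 to 100 range is again a hypothetical example): for each boundary of an equivalence class, test data is taken on the boundary and just beyond it, where off-by-one faults tend to hide.

```python
# Sketch of boundary value analysis for a hypothetical field accepting
# integers from low to high: values on and just beyond each boundary.

def boundary_values(low, high):
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
```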

1.03 Domain Testing Approach Presented in Training Material


As mentioned before, the training material was developed with the aim of training novice testers well in domain testing. In the training material I developed, I have adopted a procedural black-box approach to teaching and doing domain testing. Kaner defined the proceduralist approach to teaching, saying, "A proceduralist approach tries to create a procedure for everything, and then teach people to do tasks, or to solve problems, by following the right procedure" (e-mail communication, January 31, 2004). Attempts have been made to add a slight flavor of risk-based testing to this procedural approach. The combination technique discussed in the training material is the all-pairs combination technique. I have attempted to incorporate Gagné's nine conditions of learning in the instructional design and Bloom's taxonomy in the design of the evaluation material, which includes exercises and tests.
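The all-pairs idea can be sketched with a naive greedy pass; the configuration parameters below are hypothetical, and real all-pairs tools produce considerably tighter test sets than this illustration. Instead of running every combination of parameter values, we keep only enough test cases that every pair of values from any two parameters appears together at least once.

```python
# Naive greedy sketch of pairwise (all-pairs) combination testing.
from itertools import combinations, product

params = {                      # hypothetical configuration parameters
    "os":      ["windows", "linux", "mac"],
    "browser": ["firefox", "chrome"],
    "locale":  ["en", "fr"],
}

names = list(params)
all_pairs = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va in params[a] for vb in params[b]}

tests, covered = [], set()
for combo in product(*params.values()):
    case = dict(zip(names, combo))
    pairs = {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}
    if pairs - covered:          # keep the case only if it covers a new pair
        tests.append(case)
        covered |= pairs

assert covered == all_pairs      # every value pair is exercised at least once
print(f"{len(tests)} test cases instead of {3 * 2 * 2} full combinations")
```

The greedy pass here is deliberately simple; dedicated pairwise tools use orthogonal arrays or smarter search and shrink the test set much further as the number of parameters grows.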

1.03.01 Organization of the Thesis


Chapter 2: Domain Testing: A Literature Review: This describes different interpretations of domain testing in the literature.

Chapter 3: Instructional Design and Evaluation: A Literature Review: This describes instructional design and evaluation methodologies.

Chapter 4: Instructional Design and Evaluation Strategy for Domain Testing Training: This contains brief descriptions of the instructional materials and their structure and contents and points to the location of the instructional material in the thesis.

Chapter 5: Experiment Design: This chapter describes how the experiments were set up and explains the two experiments that were conducted as part of my thesis work.

Chapter 6: Results and Their Analyses: This chapter presents the results of paper-based and computer-based pretests and posttests conducted in the two experiments. It also presents analyses and interpretation of the results.

Chapter 7: Conclusion: This chapter draws conclusions from the results obtained and their interpretation.

Bibliography: This contains the list of all sources that I have used directly or indirectly for the completion of my thesis.

Appendices: There are 26 appendices, Appendix A through Appendix Z. These contain the various materials that were directly or indirectly used for the purpose of domain testing training. The last three appendices contain evaluation reports of the performance tests by Dr. Cem Kaner, James Bach and Pat McGee, respectively.


Chapter 2: Domain Testing: A Literature Review


As discussed in the introductory chapter, since exhaustive testing is impossible we have to be able to do selective testing, but in a way that the entire population is represented. Partitioning the input domain and selecting best representatives from each partition is one way to achieve this goal. In this testing strategy, called domain testing, the three main tasks are:

1. Dividing or partitioning the set of all possible test cases into partitions based on some criterion.
2. Selecting candidates that best represent each partition.
3. Combining variables in case of programs having multiple variables.
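As a concrete sketch of these three tasks, consider a hypothetical two-variable input form. The variable names, ranges, and class labels below are invented for illustration, and a full cross product stands in for a real all-pairs tool (which would prune combinations for three or more variables):

```python
from itertools import product

# Task 1: partition each variable's domain into equivalence classes.
age_classes = {"valid": range(0, 121), "too_low": [-1], "too_high": [121]}
country_classes = {"valid": ["US", "CA"], "invalid": ["XX"]}

# Task 2: select one representative per partition (here, simply the first member).
age_reps = {name: list(members)[0] for name, members in age_classes.items()}
country_reps = {name: members[0] for name, members in country_classes.items()}

# Task 3: combine variables across partitions.
tests = list(product(age_reps.values(), country_reps.values()))
print(len(tests))  # 3 age classes x 2 country classes = 6 combined tests
```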

2.01 Partitioning of Input Domain


This is the process of dividing the input domain or the set of all possible test cases into partitions such that all test cases in one partition are equivalent to each other with respect to some criterion. Although most of the literature describes partitioning of input domain, similar analysis can be applied to the output domain as well (Kaner et al., 2002; Kaner et al., 1999; Myers, 1979). According to Kaner et al. (2002), test cases can be grouped into an equivalence class if they satisfy the following conditions: (a) they all test the same thing; (b) if one of them catches a bug, the others probably will too; and (c) if one of them doesn't catch a bug, the others probably won't either (p. 36). Partitioning of input domain differs in the literature along the following dimensions:
White or Black:
- White-Box Testing Approach
  - Path Analysis Approach
  - Mutation Testing Approach
- Black-Box Testing Approach/Specification-Based Approach
  - Functional Testing Approach
- Combination of White-Box and Black-Box Testing Approaches

Driving Factor:
- Confidence-based approach
- Risk-based approach

Type of Domain:
- Linear
- Non-linear

Overlapping or Disjoint Subdomains:
- Partitioning into overlapping subdomains
- Partitioning into disjoint subdomains

Size of Subdomains:
- Equally sized subdomains
- No particular sized subdomains

A detailed description of the elements in the above classification of partitioning of the input domain follows.

2.01.01 White or Black


There are some testers who rely solely on the implementation of a program, which demonstrates the actual behavior of a program. We call them the 'white-box testers'. There are others who rely on specification, which describes or is at least supposed to describe the intended behavior of a program. We call this group the 'black-box testers'.

Black-box testers may or may not rely on a specification, and white-box testers often rely on specifications. White-box testers have knowledge of the code, while black-box testers do not. Some others have realized that both implementation and specification are important sources of information for testing. In this context, different approaches to partitioning the input domain have been described in the literature. The following is a classification of the approaches.

2.01.01.01 White-Box Testing Approach


As mentioned before, there are two visible variations of partition testing under this category:
- Path Analysis Approach
- Mutation Testing Approach

2.01.01.01.01 Path Analysis Approach

Some of those who have described the path analysis approach to doing partition testing are Boyer et al. (1975), Clarke and Richardson (1982), DeMillo et al. (1978), Duran and Ntafos (1981), Ferguson and Korel (1996), Goodenough and Gerhart (1975), Hajnal and Forgács (1998), Hamlet (2000), Howden (1976), Howden (1986), Huang (1975), Jeng and Weyuker (1989), Jeng and Weyuker (1994), Koh and Liu (1994), Podgurski and Yang (1993), Weyuker and Jeng (1991), White and Cohen (1980), White and Sahay (1985), Zeil et al. (1992a), and Zeil and White (1981). These are definitions of some basic terms concerned with path analysis:

Path: IEEE Std. 610.12 (1990) defined a path by stating, "(1) In software engineering, a sequence of instructions that may be performed in the execution of a computer program" (pp. 54-55).

Howden (1976) defined a path this way: "A path through a program corresponds to some possible flow of control" (p. 209).

Kaner et al. (1999) defined a path as "a sequence of operations that runs from the start of the program to an exit point. This is also called an end-to-end path. A subpath is a sequence of statements from one place in the program to another. Subpaths are also called paths" (p. 43).

Path Condition: IEEE Std. 610.12 (1990) defined the path condition as "A set of conditions that must be met in order for a particular program path to be executed" (p. 55).

Path Analysis: This has been defined by IEEE Std. 610.12 (1990) as "Analysis of a computer program to identify all possible paths through the program, to detect incomplete paths, or to discover portions of the program that are not on any path" (p. 55).

In the path analysis approach to doing domain testing, partitioning of the input domain is done based on paths. To understand what is meant by path in this context, consider an example of a very simple program:

If x < 10 then
    Event A occurs
Else
    Event B occurs


Depending on what the value of the variable x is, the program would either go down the path that leads to the execution of event A or would go down the path that leads to the execution of event B. In the path analysis approach to doing partition testing, the input domain corresponding to a program would be the set of all paths that the program can take.

For instance, in the above example, there are two possible paths. One path is that event A occurs and the other path corresponds to the possibility that event B occurs. Of course, there is another possible path for all the cases when the value of x is non-numeric. In the above example it is difficult to tell, but normally if there were an error-handling capability incorporated in the program, the path taken in the event of such an error would be whatever is mentioned in the corresponding error-handling construct of the program. In domain testing, the first main task is to partition the input domain into partitions or equivalence classes based on some criterion. As mentioned before, the criterion in the path analysis approach is the path. All members of one partition or subset of the input domain are expected to result in the execution of the same path of the program (Howden, 1976; Weyuker & Jeng, 1991; White & Cohen, 1980). In the example above, the set of all values of variable x less than 10 forms one partition and the set of all values of x greater than or equal to 10 forms another.
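A minimal Python rendering of this example (the function and the sample values are illustrative) shows how each path-based partition maps to exactly one path:

```python
def classify(x):
    """Toy program from the text: one path for x < 10, another otherwise."""
    if x < 10:
        return "A"   # path that executes event A
    return "B"       # path that executes event B

# Path-based partitions: every member of a partition drives the same path.
partition_A = [-5, 0, 9]      # all take the x < 10 path
partition_B = [10, 11, 1000]  # all take the x >= 10 path

assert all(classify(x) == "A" for x in partition_A)
assert all(classify(x) == "B" for x in partition_B)
```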

2.01.01.01.02 Mutation Testing Approach

If a program passed all the tests from a subdomain, then can one really be sure that the program is actually correct? Either the program really is correct over that subdomain or the tester is using a set of ineffective test cases to test the program (DeMillo et al., 1978). This is where mutation testing comes into play. Mutation testing involves creating a set of wrong versions of a program, called mutants, for every subdomain over whose members the original program seems to operate correctly. That is, it is known to have passed all the tests in that subdomain (DeMillo et al., 1978). A mutant of a program is usually formed by modifying a single statement of the original program (Weyuker & Jeng, 1991). The statement that is modified will depend on which portion of the program the associated subdomain corresponds to. Adrion, Branstad and Cherniavsky (1982), DeMillo et al. (1978), Howden (1981), Jeng and Weyuker (1989), Podgurski and Yang (1993), and Weyuker and

Jeng (1991) are among the researchers that have described how mutation testing can be incorporated in the process of achieving partitioning of input domain. So, how are the mutants really used? If testing the mutant gives different results compared to the original program, then it is confirmed that the original program was indeed correct. But if the mutant yields results identical to that of the original program, then there is definitely something amiss (DeMillo et al., 1978). Hence, mutation testing takes domain testing or partition testing a step further. It ensures that when a program passes a set of tests, it really means that the program is correct. However, Howden (1981) has pointed out one outstanding disadvantage of the mutation testing approach--the large number of mutants that one might end up generating for a program. Howden said, "It is estimated that there are on the order of n² mutants for an n-line program. This implies that if a proposed test set T contains t elements it is necessary to carry out on the order of n² to n²·t program executions to determine its completeness" (1981, p. 67).
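The idea can be sketched in Python using the earlier x < 10 example. The mutant below is hypothetical and changes a single relational operator, as the single-statement modification described above:

```python
def original(x):
    return "A" if x < 10 else "B"

def mutant(x):
    # Single-statement change: relational operator < mutated to <=.
    return "A" if x <= 10 else "B"

# A test set that never exercises the boundary cannot tell the two apart...
weak_tests = [0, 100]
print(all(original(x) == mutant(x) for x in weak_tests))  # True: mutant survives

# ...while a boundary value distinguishes them, confirming the test set's power.
print(original(10) == mutant(10))                         # False: mutant killed
```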

2.01.01.02 Black-Box Testing Approach/ Specification-Based Approach


Those who have described the process of domain testing using specifications include Beizer (1995), Chen et al. (2003), Hamlet (1996), Howden (1980), Hutcheson (2003), Jorgensen (2002), Kaner (2002a), Kaner and Bach (2003), Kaner et al. (2002), Kaner et al. (1999), Mayrhauser et al. (1994), Myers (1979), Ostrand and Balcer (1988), Patrick and Bogdan (2000), Podgurski and Yang (1993), Richardson et al. (1989), Reid (1997), and Weiss and Weyuker (1988). Kaner et al. (2002) have suggested doing domain testing by first identifying both input and output variables for every function either by looking at the specification for the program at hand or by looking at the program or prototype from the outside by considering the program or function as a black box. Next, they suggested finding the domain for each variable and partitioning this domain into equivalence classes. After that, they suggested choosing a few candidates from each class that best represent that class.

Myers (1979) suggested associating an equivalence class with every input or output condition mentioned in the specification of the product under test. Myers cited two kinds of equivalence classes associated with every input or output condition:

1. A valid equivalence class consists of all values that serve as valid input to the product or program under test, such that the corresponding input or output condition is satisfied.
2. An invalid equivalence class consists of all erroneous values or invalid inputs.

Ostrand and Balcer (1988) described the classic category partition method of doing functional testing using specifications. The first three tasks in their method are:

1. Analyze the specification. The tester identifies individual functional units that can be tested separately. For each unit, the tester identifies:
   - parameters of the functional unit
   - characteristics of each parameter
   - objects in the environment whose state could affect the functional unit's operation
   - characteristics of each environment object

   The tester then classifies these items into categories that have an effect on the behavior of the functional unit.
2. Partition the categories into choices. The tester determines the different significant cases that can occur within each parameter and environment category.
3. Determine constraints among the choices. The tester decides how the choices interact, how the occurrence of one choice can affect the existence of another, and what special restrictions might affect any choice. (p. 679)
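The first tasks of the category partition method can be sketched as data plus one constraint. The functional unit ("find"), its categories, choices, and the constraint below are all invented for illustration:

```python
# Hypothetical category-partition worksheet for a "find" command.
categories = {
    "pattern length": ["empty", "one char", "many chars"],
    "file state": ["exists", "missing", "empty file"],
}

# Constraint among choices: an empty pattern makes the file state irrelevant,
# so only one file state is kept for it.
def allowed(pattern, file_state):
    return not (pattern == "empty" and file_state != "exists")

frames = [(p, f) for p in categories["pattern length"]
                 for f in categories["file state"] if allowed(p, f)]
print(len(frames))  # 3x3 = 9 combinations, minus 2 excluded by the constraint = 7
```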

Kaner et al. (1999) described an example of a program whose specification includes 64K to 256K RAM memory requirements. In this case, one could identify

three equivalence classes. The first one contains all cases that require operating the program over RAM capacities below 64K. The second class consists of all cases that require operating the program over RAM capacities between 64K and 256K. Finally, the third equivalence class consists of all cases that require operating the program over RAM capacities greater than 256K. Functional testing has been described in the literature as a specific form of specification-based or black-box domain testing, which is described next.

2.01.01.02.01 Functional Testing Approach

In the context of this thesis, a function is defined as: "In programming, a named section of a program that performs a specific task" (Webopedia, 2003, para. 1). IEEE Std. 610.12 defined a function as: "(1) A defined objective or characteristic action of a system or component. For example, a system may have inventory control as its primary function. (2) A software module that performs a specific action, is invoked by the appearance of its name in an expression, may receive input values, and returns a single value" (p. 35). Hamlet et al. (2001), Howden (1981), Howden (1986), Howden (1989), Podgurski and Yang (1993), Vagoun (1996) and Zeil et al. (1992b) are among the researchers that have described the functional testing approach to doing domain testing. Howden's work on functional testing is often cited in the literature as having laid the foundation of black-box testing because the approach he described focuses only on the inputs and outputs of a program's functions and not on how the program executes the functions. According to Podgurski and Yang (1993), "Perhaps the most widely used form of partition testing is functional testing. This approach requires selecting test data to exercise each aspect of functionality identified in a program's requirement specification. The inputs that invoke a particular feature comprise a subdomain" (p. 170).

In the functional approach to doing domain testing, partitioning of the input domain is done based on functions. Functional structures described in the specification are studied and test data is developed around these structures (Howden, 1986). The program at hand is analyzed to identify the functions involved, the input variables involved and their constraints. According to Howden (1981), identifying functions and selecting reliable test cases are two important tasks in functional testing. Depending on what kind of function it is, the approach to deriving a reliable test data set for the corresponding function will be different, since each will have different kinds of errors associated with it (Howden, 1981). One of the five types of functions identified by Howden (1981) based on functional structure is the arithmetic relation function. "Arithmetic relation functions are computed by expressions of the form E1 r E2, where E1 and E2 are arithmetic expressions and r is one of the relational operators <, >, =, ≤, ≥ or ≠" (p. 70). According to Howden (1981), of the many errors that can be associated with this type of function, a simple one is use of an illegal or incorrect relation, which in turn means use of an incorrect relational operator. To test for such an error, a tester would consider two partitions. One corresponds to the set of test cases that have the expressions related by the correct relational operator, and the other partition consists of all test cases that have the expressions related by one of the remaining sets of relational operators.
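A small Python check (illustrative, not taken from Howden) shows that a handful of probe points on, below, and above the boundary x == y suffice to distinguish the correct operator < from every incorrect alternative:

```python
import operator

# Specified relation: E1 < E2. The alternatives model "incorrect relational
# operator" errors of the kind described above.
specified = operator.lt
alternatives = [operator.le, operator.gt, operator.ge, operator.eq, operator.ne]

# Probes chosen on, below, and above the boundary x == y.
probes = [(5, 5), (3, 7), (7, 3)]

for alt in alternatives:
    # At least one probe yields a different truth value, exposing the error.
    assert any(specified(x, y) != alt(x, y) for x, y in probes)
print("every incorrect operator is distinguished by some probe")
```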

2.01.01.03 Combination of Black-Box and White-Box Testing Approaches


As previously mentioned, there are also a few researchers who have talked about leveraging the advantages of both white-box and black-box testing strategies and described a combined approach to doing domain testing. Among the proponents of such a combined approach have been Binder (1999), Chen et al. (2000), Goodenough and Gerhart (1975), Hamlet and Taylor (1990), Howden (1980a),

Howden (1980b), Howden (1982), Richardson and Clarke (1981), Weyuker and Ostrand (1980) and White (1984). In Weyuker and Ostrand's (1980) approach to doing domain testing, both program-dependent sources (such as the underlying code) and program-independent sources (such as the program specifications) are used. Program-dependent sources lead to path domains and program-independent sources lead to problem domains. These two domains are intersected to get the final subdomains. Since a program's specification describes the intended behavior of the program and the implementation represents the actual behavior, Weyuker and Ostrand (1980) suggested that the differences between the path and problem domains are good places to look for errors because that is where the inconsistency between the specification and the implementation lies.

2.01.02 Driving Factor


Are we trying to gain confidence that the program works well, or are we trying to find failures? To achieve confidence in a program, the kind of test cases selected will be different from the ones chosen to break the program. This in turn will affect how a tester forms partitions of the input domain. In this context, there are two general approaches to doing domain testing, which are explained in the following sections.

2.01.02.01 Confidence-Based Approach


Some researchers have described partition testing as a method of gaining confidence in the correctness of the program, rather than testing the program to specifically make it fail in as many ways as possible (Goodenough & Gerhart, 1975; Howden, 1976; Howden, 1979; Howden, 1981; Howden, 1986; White & Cohen, 1980). Most testing strategies that strive for code coverage fall under this category since the main goal there is to obtain coverage and in turn gain confidence

in the correctness of the program. Code coverage-based testing methods like the path analysis approach seem to give confidence in the program since they test all or most portions of the code. This approach will probably miss all or some of the bugs and issues that would have been revealed if the criterion for testing was to expose all risky areas of the program and make the program fail in many interesting and challenging ways.

2.01.02.02 Risk-Based Approach

There are other researchers who have talked about strategizing domain testing effort based on risks. In other words, they described a fault-based or suspicion-based approach to forming partitions (Beizer, 1995; Collard, personal communication, July 22, 2003; Frankl et al., 1998; Gerrard & Thompson, 2002; Hamlet, 2000; Hamlet & Taylor, 1990; Kaner, 2002b; Kaner & Bach, 2003; Myers, 1979; Podgurski & Yang, 1993; Whittaker & Jorgensen, 2002). Gerrard and Thompson (2002) defined risk as: "A risk threatens one or more of a project's cardinal objectives and has an uncertain probability" (p. 14). Kaner (2002b) characterized risk as: "Possibility of suffering loss or harm (probability of an accident caused by a given hazard)" (slide 8). In other words, a statement describing a risk is an assertion about how a program or system could fail. Collard said, "I often say in classes that a list of assumptions is a list of risk factors with plans for how we are going to manage them. I also emphasize the criticality of assumptions (how much does each one matter?), which is another way of saying we need to prioritize based on risk" (personal communication, July 22, 2003). Most of those who describe risk-based approaches to doing domain testing do not specifically describe forming equivalence classes or partitions based on risks, but their approach describes how risks should be identified and how test data should be selected based on identified risks. Some researchers have described how data points that represent the most risky areas in an equivalence class should

be selected (Beizer, 1995; Collard, personal communication, July 22, 2003; Hamlet & Taylor, 1990; Kaner, 2002b; Kaner & Bach, 2004; Myers, 1979). An example is selection of test data lying on the boundaries of equivalence classes. However, Hamlet and Taylor (1990) and Kaner and Bach (2004) have not specifically described forming partitions or equivalence classes based on risks or anticipated failures. Kaner and Bach (2004) identified specific tasks involved in the risk-based domain testing approach:

The risk-based approach looks like this:
- Start by identifying a risk (a problem the program might have).
- Progress by discovering a class (an equivalence class) of tests that could expose the problem.
- Question every test candidate:
  - What kind of problem do you have in mind?
  - How will this test find that problem? (Is this in the right class?)
  - What power does this test have against that kind of problem? Is there a more powerful test? A more powerful suite of tests? (Is this the best representative?)
- Use the best representatives of the test classes to expose bugs. (part 5, slide 3)

Hamlet and Taylor (1990) suggested development of fault-revealing subdomains. They described the partition testing method to doing domain testing as a failure-based approach. They also asserted that a good partition testing method will help create subdomains, each of which is associated with a particular failure, and that testing samples from a subdomain should enable detection of the associated failure. According to Kaner (2002b), risk-based domain testing leads to development of powerful tests and optimal prioritization, assuming that correct risks are first identified and then prioritized. The hazard with using the risk-based approach is that testers might miss certain risks because they might not think they are likely or just not be aware of them (Kaner, 2002b).

Whittaker and Jorgensen (2002) described an attack-based approach to doing domain testing. They argued that testing any software along four (input, output, storage and computational) dimensions with the right set of attacks will, to a large extent, ensure finding the major bugs in software.

2.01.03 Linear or Non-Linear


Not all domains of programs are linear in nature. Kaner and Bach (2003) described linearizable and non-linearizable domains. Linearizable variables are ones whose values can be mapped onto a number line, such as a variable representing a range of numbers. On the other hand, non-linearizable variables are those whose values cannot be mapped onto a number line, such as printers (Kaner et al., 1999; Kaner & Bach, 2003). Kaner and Bach (2003) also characterized linearizable variables as variables whose values represent ordered sets and non-linearizable variables as ones whose values represent non-ordered sets. Most of the literature on domain testing refrains from wandering into the territory of non-linear domains, but there are some who do discuss it. Jeng and Weyuker (1994) described a simplified domain testing strategy that is applicable to non-linear domains as much as it is to linear domains. Zeil et al. (1992b) depicted detection of linear errors in non-linear domains. According to both Jeng and Weyuker (1994) and Zeil et al. (1992b), it does not matter whether the domain is continuous or discrete.

2.01.04 Overlapping or Disjoint Subdomains


Some researchers have suggested that partitioning of input domain should result in non-overlapping or disjoint partitions since their analysis is based on a pure mathematical model of doing partitioning (Howden, 1976; Jorgensen, 2002; Myers, 1979; White & Cohen, 1980). Some people talk about having disjoint or non-overlapping partitions because according to them, if two partitions (for example, A

and B) overlap and someone were to select test data from the area that forms the intersection of the two partitions, how would one decide whether to consider the test data as belonging to partition A or to partition B, or both? Ntafos (1998) proposed that one way to resolve the problem of overlapping subdomains is to further partition the overlapping portion until one gets disjoint subdomains. Other researchers have relaxed their requirements and considered the possibility of overlapping subdomains in their analyses, observing that in practice, overlapping subdomains are very probable (Jeng & Weyuker, 1989; Kaner et al., 1999; Weyuker & Jeng, 1991).
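Ntafos's refinement can be sketched with Python sets; the example partitions are invented for illustration:

```python
# Two overlapping subdomains.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

# Refine into disjoint pieces: A only, the overlap, and B only.
disjoint = [A - B, A & B, B - A]

# Every input now belongs to exactly one subdomain.
assert sum(len(s) for s in disjoint) == len(A | B)
print(disjoint)  # [{1, 2}, {3, 4}, {5, 6}]
```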

2.01.05 Size of Subdomains Equally Sized or Unequally Sized?


Most partition testing discussions in the literature do not necessitate forming equally sized domains because the size of the subdomains is an irrelevant criterion for them. However, Chan et al. (1998) and Weyuker and Jeng (1991) have asserted that forming equally sized partitions actually helps improve the effectiveness of detecting failures by partition testing over random testing. There is a detailed discussion about this under section 2.02.02.

2.02 Selecting Representatives from Subdomains


Partitioning the input domain is only half the battle. The next task is to select candidates from each partition to represent their respective partitions. The literature outlines some visible distinctions in how the selection of representatives can be done:

- Random Selection
- Proportional Partition Testing
- Risk-Based Selection
  - Boundary Value Analysis
  - Special Value Testing
  - Robustness Testing
  - Worst Case Testing

The following section contains detailed descriptions of each of the items listed above.

2.02.01 Random Selection


Some researchers have taken the word equivalence in the equivalence class quite literally. They have considered all the elements of an equivalence class to be equivalent in all respects, so much so that they would not know how to prefer a member over any other member of an equivalence class. Hence, they have suggested random selection of one or more members from each equivalence class. Their theory is that the program under test will either pass (behave as expected) over all members of an equivalence class or fail (behave differently from what is expected) over all. According to them, it really does not matter which one is chosen from a partition (Howden, 1976; Ntafos, 1998; Podgurski & Yang, 1993; Weyuker & Jeng, 1991; Weyuker & Ostrand, 1980). In the literature, those who describe such a random process of choosing best representatives are usually the ones that do not specifically describe a risk-based approach to doing domain testing.
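Under this view, selection reduces to one random pick per class, as in this sketch (the classes and their members are invented for illustration):

```python
import random

# Each equivalence class is treated as homogeneous: any member is assumed
# as likely as any other to expose a failure, so a random pick suffices.
classes = {"negative": [-9, -1], "small": [0, 5, 9], "large": [10, 99]}
suite = {name: random.choice(members) for name, members in classes.items()}

assert all(suite[name] in classes[name] for name in classes)
print(suite)
```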

2.02.02 Proportional Partition Testing


This method is a slight variation of the above-defined random selection method. Some researchers have portrayed proportional partition testing, a partition testing strategy in which the test data selection is random but the number of test cases selected from a subdomain depends on the probability of failure of inputs in the subdomain. In other words, if we were to assume that each input in a subdomain is equally likely to occur, then the number of test cases selected would depend on the size of the subdomain (Chan et al., 1997; Chan et al., 1998; Chen et al., 1999; Chen & Yu, 1994; Chen & Yu, 1996; Leung & Chen, 2000; Ntafos, 1998; Ntafos, 2001).
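A minimal sketch of the proportional idea follows, assuming a uniform input profile so that subdomain size stands in for failure probability. A real allocator (such as the one Ntafos describes later in this section) would also distribute any rounding remainder:

```python
def allocate(sizes, n):
    """Allocate a budget of n test cases in proportion to subdomain sizes.
    Simple rounding; totals can drift from n for awkward proportions."""
    total = sum(sizes)
    return [round(n * s / total) for s in sizes]

print(allocate([100, 300, 600], 10))  # -> [1, 3, 6]
```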


There have been several discussions in the literature about the relative merits of partition testing and pure random testing. Pure random testing is not random selection of test cases from subdomains, but is just random selection of test cases from the entire input domain without forming any partitions. Random and partition testing have been found to be better than each other under certain conditions, and the two are almost equivalent under certain other conditions. Proportional partition testing has mostly been discussed as a method that evolved in order to improve partition testing and to make its effectiveness at finding failures greater than that of random testing. Weyuker and Jeng (1991) noted that when the original partition testing method is refined so that we form all subdomains of equal size and then select an equal number of test cases from each subdomain, pure random testing can never beat partition testing in terms of effectively finding failures. This is assuming that either the number of subdomains formed is very large or the number of test cases chosen from each subdomain is very large compared to the size of the subdomain itself. Chan et al. (1998) cited a modified version of the proportional partition testing method that somewhat blends the refined version of partition testing described by Weyuker and Jeng (1991) with the traditional proportional partition testing method. Chan et al. (1998) called their method the Optimally Refined Proportional Sampling (ORPS) strategy. This method involves partitioning the input domain into equally sized partitions as described by Weyuker and Jeng (1991), and then selecting one test case at random from each equally sized subdomain. Chan et al. (1998) noted that the results of trying out their ORPS strategy on several programs seemed positive enough to recommend this method as a subdomain testing strategy. 
Ntafos (1998) illustrated an experiment in which he compared random testing with proportional partition testing. When doing proportional partition testing, he considered 50 subdomains. He applied the two strategies starting with 50 test cases, and then increased to 2,000 total test cases in increments of 50 cases.

He referred to the number of test cases as n, the probability of detecting at least one failure in subdomain i as Pi, and the number of subdomains as k. Ntafos (1998) found the following: The experiment was repeated 1,000 times using random values for the probabilities and the failure rates (but keeping the overall failure rate about the same). Test case allocation for proportional partition testing was done by starting with an initial allocation of ni = max(1, floor(n·Pi)) and allocating each of the remaining test cases so as to minimize the percentage difference between the current and the true proportional allocation. With 50 test cases we have that proportional partition testing was outperformed by random testing in 154 out of 1,000 cases but the probability of detecting at least one error is 22.5% higher for proportional partition testing than random testing (averaged over the 1,000 cases). (pp. 43-44)

With 2,000 test cases, proportional partition testing is outperformed in only 35 cases but the two strategies perform equally well in 581 cases. The probability of detecting at least one error is now only 0.06% higher for proportional partition testing. The variation of the various values with n is mostly the expected one; note that the number of cases in which random testing outperforms proportional partition testing tends to decrease but the decrease is not monotonic. It is also somewhat surprising that even with 2,000 test cases random testing still outperforms proportional partition testing in a significant number of cases. (p. 44) Ntafos (1998) repeated his experiment with 100 subdomains. This time, he started with 100 test cases and increased to a total of 4,000 test cases in increments of 100 cases. The results are similar; even with 4,000 test cases, there is still one case when random outperforms proportional partition testing while the difference in performance is only 0.009% (p. 44). Ntafos (1998) concluded that if it requires at least one test case selected from each subdomain and thousands of test cases overall to prove that proportional 27

partition testing beats random testing by an insignificant amount, then the effectiveness of proportional partition testing when compared with random testing is questionable not only in terms of cost (the overhead of generating thousands of test cases) but also in terms of how effective it is in detecting failures. Ntafos (1998) also pointed out that partition testing strategies that rely on random selection of representatives from subdomains that are considered homogeneous are ineffective in finding certain errors when compared with testing strategies that involve knowledge-based selection of test cases. These would include boundary values in domain testing, selection of test cases based on anticipated failures or risk-prone areas in the case of a risk-based approach, and selection of test cases based on important features or functions of the software. He further argued that forming perfectly homogeneous subdomains has been practically impossible, so random selection of test cases from so-called homogeneous subdomains might not be a great idea after all.

2.02.03 Risk-Based Selection


Here we select test cases based on risk. Section 2.01.02.02 discussed how risk-based testing strategies determine risks and select test cases based on these risks. One of the most familiar risk-based test case selection strategies is called boundary value analysis. Jorgensen (2002) also identified certain other risk-based test selection strategies, such as special value testing and worst case testing.

2.02.03.01 Boundary Value Analysis


Experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes (Myers, 1979, p. 50). A boundary describes a change-point for a program. The program is supposed to work one way for anything on one side of the boundary. It does something different for anything on the other side (Kaner et al., 1999, p. 399).

Hutcheson (2003) described boundary value analysis as one of the most important testing techniques. Boundary value analysis is a test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside and outside boundaries, typical values, and error values (p. 316). Those who practice boundary value analysis believe that areas on the boundary and around it are risky areas. This, in fact, is a risk-based strategy. According to Kaner et al. (2002), in boundary testing the values of an equivalence class are mapped onto a number line and the boundary values, which are the extreme endpoints of the mapping and the values just beyond the boundaries, are chosen as the best representatives of that equivalence class. A best representative of an equivalence class is a value that is at least as likely as any other value in the class to expose an error in the software (p. 37). Kaner et al. (1999) depicted different kinds of boundaries, some of which are described below.

Numeric boundaries: lower and upper boundaries defined by a range of values or a single boundary defined by equality.

Boundaries on numerosity: boundaries defined by the length (or ranges of length) of the elements or the number of constituent characters in the elements.

Boundaries in loops: minimum and maximum number of times a loop can execute will determine the lower and upper boundaries, respectively, for the loop iterations.

Boundaries within data structures: boundaries defined by the lower and upper bounds of structures that store data.

Boundaries in space: boundaries defined by the bounds of objects in two-dimensional or three-dimensional space.

Boundaries in time: boundaries defined by time-determined tasks.

Hardware-related boundaries: boundaries defined by the upper and lower bounds of hardware needs and requirements.

According to Hutcheson (2003), boundary value analysis is based on the belief that if a system works correctly for the boundary values, it will also work correctly for all the values within the range. This makes boundary values the most important test cases. Hutcheson (2003) explained that applying boundary value analysis to a month variable will yield the following six test cases: {0, 1, 2, 11, 12, 13}. The test cases or data points 1 and 12 are the minimum and maximum values, respectively, that a month variable can take. Test cases 0 and 2 are just outside and just inside the boundary defined by test case 1, respectively. Similarly, test cases 11 and 13 are just inside and just outside the boundary defined by test case 12, respectively. Hutcheson (2003) further noted that test cases 2 and 11 seem redundant, as both of these serve as data points from the region within the two endpoints. The researcher therefore suggested replacing the two test cases 2 and 11 with the mid-point data value 6, which makes {0, 1, 6, 12, 13} the final set of test cases due to boundary value analysis. Some other researchers have not recommended using any values other than those on the boundaries and ones that are just beyond the boundaries, since the others would simply be redundant. According to them, in the aforementioned example the test case 6 would also be redundant (Kaner et al., 2002; Kaner et al., 1999; Kaner & Bach, 2003; Myers, 1979). Jorgensen (2002) pointed out one notable disadvantage of boundary value analysis: it works well only when there are independent variables that belong to domains with well-defined boundaries. Jorgensen (2002) gave an example of a function that evaluates the next date after the current date. Applying boundary value analysis to the date variable of this function alone will miss errors related to leap years. Also, boundary value analysis cannot be applied to non-linearizable discrete variables.
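Hutcheson's month example can be generated mechanically. The following sketch is my own illustration of the idea, not code from any of the cited sources; it assumes an integer variable with a single contiguous valid range and produces Hutcheson's variant, in which the two "just inside" points are replaced by one interior value:

```python
def boundary_values(lo, hi):
    """Candidate test points for an integer range [lo, hi]:
    just below and on each boundary, just above the upper boundary,
    plus a single mid-point standing in for the interior values."""
    return sorted({lo - 1, lo, (lo + hi) // 2, hi, hi + 1})

# Month variable, valid range 1..12:
print(boundary_values(1, 12))  # [0, 1, 6, 12, 13]
```

A tester who prefers the stricter view attributed to Kaner, Myers, and others would simply drop the mid-point term from the set.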

2.02.03.02 Special Value Testing


Jorgensen (2002) described this strategy as a widely practiced form of functional testing. According to him, in this strategy a tester tests the program or software based on their knowledge of the problem domain. Testers who have tested similar software before or are familiar with its domain of functionality have their own lists of encountered issues and risky areas for such software and the corresponding problem domain. Such testers know where the most risk-prone areas in the software are. Based on their knowledge and experience, they would then select such special test cases that address those risks and reveal the vulnerability of the corresponding risk-prone areas. Jorgensen (2002) noted that for the date example described in the previous section, special value testing would generate test cases that test for leap year risks that would be missed by the normal boundary value analysis technique.

2.02.03.03 Robustness Testing


Jorgensen (2002) described robustness testing as a simple extension of boundary value analysis. He explained that the main focus here is on outputs. This kind of testing is done to determine how well the program handles situations in which the value of the expected output exceeds the maximum tolerated or perhaps falls below the minimum required. Hence, this testing would involve test case generation that yields those values for input variables that push the expected values of the output variables to the extreme and beyond.

2.02.03.04 Worst Case Testing


Jorgensen (2002) illustrated this as an extreme version of boundary value analysis. In this case, one would test using various combinations of test cases that were generated for individual variables using boundary value analysis. The intention here is to see what happens to the software when variables with such extreme values interact together.

Jorgensen (2002) also asserted that this kind of testing is useful when variables are known to interact heavily with each other. However, it has shortcomings similar to those of boundary value analysis when dependent variables are involved. Beizer (1995) contended that extreme-points combination testing might not be an effective test case generation technique: the number of combinations increases exponentially with the number of input variables, and when dependent variables are involved, many of the generated tests might be meaningless. For example, certain combinations might be impossible due to dependency relationships among the input variables.
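Worst case testing, as described above, is just the cross-product of each variable's boundary values. A minimal sketch (the variable names and boundary sets are illustrative, not from the cited sources) makes the exponential growth Beizer warned about visible:

```python
from itertools import product

def worst_case_tests(*boundary_sets):
    """All combinations of per-variable boundary values.
    The suite size is the product of the set sizes, so it grows
    exponentially with the number of variables."""
    return list(product(*boundary_sets))

day_bounds = [0, 1, 31, 32]     # around a hypothetical 1..31 range
month_bounds = [0, 1, 12, 13]   # around a hypothetical 1..12 range
suite = worst_case_tests(day_bounds, month_bounds)
print(len(suite))  # 16 combinations for just two variables
```

Note that some generated combinations (e.g., day 32 with month 13) are invalid on their face, which is exactly the kind of meaningless test Beizer's criticism anticipates when variables are dependent.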

2.02.04 Which Test Case Selection Method Should We Use?


This thesis research has concluded that the risk-based strategy is the best strategy, especially where cost and effective failure detection are concerned. Random testing might be used as a supplement and might be valuable in high-volume automated test case execution. High-volume automated random testing would be inexpensive since it is automated, and having hundreds or thousands of random test cases might help in finding any bugs that were not revealed by the risk-based strategy. In conclusion, risk-based selection of test cases is something that definitely needs to be done, but high-volume random test selection might be done in addition if time and resources permit, especially for regression testing. Kaner et al. (2002) have described regression testing as follows: Regression testing involves reuse of the same tests, so you can retest (with these) after change. There are three kinds of regression testing. You do bug fix regression after reporting a bug and hearing later on that it's fixed. The goal is to prove that the fix is no good. The goal of old bugs regression is to prove that a change to the software has caused an old bug fix to become unfixed. Side-effect regression, also called stability regression, involves retesting of substantial parts of the product. The goal is to prove that the

change has caused something that used to work to now be broken. (pp. 40-41)

2.03 Testing Multiple Variables in Combination


While testing multiple variables together, a test case represents a combination of input values of these multiple variables. As noted in the introductory chapter, one would ideally test all possible combinations of inputs. But this is practically impossible, because even for a normal commercial program with just a few variables, the full set of possible combinations leads to combinatorial explosion. The following are some of the techniques described in the literature that involve combination testing of multiple variables with a reduced test case set:

Cause-effect graphs

Combinatorial testing using input-output analysis

Pairwise or orthogonal arrays testing

All pairs combination testing

Weak robust equivalence class testing

The following sections include brief discussions of each of the above-listed items.

2.03.01 Cause-Effect Graphs


Bender (2001), Elmendorf (1973), Elmendorf (1974), Elmendorf (1975), Myers (1979), and Nursimulu and Probert (1995) described cause-effect graphing as a combination testing technique. To start with, using the specification of the program under test, one identifies causes, effects and constraints due to the external environment. Next, Boolean graphs are formed with causes and effects as the nodes and the links joining causes and the respective effects that represent the relationship between the causes and effects. The graph is then traced to build a decision table, which is used to produce test cases.
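The graph-to-decision-table step can be sketched by brute force: enumerate every truth assignment of the causes and compute each effect from its Boolean rule. The causes, effect, and rule below are a made-up illustration, not an example from the cited sources:

```python
from itertools import product

def decision_table(cause_names, effects):
    """Enumerate all true/false settings of the causes and evaluate
    each effect's Boolean rule; every row is a candidate test case."""
    rows = []
    for values in product([False, True], repeat=len(cause_names)):
        env = dict(zip(cause_names, values))
        rows.append((env, {name: rule(env) for name, rule in effects.items()}))
    return rows

# Hypothetical spec: a withdrawal is accepted only if the amount is
# positive AND the account is active.
table = decision_table(
    ["amount_positive", "account_active"],
    {"accepted": lambda c: c["amount_positive"] and c["account_active"]},
)
for causes, effect in table:
    print(causes, "->", effect)
```

Ostrand and Balcer's complexity objection, discussed next, is easy to see here: with n causes the table has 2^n rows, so real cause-effect tools must prune rows using the constraints identified from the specification.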

According to Ostrand and Balcer (1988), the cause-effect graphing technique can get very complicated and very difficult to implement, especially when the number of causes is too large.

2.03.02 Pairwise / Orthogonal Arrays Testing


One of the solutions to combinatorial explosion is pairwise testing. Pairwise testing (or 2-way testing) is a specification-based testing criterion, which requires that for each pair of input parameters of a system, every combination of valid values of these two parameters be covered by at least one test case (Lei & Tai, 1988, p. 1). Bolton (2004) defined an orthogonal array (OA) as: An orthogonal array has specific properties. First, an OA is a rectangular array or table of values, presented in rows and columns, like a database or spreadsheet. In this spreadsheet, each column represents a variable or parameter (About Orthogonal Arrays section, para. 2). The value of each variable is chosen from a set known as an alphabet. This alphabet doesn't have to be composed of letters; it's more abstract than that; consider the alphabet to be "available choices" or "possible values". A specific value, represented by a symbol within an alphabet, is formally called a level. That said, we often use letters to represent those levels; we can use numbers, words, or any other symbol. As an example, think of levels in terms of a variable that has Low, Medium, and High settings. Represent those settings in our table using the letters A, B, and C. This gives us a three-letter, or three-level alphabet. At an intersection of each row and column, we have a cell. Each cell contains a variable set to a certain level. Thus in our table, each row represents a possible combination of variables and values (About Orthogonal Arrays section, para. 3-4)

Hence, a row in an orthogonal array would represent one possible combination of values of the existing variables and all the rows together would comprise the set of all possible combinations of values for the variables. The combinations generated due to the orthogonal array strategy can quickly become overwhelming and difficult to manage, especially with today's normal commercial programs that have hundreds of variables. Pairwise testing tends to alleviate this problem somewhat. It involves testing variables in pairs and without combining them with other variables. The pairwise strategy is helpful if the tester is trying to test for risks associated with relationships that exist among the pairs, but not otherwise (Kaner, 2002c). As pairwise testing has been described in the literature, it seems that the test cases used are actually a subset of the set of all test cases that are generated by orthogonal array testing.
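The balance property that makes an array "orthogonal" can be checked directly. The L4 array below, for three two-level factors, is a standard textbook example; the check (my own sketch, not code from the cited sources) confirms that any two columns, taken together, contain every value pair the same number of times:

```python
from itertools import combinations
from collections import Counter

# L4(2^3): four runs covering three two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Orthogonality: project onto any two columns and each of the four
# possible value pairs appears the same number of times (here, once).
for i, j in combinations(range(3), 2):
    counts = Counter((row[i], row[j]) for row in L4)
    print(f"columns {i},{j}: {dict(counts)}")
    assert all(n == 1 for n in counts.values())
```

Four runs instead of the 2^3 = 8 exhaustive combinations, while still exercising every pair of factor levels, is the economy this family of techniques offers.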

2.03.03 Combinatorial Testing Using Input-Output Analysis


Schroeder and Korel (2000) described the input-output analysis approach to doing combinatorial testing, asserting that the credibility of pairwise testing or orthogonal arrays testing in terms of fault-detecting capability is quite doubtful since it has not been proven that the reduced data set resulting from these methods actually has a good failure-detection rate. They suggested reducing the set of all possible combinations without losing the ability to detect failures in the given program. Schroeder and Korel (2000) observed that since not every input to a program affects every possible output of the program, if one can identify the influencing subset of input values for every possible output of a program, the number of combinations of input-output for a particular output will be significantly lower compared to the set of all possible input-output combinations for that output. They drew input-output relationship diagrams that have inputs at the top and outputs at the bottom and arrows going down from inputs to the corresponding outputs they affect. After doing this, for every output they listed its influencing inputs as columns of a table. Next, they filled in the rows with values of the inputs

that lead to the corresponding output. After they had done this for each output, they used their brute-force algorithm to form a minimal set of combinations of inputs that best represent each of the outputs. The two researchers considered their method better than pairwise or orthogonal testing.
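The core observation, that combining only the inputs that influence each output shrinks the combination space, can be sketched as follows. The input names, domains, and influence map are invented for illustration, and the sketch only counts combinations; it does not reproduce Schroeder and Korel's minimization algorithm:

```python
from itertools import product

# Hypothetical program with four three-valued inputs; each output is
# influenced by only two of them.
domains = {"a": [0, 1, 2], "b": [0, 1, 2], "c": [0, 1, 2], "d": [0, 1, 2]}
influences = {"out1": ["a", "b"], "out2": ["c", "d"]}

full = 1
for vals in domains.values():
    full *= len(vals)
print("exhaustive combinations:", full)  # 81

for output, inputs in influences.items():
    combos = list(product(*(domains[n] for n in inputs)))
    print(output, "needs", len(combos), "combinations")  # 9 each
# Because out1 and out2 share no influencing inputs, their two 9-row
# tables can be merged side by side into 9 test cases instead of 81.
```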

2.03.04 All Pairs Combination


Yet another solution to combinatorial explosion is all pairs combination. This is an extension of pairwise testing, but it yields more combinations. According to Kaner et al. (2002), Every value of every variable is paired with every value of every other variable in at least one test case (p. 54). If there are n number of variables in a program, then combinations generated due to the all pairs combination technique will contain n*(n-1)/2 pairs in each combination (Kaner et al., 1999). Cohen, Dalal, Parelius and Patton (1996) considered the all pairs approach to be far better than the orthogonal arrays approach. They noted that in the pairwise version of orthogonal testing, every pair of values must occur an equal number of times, which makes this combinatorial approach very difficult to implement in practice. They gave an example to prove their point. For example, for 100 parameters with two values each, the orthogonal array requires at least 101 tests, while 10 test cases are sufficient to cover all pairs (p. 87). However, this technique is only applicable to independent variables (Kaner & Bach, 2004, part 19).
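The quoted criterion, every value of every variable paired with every value of every other variable in at least one test case, can be stated as executable code. The checker below and its three-binary-variable suite are my own illustration, not from the cited sources:

```python
from itertools import combinations, product

def covers_all_pairs(tests, domains):
    """True iff, for every pair of variables, every pair of their
    values occurs together in at least one test case."""
    for i, j in combinations(range(len(domains)), 2):
        needed = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in tests}
        if needed - seen:
            return False
    return True

domains = [(0, 1), (0, 1), (0, 1)]
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all_pairs(suite, domains))      # True: 4 tests, not 8
print(covers_all_pairs(suite[:2], domains))  # False: pairs missing
```

This also makes the n*(n-1)/2 figure concrete: each test case over n variables covers n*(n-1)/2 of the required variable pairs at once, which is why small suites can satisfy the criterion.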

2.03.05 Weak Robust Equivalence-Class Testing


Jorgensen (2002) cited four forms of equivalence class testing involving multiple variables:

Weak Normal Equivalence Class Testing: It is called normal because only those test cases that contain valid values for each variable in the input combination will be considered. It is called weak because not all possible


combinations of valid equivalence classes of variables involved will be considered.

Strong Normal Equivalence Class Testing: This is called normal for the reasons stated above and it is called strong because there will be at least one test case for each combination of valid equivalence classes of input variables involved.

Weak Robust Equivalence Class Testing: This is called robust because there is at least one variable in an input combination whose value is a representative of an invalid equivalence class of that variable. This method is weak because in a given combination of input variables, only one variable has its value coming from an invalid equivalence class.

Strong Robust Equivalence Class Testing: As mentioned before, the robust part comes from consideration of invalid values. The strong part refers to the fact that a single test case has multiple variables with values coming from their invalid equivalence classes.
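The difference between the weak and strong "normal" forms can be made concrete. In the sketch below (the variables, classes, and representative values are invented), strong normal takes the full cross-product of valid-class representatives, while weak normal only needs enough tests to use each class at least once:

```python
from itertools import product

# Hypothetical valid equivalence-class representatives per variable.
valid = {
    "age": [20, 40, 60],       # three valid classes
    "salary": [1000, 50000],   # two valid classes
}
cols = list(valid.values())

# Strong normal: one test per combination of valid classes.
strong = list(product(*cols))
print(len(strong))  # 6 test cases

# Weak normal: cover every class with a minimal suite; pad shorter
# lists by repeating their last representative.
width = max(len(c) for c in cols)
weak = list(zip(*[c + [c[-1]] * (width - len(c)) for c in cols]))
print(weak)  # [(20, 1000), (40, 50000), (60, 50000)]
```

The robust forms extend the same idea by adding representatives drawn from invalid classes, one invalid variable per test in the weak robust form, several at once in the strong robust form.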

2.03.06 When to Use What Combination Technique?


According to Kaner and Bach (2004), when independent variables are involved, all pairs is perhaps the most effective combination technique to use because it generates a minimal number of combinations when compared with other combination techniques, such as orthogonal and pairwise. It also considers several pairs of multiple variables simultaneously. When dependent variables are present in a program, then the cause-effect graphing technique can be used, although this is quite a complex combination technique (part 19). Kaner and Bach (2004) also presented a method of combining dependent variables by constructing relationship tables for variables in a program and

analyzing the dependency relationships existing between them to generate only meaningful combinations (part 19). Hence, if there are both independent and dependent variables in a program, one might choose to apply the all pairs method to the independent variables and then separately test the dependent variables. This is what I have done in my domain testing training. The variables are first categorized as independent or dependent. The independent variables are combined using the all pairs combination technique and then the dependent variables, if there are any, are tested separately using dependency relationship tables.


Chapter 3: Instructional Design and Evaluation: A Literature Review


3.01 Definitions
Instruction: Instruction is the intentional facilitation of learning toward identified learning goals (Smith & Ragan, 1999, p. 2).

By instruction I mean any deliberate arrangement of events to facilitate a learners acquisition of some goal (Driscoll, 2000, p. 25).

Instruction is seen as organizing and providing sets of information and activities that guide, support and augment students' internal mental processes (Dick, L. Carey & J.O. Carey, 2001, p. 5).

Learning: Learning has occurred when students have incorporated new information into their memories that enables them to master new knowledge and skills (Dick et al., 2001, p. 5).

Driscoll (2000) contended that there are certain basic assumptions made in the literature about learning. First, they refer to learning as a persisting change in human performance or performance potential ... Second, to be considered learning, a change in performance or performance potential must come about as a result of the learner's experience and interaction with the world (p. 11).

Instructional Design: The term instructional design refers to the systematic and reflective process of translating principles of learning and instruction into plans for instructional materials, activities, information resources and evaluation (Smith & Ragan, 1999, p. 2).

Reiser and Dempsey's (2002) definition of instructional design is: Instructional design is a system of procedures for developing education and training programs in a consistent and reliable fashion. Instructional design is a complex process that is creative, active, and iterative (p. 17).

Learning Theory: A learning theory, therefore, comprises a set of constructs linking observed changes in performance with what is thought to bring about those changes (Dick et al., 2001, p. 11). Driscoll (2000) commented about what the focus of any learning theory should be, saying, Learning requires experience, but just what experiences are essential and how these experiences are presumed to bring about learning constitute the focus of every learning theory (p. 11).

3.02 Learning Outcomes


Why do we learn? We learn to achieve some desired outcome, perhaps to attain some new skills. There are different kinds of outcomes or skills that one can achieve or aim to achieve via the process of learning. The performance of a learner indicates the kind of outcomes that have been achieved due to learning. Krathwohn, Benjamin and Masia (1956) stated almost five decades ago that all instructional objectives, and hence learning outcomes, fall under three domains:

Cognitive: tasks that require intellect, recollection of learned information, combining old ideas to synthesize new ones, etc.

Affective: deals with emotions, attitudes, etc.

Psychomotor: tasks requiring muscular movement, etc.

Later in the literature, these three domains were expanded upon and spread out across five different skills or learning outcomes. The following five learning outcomes are described in the literature (Bloom, Hastings & Madaus, 1971; Dick &


Carey, 1985; Dick et al., 2001; Driscoll, 2000; Gagne, Briggs & Wager, 1988; Morrison, Ross & Kemp, 2004; Reiser & Dempsey, 2002).

Intellectual skills

Cognitive strategies

Verbal information

Motor skills

Attitudes

Intellectual skills, cognitive strategies and verbal information, which are discussed in sections 3.02.01 through 3.02.05, fall under the cognitive domain. Attitudes fall under the affective domain and motor skills are classified in the psychomotor domain.

3.02.01 Intellectual Skills


Intellectual skills enable individuals to interact with their environment in terms of symbols or conceptualizations. Learning an intellectual skill means learning how to do something of an intellectual sort. Identifying the diagonal of a rectangle is one example of a performance that indicates achievement of an intellectual skill as a learning outcome (Gagne et al., 1988, pp. 43-44). Also in the literature, the following subcategories of intellectual skill are described (Gagne, 1985; Driscoll, 2000).

Concept: This skill is the ability to combine previously possessed pieces of knowledge to learn new rules and thereby new concepts. Driscoll (2000) further depicted two different types of concepts, defined and concrete. This means that the learner can state the newly learned rule or concept and can demonstrate the essence of the concept as well.

Discriminations: This subcategory of intellectual skill is the ability to discriminate between different objects involved in the new concept or rule

that needs to be learned. The learner should be able to discriminate between the objects in terms of certain properties, such as color, shape, size and texture. Without learning to make such discriminations, a learner cannot learn new concepts or rules.

Higher-Order Rules: This is the ability to combine two or more simple rules or concepts and form complex rules or concepts. This skill comes in handy, especially when a learner is involved in some kind of problem-solving activity.

Procedures: When rules or concepts become too long and complex, they need what is called a procedure. A learner then needs to develop the ability to state a sequence of events or actions of a procedure that describes the simple divisions of a complex rule or concept. The learner, through the learned procedure, should then be able to apply the corresponding complex concept or rule to an applicable problem or situation and should know exactly how and where to start, as well as how to arrive at the end result of the solution.

Here lies the foundational idea of the procedural approach to teaching. I have used the procedural approach in the training material for domain testing. First, I present my learners with the concept of the domain testing technique. Next, I describe the tasks involved in performing testing using this technique. Then I lay out the procedures for performing the individual tasks, starting from the first and going to the last in a sequential and procedural manner. This approach has its advantages and disadvantages. The disadvantages are apparent where higher-order learning is concerned, which I came to realize when my learners' performance tests were evaluated. The advantage is that you train the learners to a common baseline and there is a lot of uniformity in what they learned

and how they apply the knowledge. The problems encountered with the procedural style of teaching domain testing are discussed in Chapter 7.

3.02.02 Cognitive Strategies


Gagne et al. (1988) defined cognitive strategies, saying, They are capabilities that govern the individual's own learning, remembering, and thinking behavior (p. 45). Every individual has their own internal methods of learning things. Cognitive strategies control an individual's internal mechanisms of learning and, in turn, how they learn (Gagne et al., 1988). Using an image link to learn a foreign equivalent to an English word is one of the examples Gagne et al. (1988) gave of a performance that demonstrates use of cognitive strategies as one of the learning outcomes (p. 44). In the context of software testing, an example of a cognitive strategy would be applying knowledge of characteristics of different types of variables to identify variables of a given program and mapping them to different variable types. According to Gagne (1985), as learners begin to achieve intellectual skills they are trying to develop internal strategies, finding ways to improve the way they learn, think and grasp knowledge and skills.

3.02.03 Verbal Information


Verbal information is the kind of knowledge we are able to state. It is knowing that or declarative knowledge. During our lifetime, we all constantly learn verbal information or knowledge (Gagne et al., 1988, p. 46). Listing the seven major symptoms of cancer is one of the examples Driscoll (2000) gave of a performance that demonstrates learned capability of verbal information (p. 350). In the context of software testing, an example of verbal knowledge would be being able to list the different tasks involved with doing domain testing.


3.02.04 Motor Skills


A motor skill is one of the most obvious kinds of human capabilities. Children learn a motor skill for each printed letter they make with pencil on paper. The function of the skill, as a capability, is simply to make possible the motor performance (Gagne et al., 1988, pp. 47-48). Examples of motor skills include serving a tennis ball, executing a triple axel jump in ice skating, dribbling a basketball, and lifting a barbell with weights (Driscoll, 2000, p. 356). Gagne (1985) asserted that learners are said to have learned a motor skill not just when they are able to perform a set of actions, but when they are also able to perform those actions proficiently with a sense of accuracy and smoothness. In the context of domain testing, an example of a motor skill would be use of the wrist, hand and fingers to operate the mouse and position the cursor on the computer screen to type characters, construct equivalence class and all pairs tables, etc.

3.02.05 Attitudes
What we do, how we do what we do, and why we do what we do are determined and influenced by our attitude. All of us possess attitudes of many sorts toward various things, persons, and situations. The effect of an attitude is to amplify an individual's positive or negative reaction toward some person, thing, or situation (Gagne et al., 1988, p. 48). Determining learners' attitudes is important when trying to evaluate an instructional design and the corresponding instructional material, since attitudes influence the way learners learn and how they perform. Gagne et al. (1988) said that since it is obvious that observing attitudes of each and every learner in a classroom is extremely time consuming, an alternative to observation is having the learners fill out anonymous questionnaires or surveys.


Keeping the questionnaires anonymous helps the learners freely express themselves. This might also help to determine if a learner's poor performance was due to their own attitude and effort or something lacking in the instruction. However, it would be very difficult to analyze the responses since anonymity does not guarantee that the responses will be accurate. Also, if one learner's response was very negative and one learner failed the class, there is really no way to prove that these two learners are one and the same. It is quite possible that a student who aced the test might not have really enjoyed the class, or one who enjoyed the class actually did fail. In general, questionnaires help to measure learners' overall attitudes towards the instruction and find out on average what the learners' opinions about the instruction are. It is also helpful to see if the overall responses are synchronized with the overall performance, or if there is just something outstandingly different about the attitudes when compared with the performance. However, questionnaires might not always be useful for deducing specific correlations between attitude and performance of individual learners.

3.02.06 Taxonomy of Learning Levels


The taxonomy of learning levels proposed by Benjamin Bloom, Gagne's contemporary, falls within the cognitive domain of learning outcomes, one of the three learning domains described above (Bloom et al., 1971; Driscoll, 2000). The following is a brief description of the taxonomy of learning levels, popularly known as Bloom's Taxonomy (Anderson et al., 2001; Bloom et al., 1971; Driscoll, 2000; Learning Skills Program, 2003; Krathwohn et al., 1956; Reiser & Dempsey, 2002).

Knowledge: This is the ability to recall knowledge and information presented during an instruction. Being able to define domain testing-related terms such as equivalence class analysis, boundary value analysis and all

pairs combination is an example of this ability. This is not an intellectual ability. The next five learning levels require intellectual skills.

Comprehension: This is the ability to understand and grasp the instructional material. The ability to understand what is meant by boundary values of a variable is an example of this learning level.

Application: This is the ability to use the knowledge and skills learned during the instruction by putting it to practice in real scenarios or situations. Being able to identify variables of a real program and apply equivalence class analysis to the variables to come up with equivalence classes for the variable is an example of this sort of learning level.

Analysis: This is the ability to see patterns, correlate different information and identify components of a problem. Being able to realize when just applying boundary value analysis is a good idea and when finding additional test cases based on special value testing is a better idea is an example of this kind of ability.

Synthesis: This is the ability to use different pieces of information and put them together to draw inferences and possibly create new knowledge and concepts. Being able to put all different concepts in domain testing together and correctly apply them to any given program or software is an example of this ability.

Evaluation: This is the ability to make judgments about the knowledge acquired and concepts learned through an instruction. This is also the ability to compare the learned concepts with other similar concepts and make informed decisions about their value, perhaps even being able to determine to what extent the instructional material addresses the higher-level

objectives of the instruction. Being able to evaluate the effectiveness of the domain testing method relative to other testing techniques or being able to judge when one method is more applicable than others to a situation is an example of the highest level in Blooms taxonomy.

3.03 Instructional Design


According to Gagne et al. (1988), "The purpose of instruction, however it may be done, is to provide support to the processes of learning" (p. 178). There are 10 stages of instructional design discussed in the literature, and they are outlined in the subsequent sections (Dick & Carey, 1985; Dick et al., 2001; Gagne et al., 1988; Morrison et al., 2004; Smith & Ragan, 1999).

3.03.01 Identify Instructional Goals


What exactly do you want the learners to achieve? The instructor has to determine what the need for designing the instruction really is and what performance is expected of the learners after undertaking the instruction. These desired expectations can be identified as instructional goals. A goal may be defined as "a desirable state of affairs" (Gagne et al., 1988, p. 21). According to Gagne et al. (1988), this can be done by studying the goals, comparing them with the current scenario and determining what is missing. This gives direction to the design of the instruction. In my case, I realized that the existing training materials on domain testing were not thorough enough and there weren't enough assessment items, in the form of exercise questions, to train learners in every task involved in doing domain testing. This missing piece was my motivation for developing the training material the way I did.

3.03.02 Conduct Instructional Analysis/Task Analysis


What tasks need to be performed to achieve the instructional goals, and what skills are required to perform these tasks? The purpose of this stage is to do skill analysis in order to find out what skills are required to achieve each of the goals defined in the first stage. First, however, task analysis needs to be conducted to find out the steps or tasks required to achieve each of the goals. Then the skills corresponding to each task or step are determined (Dick et al., 2001; Gagne et al., 1988; Jonassen, Tessmer & Hannum, 1999). "Task analysis for instructional design is a process of analyzing and articulating the kind of learning that you expect the learners to know how to perform" (Jonassen et al., 1999, p. 3). According to Gagne et al. (1988), a learning task analysis is carried out if the skills involved are of an intellectual nature. "The purpose of a learning task analysis is to reveal the objectives that are enabling and for which teaching sequence decisions need to be made" (p. 24). In my case, I did a detailed task analysis of the domain testing technique, starting with higher-level tasks such as identifying variables and conducting equivalence class analysis, then breaking down each individual task into subtasks. Once the tasks were identified, the corresponding skills were analyzed.

3.03.03 Identify Entry Behaviors and Learner Characteristics


What are the prerequisite skills? Gagne et al. (1988) stated that the purpose of this stage is to determine what skills and characteristics the learners should have. This step is important because the instructional designer needs to identify for whom the instruction would be appropriate and for whom it would not. Dick et al. (2001) described the entry behavior test, which is administered to determine whether or not the learners possess the required prerequisites. This is discussed further in section 3.04.02.01. In my case, after I completed the task and skill analyses for domain testing, I realized that my prospective learners needed at least some basic knowledge of discrete mathematics to sufficiently understand set theory. They also needed some experience in programming; having taken at least two programming classes was deemed sufficient. I determined that learners should not have taken any software testing courses before, since that would influence their performance in my training.

3.03.04 Identify Performance Objectives


What kind of learner performance determines success? The needs and goals should be translated into performance objectives that are specific and detailed enough to show progress towards the instructional goals developed in the first stage. According to Gagne et al. (1988), there are several reasons why writing performance objectives can be very useful. One is to facilitate communication with people at all levels. Having these objectives also helps in the development of instructional material. Yet another reason for having detailed performance objectives is that they enable measurement of student performance against the objectives. Appendix P contains the basic and higher-order instructional objectives that were identified for my training.

3.03.05 Develop Criterion-Referenced Test Items


How do we know if the learners have actually learned? Tests are a means to assess the effectiveness of the instructional design. Various kinds of tests are described in the literature, such as entry behavior tests, pretests, posttests and practice tests. Pretests and posttests enable the instructor to measure the extent to which learners have learned, while practice tests or exercises help the instructor keep track of each learner's progress and provide corrective feedback from time to time. I had one pretest and two posttests, as well as exercise questions to test each instructional objective. This is discussed further in section 3.04.02.01.

3.03.06 Design Instructional Strategy


What is the instructor's plan of action to enable the learners to meet the objectives? Gagne et al. (1988) defined instructional strategy, saying, "By instructional strategy we mean a plan for assisting the learners with their study efforts for each performance objective" (p. 27). In order to support the process of learning, an external support structure consisting of events of instruction needs to be designed. According to Gagne et al. (1988), "The events of instruction are designed to make it possible for learners to proceed from where they are to the achievement of the capability identified as the target objective" (p. 181). Gagne et al. (1988) also stated that the purpose of any teaching should be to provide nine events of instruction, which are described next. Section 4.04 describes how I have attempted to achieve these nine events in my domain testing training.

3.03.06.01 Gain Attention


Learners learn the most when instructors have their attention. To achieve this, instructors should constantly introduce stimulus changes into their lessons. Nonverbal communication is also often used to gain attention. Kaner (2004) suggested a technique to gain students' attention and motivate them: he presented the students with a problem even before presenting any related lecture material, giving them some time to think about it before presenting the lecture and perhaps the solution. He hoped that this would trigger the students' thinking and help them come up with their own solutions as they followed the lecture.

3.03.06.02 Informing the Learner of the Objective


Informing the learner about instructional objectives is very important, as the learner needs to be made aware of what kind of performance is indicative of having accomplished learning. This also helps give the learner a direction towards achieving the end goals.

3.03.06.03 Stimulating Recall of Prerequisite Learned Capabilities


It is very important that the instructor help the learners recall prior learning by asking probing, relevant questions. This enables the learner to connect new knowledge to previous knowledge and makes learning fun and interesting. "Much of new learning (some might say all) is, after all, the combining of ideas" (Gagne et al., 1988, p. 184).

3.03.06.04 Presenting the Stimulus Material


Every piece of information that is required by the learner to meet the performance objectives must be communicated either verbally or in writing. The written form of communication is called instructional material.

3.03.06.05 Providing Learning Guidance


Guiding the learner facilitates the process of learning. Guiding a learner does not mean telling the answer; it means giving direction, which in turn is expected to enable the learner to combine previous concepts to learn new ones.

3.03.06.06 Eliciting the Performance


Once the learners have been provided instruction and guidance, they are tested to see if they can actually perform and exhibit characteristics that indicate that they have indeed learned. In other words, these tests determine if they have met the instructional objectives. Learners can be tested through exercises and tests.

3.03.06.07 Providing Feedback


The learners should be given feedback and should be made aware of the degree of correctness in their performance. Feedback may be written or verbal. It is very important for learners to know what they are doing correctly and incorrectly.

3.03.06.08 Assessing Performance


According to Gagne et al. (1988), there are two decisions that need to be made by the instructor when assessing the learner's performance. "The first is, does the performance in fact accurately reflect the objective? The second judgment, which is no easier to make, is whether the performance has occurred under conditions that make the observation free of distortion" (pp. 189-190).

3.03.06.09 Enhancing Retention and Transfer


Gagne et al. (1988) have asserted, "Provisions made for the recall of intellectual skills often include arrangements for practicing their retrieval" (p. 190). Transfer of learning can be effectively achieved by giving the learners a set of new tasks that are very different from the tasks (for example, practice exercises) that they were exposed to during the actual instruction. The goal is to see whether or not a learner can apply the concepts and skills learned through the instruction to a situation that is very different, but still in the applicable domain (Gagne et al., 1988).

3.03.07 Develop Instructional Materials


What training material will be provided to the learners? The word materials here refers to "printed or other media intended to convey events of instruction" (Gagne et al., 1988, p. 29). Depending on how original the instructional goals are, instructional materials might or might not already be available in the market. If applicable instructional material does not exist, instructors may develop their own, although that is an expensive affair. Even if the exact required materials are not available in the market, teachers can use the ones that are available and integrate them with their own ideas to produce something tailored to the requirements of their instructional goals (Gagne et al., 1988).

3.03.08 Conduct Formative Evaluation


Does the instructional design really cater to the instructional objectives, and will the instruction actually work? "The purpose of formative evaluation is to revise the instruction so as to make it as effective as possible for the largest number of students" (Gagne et al., 1988, p. 30). Dick and Carey (1985) defined the purpose of formative evaluation, saying, "The major function of formative evaluation is to collect data and information to improve an instructional product or activity" (p. 257). There are two main things that need to be done in formative evaluation. The first is to have a subject matter expert evaluate the instructional design. The second is to have the instructional material tried out on some test subjects from the target population. The data thus collected can then be used to revise the instruction (Dick & Carey, 1985). According to Worthen, Sanders and Fitzpatrick (1997), formative evaluation is a technique for finding holes in an instruction, which in turn gives the instructor direction about what improvements need to be made. Formative evaluation is an early risk-analysis strategy used to determine whether or not the instructional design is going to work. The following are the three levels or stages of formative evaluation discussed in the literature (Dick & Carey, 1985; Dick et al., 2001; Gagne et al., 1988; Smith & Ragan, 1999).

One-to-One Evaluation/Testing: This involves evaluating the performance of a few individual test subjects chosen from the target population. The goal of one-to-one testing is to find out if the instructional material has some obvious blunders, such as unclear questions on the test or unreasonable learning expectations. If there are such blunders, then corrective measures need to be taken and the instruction needs to be revised.

Small Group Evaluation/Testing: This involves trying out the instructional materials on a small group of learners. While Gagne et al. (1988) have suggested a group of six to eight test subjects, Dick and Carey (1985) argued that data collected from a group of fewer than eight subjects would not be credible enough to be declared a correct representation of the target population. Dick and Carey (1985) also said that there are two main objectives in conducting small group evaluation. The first is to check whether the corrections for the blunders found in the previous stage have been effectively implemented, and the second is to find any other errors in the instructional design.

Field Trial: Now an entire classroom of learners, in a setting that closely simulates the actual classroom environment, gets to try the instructional materials, which have been revised through the one-to-one and small group evaluations. Dick and Carey (1985) asserted that two things need attention in this final stage of formative evaluation. The first is to verify that the problems found in the small group evaluation stage have been effectively resolved. The second is to determine if the instruction is truly ready for the real audience.

I conducted a single one-to-one evaluation and one small group evaluation for the formative evaluation of my training material. I will collectively refer to these as pilot studies.

3.03.09 Revision of Instructional Materials


Have you made the necessary changes and corrections to the instruction based on the data collected during formative evaluation? According to Dick and Carey (1985), there are two basic types of revisions that need to be made in this stage of instructional design. "The first is changes that need to be made in the content or substance of the materials to make them more accurate or more effective as a learning tool. The second type of change is related to the procedures employed in using your materials" (p. 223). Based on the feedback I received from the pilot studies, I revised my instructional material before conducting the actual training.

3.03.10 Conduct Summative Evaluation


How worthy is the instruction? According to Gagne et al. (1988), once the instructional materials have been revised enough through formative evaluation, their effectiveness as a whole is tested by conducting summative evaluation. Dick and Carey (1985) defined its purpose, saying, "Summative evaluation may be defined as the design, collection, and interpretation of data and information for a given set of instruction for the purpose of determining the value or worth of that instruction" (p. 258). Unlike in formative evaluation, the data collected during summative evaluation is not used to improve the existing version of the course or instruction; it is intended to inform future revisions of the course. Summative evaluation may be conducted as early as immediately after formative evaluation or as late as several years afterwards. According to Worthen et al. (1997), summative evaluation is carried out to make decisions about the future of the instructional program and to determine whether or not it can be adopted. Chapter 7 presents the evaluation reports of the performance tests taken by my learners. These reports may be treated as a summative evaluation.

3.04 Evaluation of Instruction


Bloom et al. (1971) outlined their view of evaluation in education this way:
1. Evaluation as a method of acquiring and processing the evidence needed to improve the students' learning and the teaching.
2. Evaluation as including a great variety of evidence beyond the usual paper and pencil examination.
3. Evaluation as an aid in clarifying the significant goals and objectives of education and as a process for determining the extent to which students are developing in these desired ways.
4. Evaluation as a system of quality control in which it may be determined at each step in the teaching-learning process whether the process is effective or not, and if not, what changes must be made to ensure its effectiveness before it is too late.
5. Finally, evaluation as a tool in education practice for ascertaining whether alternative procedures are equally effective or not in achieving a set of educational ends. (pp. 7-8)
Efforts in teaching software testing, especially these days, seem to have all or most of the above-mentioned objectives as their focus of evaluation. My training primarily attempted to cater to the first three objectives mentioned above.

3.04.01 Different Evaluation Approaches


According to Worthen et al. (1997), there are several different evaluation approaches:
1. Objectives-oriented approaches, where the focus is on specifying goals and objectives and determining the extent to which they have been attained.
2. Management-oriented approaches, where the central concern is on identifying and meeting the informational needs of managerial decision-makers.
3. Consumer-oriented approaches, where the central issue is developing evaluative information on products, broadly defined, for use by consumers in choosing among competing products, services and the like.

4. Expertise-oriented approaches, which depend primarily on the direct application of professional expertise to judge the quality of whatever endeavor is evaluated.
5. Adversary-oriented approaches, where planned opposition in points of view of different evaluators (pro and con) is the central focus of the evaluation.
6. Participant-oriented approaches, where involvement of participants (stakeholders in that which is evaluated) is central in determining the values, criteria, needs, and data for the evaluation. (p. 78)
I have used the objectives-oriented evaluation approach in this thesis. I found this approach most applicable to the kind of evaluation I wanted to do for my training: directing the assessment items towards predefined objectives. This evaluation method provides a direct way of mapping assessment items to the instructional objectives, which makes it possible to measure learners' performance against specific instructional objectives. This is discussed more in Chapter 4, section 4.05.

3.04.02 Collecting Quantitative Information for Evaluation


In the literature, tests and questionnaires have been described as two of the most common methods employed to collect quantitative information that can be used to evaluate the effectiveness of an instructional program.

3.04.02.01 Knowledge and Skills Assessment


The following are four testing approaches described in the literature:

Norm-Referenced Testing: Worthen et al. (1997) stated that the principal aim of administering norm-referenced tests is to compare the performance of one group of learners with the performance of a different group of learners taking the same test. They contended that the weakness of this approach could be that the test content might have little or no validity for the curriculum being evaluated. This weakness does not exist in the criterion-referenced testing approach, which is discussed next.

Criterion-Referenced Testing: In contrast with norm-referenced tests, criterion-referenced tests are developed specifically to measure performance against "some absolute criterion" (Worthen et al., 1997, p. 352). Such testing has an edge over the norm-referenced strategy because the content of the tests is tailored to a specific curriculum and consequently is relevant to that curriculum (Worthen et al., 1997). Every item on the test is also tied to some criterion, which makes this testing strategy very effective. Dick et al. (2001) declared that the terms objectives-referenced and criterion-referenced are one and the same, except that objectives-referenced is used to be more specific when tying the assessment or test items to the performance objectives.

Dick et al. (2001) described four kinds of criterion-referenced tests:
o Entry Behavior Test: These tests are administered to the learners to find out their level of expertise in the prerequisite skills. If some learners perform poorly on this test, it might mean that they would have great difficulty succeeding in the upcoming instruction or might not succeed at all.
o Pretest: The pretest is usually administered so that its score can be compared with the score on the posttest to determine how much the learners have really gotten out of the instruction. This also means that the pretest and posttest should be equivalent in difficulty level and in which performance objectives they address. Dick et al. (2001) also specifically said, "A pretest is valuable only when it is likely that some of the learners will have partial knowledge of the content. If time for testing is a problem, it is possible to design an abbreviated pretest that assesses the terminal objective and several key subordinate objectives" (p. 147). The entry behavior test and pretest can be combined into one test if time is especially short (Dick et al., 2001).
o Practice Tests: These are administered at regular intervals during the instruction, not only because they help the learners remain involved with the instruction, but also because they serve as milestones that indicate how much the student has learned so far. This helps the instructor provide feedback to the learners from time to time and lets students understand where they are doing well and where they need to improve.
o Posttest: This is administered to determine how much the learners have learned and whether or not the performance objectives have been met. According to Dick et al. (2001), "Posttests are administered following instruction, and they are parallel to pretests, except they do not include items on entry behaviors" (p. 148). In addition, Worthen et al. (1997) contended that some instructional designs are posttest-only designs because a pretest in those circumstances might not provide useful information for assessment and evaluation.

Objectives-Referenced Testing: According to Worthen et al. (1997), unlike norm-referenced and criterion-referenced testing strategies, which provide a standard for judging learners' performance, objectives-referenced and domain-referenced testing strategies do not provide any such standards. In objectives-referenced tests, the test items are designed to cater to specific instructional objectives. Dick et al. (2001) have reiterated that the terms objectives-referenced and criterion-referenced are one and the same, except that objectives-referenced is more specific in tying the assessment items back to the stated performance objectives. Objectives-referenced and criterion-referenced testing are useful mostly for formative evaluation (Worthen et al., 1997).

Domain-Referenced Testing: In domain-referenced tests, the test items are designed to test the learners knowledge and mastery of a domain of content.

I have used the objectives-referenced testing strategy in my training, which falls under the objectives-oriented evaluation approach, for the reasons previously mentioned.

3.04.02.02 Attitude/Behavior Assessment


Attitudes, particularly behavioral outcomes of instruction, need to be studied because they not only show how effective the instruction has been, but also enable the instructor to receive corrective feedback from learners in the form of criticism or suggestions that might help improve future instruction (Morrison et al., 2004). Questionnaires, surveys and interviews are some of the techniques described in the literature for assessing learners' attitudes. Attitudes as a learning outcome were discussed previously in section 3.02.05.

Questionnaires: According to Worthen et al. (1997), "Questionnaires (sometimes referred to as surveys) may be developed to measure attitudes, opinions, behavior, life circumstances (income, family size, housing conditions, etc.) or other issues" (p. 353). Jonassen et al. (1999) have stated that questionnaires, called survey questionnaires, might be used during the instructional analysis phase of instructional design itself. According to them, survey questionnaires might be given to a group of subjects from the target audience to determine the kinds of tasks they perform, which in turn helps in performing task analysis.

Morrison et al. (2004) have outlined two kinds of questions that might be included in a questionnaire:
Open-Ended: These require that learners write down answers to the questions in their own words. For example, "What did you like best about the instruction?" is an open-ended question.
Closed-Ended: These require that learners choose from a given set of answers. Such questions usually map their answers to a rating scale, such as 1-Excellent, 2-Very Good, 3-Good, 4-Fair and 5-Poor.

I have used questionnaires in my training sessions to measure learners' attitudes, opinions and behaviors. I have used both open-ended and closed-ended questions in these questionnaires.

Interviews: Morrison et al. (2004) have stated, "An interview allows learners to discuss their reactions toward instruction in more detail than can be done on a survey or questionnaire" (p. 301). They further contended that it is up to the instructional designer to decide, keeping in mind the comfort level of the interviewees, whether group interviews or individual interviews are to be conducted.

Chapter 4: Instructional Design and Evaluation Strategy for Domain Testing Training
4.01 Purpose of Developing the Instructional Material
As mentioned before, the central idea of my thesis work is to develop and validate instructional materials that train people well in domain testing.

4.02 Domain Testing Approach Used in the Training


As discussed in Chapter 1, I have presented a procedural black-box approach to domain testing in the training material I developed, and I have attempted to add a slight flavor of risk-based testing to this procedural approach. The combination technique discussed in the training material is the all pairs combination technique. I have attempted to incorporate Gagne's nine events of instruction in the instructional design and Bloom's taxonomy in the design of the evaluation material, which includes the exercises and tests.
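The all pairs combination technique can be sketched as a small greedy algorithm. This is one common construction for illustration, not the procedure given in the training material itself: repeatedly pick the full-combination row that covers the most still-uncovered pairs of parameter values. The parameter names and values below are assumptions.

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise test selection over a dict of parameter -> values."""
    names = list(params)

    def pairs_of(row):
        # Every (parameter index, value) pair present in this row.
        return {(i, row[i], j, row[j])
                for i, j in combinations(range(len(names)), 2)}

    # All value pairs that a pairwise suite must cover.
    uncovered = set()
    for i, j in combinations(range(len(names)), 2):
        uncovered |= {(i, a, j, b)
                      for a in params[names[i]] for b in params[names[j]]}

    rows = list(product(*params.values()))
    suite = []
    while uncovered:
        # Pick the row covering the most not-yet-covered pairs.
        best = max(rows, key=lambda row: len(pairs_of(row) & uncovered))
        suite.append(dict(zip(names, best)))
        uncovered -= pairs_of(best)
    return suite

# Hypothetical configuration space: 3 parameters, 2 values each.
cases = all_pairs({"os": ["Win", "Mac"],
                   "browser": ["IE", "Firefox"],
                   "language": ["English", "German"]})
# All 12 value pairs are covered by fewer rows than the 8 full combinations.
```

For small parameter sets this brute-force greedy sketch is adequate; dedicated pairwise tools use smarter constructions to keep the suite closer to minimal.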

4.03 Overview of the Instructional Materials


The instructional material collectively contains:

1. Reference Materials:
Appendix A: Reference Material #1 for Domain Testing Training
Appendix B: Reference Material #2 for Domain Testing Training
Appendix C: Reference Material #3 for Domain Testing Training
Appendix D: Heuristics for Equivalence Class Analysis
Appendix E: Guidelines for All Pairs Combination

2. Lecture Slides:
Appendix F: Day 1 Lecture
Appendix G: Day 2 Lecture
Appendix H: Day 3 Lecture
Appendix I: Day 4 Lecture

3. Exercises:
Appendix J: Day 2 Exercises
Appendix K: Day 3 Exercises
Appendix L: Day 4 Exercises

4. Tests:
Appendix M: Paper-Based Tests (Tests A and B)
Appendix N: Performance Test

5. Questionnaires:
Appendix O: Questionnaires

4.04 Instructional Strategy


The instruction was designed based on Gagne's conditions of learning. Specifically, the instruction was designed to meet Gagne's nine events of instruction.

1. Gain Attention
I have tried to achieve this through a motivational introduction on the first day of the training, with the help of a simple example showing how even a simple program can make testing seem impossible. I then introduce domain testing as a software testing technique that helps alleviate the impossibility of complete testing. I also utilize nonverbal communication such as voice modulation, in which I vary the tone and pitch of my voice. This not only wakes up sleeping minds and makes them alert, but it also helps emphasize a sentence or concept. Before introducing any new topic during the remaining training, I present a scenario that highlights the importance of the topic.

2. Identify Objectives
During the introductory lecture on the first training day, I give the learners a brief overview of the overall instructional and performance objectives so they are aware of what performance is expected of them. Before the start of every new topic, I give a brief introduction to what is coming next, including its objectives.

3. Recall Prior Learning


The examples and exercises have been presented in the order of their complexity. Every new example and exercise requires the knowledge and skills acquired in the previous examples and exercises. Before starting with a new example, I help the students recall what they have learned so far and how it extends to what they are going to learn next.

4. Present Stimulus
This is achieved through:
Lecture slides
Reference materials

5. Guide Learning
This is achieved through:

Reference materials
Linking examples to real-world problems
Examples presented in order of complexity
Examples teaching how to extract information from a problem description and translate it into symbolic information

Examples organized so that learners get to see patterns from previous examples and generalize them to apply to new situations, examples and exercises. Whether or not the learners actually see patterns and generalize them to new situations has been assessed with the help of corresponding exercises.

After every set of examples, the learners get to solve similar exercises that help reinforce the concept being taught.

6. Elicit Performance
This is achieved through:
Exercises
Tests
Learners get to take three tests: a paper-based pretest, a paper-based posttest and a computer-based posttest. The paper-based tests are achievement tests, whereas the final posttest is a performance test. The extra posttest was administered for additional validation of the instructional materials. Learners get to solve both paper-based and computer-based exercises during the training sessions. Computer-based tests and exercises involve actual applications or programs on the computer.

7. Provide Feedback
Feedback is given after every exercise session.

During each exercise session, I go around the class from learner to learner to see their progress and point out what they are doing correctly and where they can improve. If time permits, the learners are required to take corrective action immediately by redoing the exercise question that they previously did incorrectly or incompletely.

An answer key to the exercise questions is also shown and is provided for future reference.

8. Assess Performance
This is done with the help of:
Exercises
Tests (pre and post)
Answer key
Grading standards

9. Enhance Retention/Transfer


For every new piece of information presented in the examples, learners get to solve exercises at various levels of difficulty according to Bloom's taxonomy. Those levels are:
i. Knowledge
ii. Comprehension
iii. Application
iv. Analysis
v. Synthesis
vi. Evaluation

Learners get to test real computer applications.

Every exercise question and every question on each of the tests has been assigned a difficulty level according to Bloom's taxonomy. (See Appendix Q and Appendix R.)

4.05 Evaluation Strategy


I have used an objectives-oriented evaluation strategy. According to Worthen et al. (1997), the purpose of evaluation in an objectives-oriented evaluation strategy is to determine the extent to which the instructional objectives have been met.

4.05.01 Evaluation Materials


The following materials have been used for evaluation purposes:
- Exercises
- Tests
  o Pretest: paper-based test (A or B)
  o Posttests:
    - Paper-based test equivalent to the pretest
    - Performance test: testing a real-world application

There are basically two paper-based tests, Test A and Test B. Both of them have an identical number of questions that are equivalent in terms of what instructional goals they test for. However, the scenario described in the corresponding questions is slightly different. In addition, there is a performance test. The learners were divided into two groups at random, and those groups were equally sized whenever possible. Again at random, one of the groups was assigned Test A and the other Test B on the pretest. If learners got Test A on the pretest, then they would get Test B on the posttest and vice versa.


- Questionnaires
Learners got to fill out a questionnaire at the end of each of the five training days. (See Appendix O to find the five questionnaires.)

4.05.02 Mapping Assessment Items to Instructional Objectives


Every exercise question and every question on each of the tests has been mapped to one or more instructional objectives. This is in keeping with the spirit of the objectives-oriented evaluation strategy. The matrices illustrating these mappings are located in Appendix R. These matrices, as previously mentioned, also have a difficulty level corresponding to Bloom's taxonomy assigned to each exercise and test question. (See Appendix Q.)


Chapter 5: Experiment Design


5.01 Overview of the Experiment
After designing and developing training material for domain testing, I conducted two rounds of pilot studies. These were formative evaluations that led to immense revision and refinement of the training material content, my instruction and the way I was organizing the training material. Once I felt the training material was polished enough to be used in the real experiment, I conducted actual training sessions using the revised training material with 23 learners. I started with 18 learners in my actual experiment. Each training period lasted five days, starting on a Monday and ending on a Friday. The entire training period totaled 18 hours. The learners took a paper-based pretest on the first day following a brief introductory lecture on domain testing. The pretest was open book and the learners were given reference materials to consult during the test. On the second, third and fourth training days, the learners attended lectures that involved demonstration of examples addressing one or more instructional objectives. They were then required to solve exercises based on what they learned in the lecture. The strategy was to alternate between examples and exercises. The learners could refer to the reference materials and lecture slides to solve exercises. However, they were not allowed to take the reference materials home. This helped control the amount of time subjects actually spent on the material during the week. We didn't want the variability of some students working for several hours per night on the material while others worked only in class. We also wanted to reduce the chances of learners scheduled to attend future training sessions being exposed to the training material, which could influence those future learners' performance. On the fifth and final training day, the learners were required to take two posttests. One was a paper-based test which was equivalent to the pretest, and the other was a performance test that required them to apply what they had learned to an actual computer program. As before, both posttests were open book. The learners had access to the reference materials, lecture slides, exercises and the exercise answer key, along with their own solutions to the exercises. Learners were required to fill out questionnaires at the end of each training day. Every training day except the first was split into two sessions: a morning session, followed by a lunch break, and an afternoon session. Lunch breaks lasted approximately an hour. Caffeinated and carbonated drinks were also served after lunch breaks each training day and during pretests and posttests. After the completion of this experiment, it was realized that the experiment unfortunately had a blunder associated with one of the instructional objectives. This is discussed further in section 5.07. It was then decided to correct this error and conduct another round of training sessions using five more learners. The first round of experiments is classified as Experiment 1 and the latter round as Experiment 2.

5.02 Institutional Review Board


Any research at Florida Tech involving human subjects requires approval from Florida Tech's institutional review board (IRB). If the research meets certain criteria, which mine did, it may be exempted by the IRB. Before conducting the experiments, I submitted an application to the IRB requesting exemption from IRB review. The application was approved. (See Appendix T.)

5.03 Finding Test Subjects


Flyers and e-mail invitations to open e-mail forums were used to obtain test subjects. Appendix V contains the flyer that was first used to advertise the training sessions. The content of the e-mail invitations was identical to the display flyer. The prerequisites for undergoing this training were successful completion of a discrete math course and at least two programming language courses. It was also required that the learners had never taken a software testing course before.

This first version of the advertisement helped me get subjects for the pilot studies (formative evaluation). The pilot studies were conducted for 15 hours. After the pilot studies, it was realized that 15 hours were not enough to successfully complete the training and that we were falling short by at least three hours. Thus, the training time was extended to 18 hours. The Call for Participation contents were revised and a new version was circulated inviting test subjects for the final experiment. The revised version is included in Appendix W. Again, both flyers and e-mails to open forums were used as media for communication. To begin with, I had planned out schedules of the training sessions in four successive weeks. When prospective candidates responded to the advertisement, I interviewed them to determine if they had the required prerequisites. If they did satisfy the prerequisites, I explained briefly what the training sessions were and what they were required to do. I then assigned the candidates to one of the four training sessions depending on their availability. I sent out e-mail confirmations about the dates and times each candidate was scheduled for. I also sent out e-mail reminders to candidates five days before the Monday of their scheduled training week, specifically telling them to confirm their attendance so that, in case they were unable to attend, I would have sufficient time to look for another suitable candidate to fill their spot.

5.04 Facilities Used for the Experiment


Computer-aided classrooms were used for the training sessions. Classrooms had to be booked in advance. Having computers in the classroom was very useful, especially when the students had to solve exercises and test questions on the computer. The instructor terminal was also connected to a projector that projected the computer screen image on a large screen that faced the students.


5.05 Before the Experiment


Before starting the training on the first day, the learners were informed about the rules and regulations of the experiment, what their role in the experiment was, how many hours of their time would be invested in the training and how they would be compensated. They were also given a consent form, which explained the same and more. This consent form is included in Appendix U. The learners were allowed to participate in the training sessions only after they had completely read the consent form and signed it. Learners' academic transcripts were collected on the first day before the training began. Transcripts were required as proof that they indeed met the prerequisites for attending the training. The learners were also required to fill out a tax form to be compensated for investing their time in the training. Finally, the learners were assigned secret codes in order to protect their identities. The learners used the codes on tests and exercise sheets. No data collected during the training required the learners' real names, real student identification numbers or any other personal information.

5.06 Structure of Each Training Period


The following is a description of sessions on each day. This section and Appendix S describe the distribution of time with respect to the examples and exercises, as well as the order in which the examples and exercise sessions were conducted.

Day 1
- Introduction to domain testing (30 minutes) (see Appendix F)
- Paper-based pretest (150 minutes); reference materials were provided (see Appendix M and Appendices A through C for reference materials)

Day 2 -- See Appendix G and Appendix J


Morning session (90 minutes):
- Testing range type variables along a single dimension:
  o Numeric variables
    - Integer
    - Floating point

Afternoon session (90 minutes):
- Testing numeric variables defined over multiple ranges
- Introduction to ASCII and testing in terms of ASCII
- Testing string type variables along a single dimension

Day 3 -- See Appendix H and Appendix K


Morning session (120 minutes):
- Testing range type variables along multiple dimensions:
  o Numeric variables
    - Integer
    - Floating point
  o String variables
- Testing enumerated variables
- Identifying variables and their data types of actual computer application functions

Afternoon session (120 minutes):
- Testing actual computer application functions using the concepts and techniques learned in the earlier sessions

Day 4 -- See Appendix I and Appendix L


Morning session (90 minutes):
- Introduction to combination testing
- Performing all pairs combination on simple problems
- Identifying independent and dependent variables
- Demonstration of all pairs combination of test cases pertaining to independent variables belonging to actual computer application functions
- Testing the dependent variables separately by first finding their dependency relationships

Afternoon session (90 minutes):
- Performing all pairs combination of test cases of actual computer application functions
- Identifying independent and dependent variables of the functions
- Combining independent variables using the all pairs combination technique and testing dependent variables separately based on dependency relationships

Day 5 -- See Appendix M and Appendix N


Morning session (150 minutes):
- Paper-based posttest

Afternoon session (150 minutes):
- Performance-based posttest

5.07 Experimental Error


After trying the training material on 18 learners, I realized that I had made an error in the instructional objective that required the learners to be able to do all pairs combination. I had misunderstood the method of doing all pairs combination, so what was taught to the learners was incorrect. Dr. Kaner pointed out this blunder when he and other evaluators were reviewing the performance tests of the 18 learners. Having corrected my understanding of the all pairs combination technique, I made appropriate changes to the instructional material, reference material and lecture slides involving the topic. The questions on the exercises and the tests remained the same, except that the expected answers were now according to the corrected instruction. All other things remained unchanged. Since the portion of the instructional materials altered was minor, Dr. Kaner recommended that five learners should be sufficient to test the corrected material. Thus, Experiment 2 was performed with five new learners having the same prerequisites as before.
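As a sketch of what the all pairs technique computes, the following greedy generator (one common construction, not necessarily the hand-ordering procedure taught in the revised sessions) builds a test suite in which every pair of values from any two different variables appears in at least one combination:

```python
from itertools import combinations, product

def all_pairs(variables):
    """Greedy all-pairs generator: returns combinations (one value per
    variable) that together cover every cross-variable value pair."""
    # Every pair of values that must appear together in some combination.
    uncovered = set()
    for (i, vi), (j, vj) in combinations(list(enumerate(variables)), 2):
        for a, b in product(vi, vj):
            uncovered.add((i, a, j, b))

    tests = []
    while uncovered:
        best, best_covered = None, set()
        # Pick the candidate combination covering the most uncovered pairs.
        for combo in product(*variables):
            covered = {(i, combo[i], j, combo[j])
                       for i, j in combinations(range(len(variables)), 2)
                       if (i, combo[i], j, combo[j]) in uncovered}
            if len(covered) > len(best_covered):
                best, best_covered = combo, covered
        tests.append(best)
        uncovered -= best_covered
    return tests

# Three variables with two test cases each: 8 exhaustive combinations,
# but all pairs are covered by just 4 of them.
suite = all_pairs([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]])
print(len(suite))  # 4
```

The exhaustive scan over `product(*variables)` makes this exponential in the number of variables, which is fine for classroom-sized problems but not for production use.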


Chapter 6: Results and Their Analyses


6.01 Experiment Pretest and Posttest Results
This section presents the results of the evaluation of the pretest and posttests (both the paper-based test and the computer-based performance test). The results have been analyzed and interpreted. The scores for individual questions, the final scores of all 23 learners and the interpretation of the scores are presented in Table 6.01 to Table 6.10. A graphic representation of the comparison of the 23 learners' final scores is presented in Chart 6.01 to Chart 6.03. The average pretest score, average posttest score and average percentage increase for the 23 learners are presented in Chart 7.12.

6.01.01 Paper-Based Pretest and Posttest: Results and Their Interpretation


All the learners' scores are presented in Table 6.01 to Table 6.10. Question 6, which tested all pairs combination, had an experimental error corresponding to Experiment 1. This has been discussed in section 5.07. This question, which was worth 10 points, has not been evaluated for Experiment 1. Consequently, the paper-based pretest and posttest for Experiment 1 are evaluated out of a total score of 90 points and those for Experiment 2 are evaluated out of 100 points. Before we proceed to the results, let us look at the definitions of the terms average and standard deviation.

Average: An average is a numerical value used to describe the overall clustering of data in a set. Average is usually used to indicate an arithmetic mean. However, an average can also be a median or a mode (Intermath Dictionary, 2004a, 1).

Standard Deviation: A measure describing how close members of a data set are in relation to each other. The standard deviation is the square root of the variance, also known as the mean of the mean. The standard deviation can be found by taking the square root of the variance (Intermath Dictionary, 2004b, 1).
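Both statistics can be checked directly with Python's statistics module. Using the Question 1 pretest scores from Table 6.01 as data, the reported average of 7.40 and standard deviation of 2.85 correspond to the mean and the sample (n − 1) form of the standard deviation:

```python
from statistics import mean, pstdev, stdev

# Question 1 pretest scores for the 23 learners (Table 6.01).
scores = [9, 10, 8, 10, 10, 10, 10, 10, 10, 9.5, 2, 5, 5, 4,
          8, 8, 10, 9.75, 5, 2, 4, 7, 4]

print(round(mean(scores), 2))    # 7.4  (reported as 7.40)
print(round(stdev(scores), 2))   # 2.85 (sample standard deviation, matches the table)
print(round(pstdev(scores), 2))  # 2.78 (population form, for comparison)
```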

Question 1 (10 points):


Test A: An integer field/variable i can take values in the range -999 and 999, the endpoints being inclusive. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here.

Test B: An integer field/variable i can take values in the range -1000 and 1000, the endpoints being exclusive. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here.
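The boundary-value picks for questions like these follow mechanically from the range specification. The helper below is hypothetical (it is not the grading key, which also credits equivalence-class descriptions); note that Test A and Test B describe the same set of valid values:

```python
def boundary_tests(lo, hi, lo_inclusive=True, hi_inclusive=True):
    """Classic boundary-value picks for an integer range: the two valid
    boundaries plus the first invalid value just outside each boundary."""
    lo_valid = lo if lo_inclusive else lo + 1
    hi_valid = hi if hi_inclusive else hi - 1
    return {
        "valid": [lo_valid, hi_valid],            # smallest and largest accepted
        "invalid": [lo_valid - 1, hi_valid + 1],  # just outside each boundary
    }

# Test A: i in [-999, 999], endpoints inclusive.
print(boundary_tests(-999, 999))
# {'valid': [-999, 999], 'invalid': [-1000, 1000]}

# Test B: i in (-1000, 1000), endpoints exclusive -- the same boundaries.
print(boundary_tests(-1000, 1000, lo_inclusive=False, hi_inclusive=False))
# {'valid': [-999, 999], 'invalid': [-1000, 1000]}
```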

Table 6.01 analyzes and interprets learners' scores on Question 1.


Table 6.01: Question 1 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q1 (Pretest, worth 10 points)   Q1 (Posttest, worth 10 points)

Experiment 1
1.   JL711S1    9      10
2.   JL711S2    10     10
3.   JL711S3    8      10
4.   JL711S4    10     10
5.   JL1418S1   10     10
6.   JL1418S2   10     10
7.   JL1418S3   10     10
8.   JL1418S4   10     10
9.   JL1418S5   10     10
10.  JL2125S1   9.5    10
11.  JL2125S2   2      10
12.  JL2125S3   5      9.75
13.  JL2125S4   5      9.5
14.  JL28A1S2   4      10
15.  JL28A1S3   8      10
16.  JL28A1S4   8      10
17.  JL28A1S5   10     10
18.  JL28A1S6   9.75   10

Experiment 2
19.  A2529S1    5      10
20.  A2529S2    2      10
21.  A2529S3    4      10
22.  A2529S4    7      10
23.  A2529S5    4      10

Average             7.40   9.97
Standard deviation  2.85   0.11

Interpretation (Question 1): The average score of the students for this question was 7.4 in the pretest and 9.97 in the posttest. This means that most of the students did well on this question in the pretest itself. Almost all of the students scored above 90% in the posttest and their scores improved from pretest to posttest. The standard deviation of the scores was 2.85 and 0.11 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was higher in the pretest than in the posttest. In fact, the deviation in the scores on the posttest was extremely minimal.


Question 2 (10 points):


Test A: A floating point field/variable f can take values only between -70.000 and 70.000, the left end being inclusive and the right end being exclusive. The precision is 3 places after the decimal point. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here.

Test B: A floating point field/variable f can take values only between -50.00 and 49.99, both endpoints being inclusive. The precision is 2 places after the decimal point. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here.
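For fixed-precision ranges like these, the nearest invalid values lie one unit of the last decimal place outside the valid boundaries. The sketch below uses a hypothetical helper and Python's `Decimal` type to keep the arithmetic exact (binary floats would introduce rounding noise):

```python
from decimal import Decimal

def float_boundaries(lo, hi, step, lo_inclusive=True, hi_inclusive=True):
    """Boundary values for a fixed-precision decimal range, where `step`
    is one unit of the last decimal place (e.g. Decimal('0.001'))."""
    lo, hi = Decimal(lo), Decimal(hi)
    lo_valid = lo if lo_inclusive else lo + step
    hi_valid = hi if hi_inclusive else hi - step
    return {"valid": [lo_valid, hi_valid],
            "invalid": [lo_valid - step, hi_valid + step]}

# Test A: f in [-70.000, 70.000), precision 3.
print(float_boundaries("-70.000", "70.000", Decimal("0.001"), hi_inclusive=False))
# valid: -70.000 and 69.999; invalid: -70.001 and 70.000
```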

Table 6.02 analyzes and interprets learners' scores on Question 2.


Table 6.02: Question 2 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q2 (Pretest, worth 10 points)   Q2 (Posttest, worth 10 points)

Experiment 1
1.   JL711S1    9      10
2.   JL711S2    9      10
3.   JL711S3    6.5    9
4.   JL711S4    8      10
5.   JL1418S1   9.5    10
6.   JL1418S2   10     10
7.   JL1418S3   10     10
8.   JL1418S4   10     10
9.   JL1418S5   8      10
10.  JL2125S1   3      10
11.  JL2125S2   1.5    10
12.  JL2125S3   0      10
13.  JL2125S4   5      9.5
14.  JL28A1S2   3      10
15.  JL28A1S3   10     10
16.  JL28A1S4   8      10
17.  JL28A1S5   8      10
18.  JL28A1S6   7.25   10

Experiment 2
19.  A2529S1    3      10
20.  A2529S2    2      10
21.  A2529S3    4      10
22.  A2529S4    7      10
23.  A2529S5    4.5    10

Average             6.36   9.93
Standard deviation  3.14   0.23

Interpretation (Question 2): Students' average score for this question was 6.36 in the pretest and 9.93 in the posttest. This means that on average, the students scored fairly well (63.6%) on this question in the pretest itself. Almost all of the students scored above 90% in the posttest and their scores improved from pretest to posttest. The standard deviation of the scores was 3.14 and 0.23 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was higher in the pretest than in the posttest. In fact, the deviation in the scores on the posttest was minimal.

Question 3 (10 points):


Test A: A string field/variable s can take only uppercase and lowercase letters from the English alphabet. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here.

Test B: A string field/variable s can take digits and uppercase and lowercase letters from the English alphabet. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here.
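Day 2 introduced testing in terms of ASCII, and for questions like Test A the most interesting invalid characters sit just outside each allowed run in the ASCII table. The values below are illustrative only (not the thesis's answer key):

```python
import string

# Test A: only uppercase and lowercase English letters are allowed.
valid_classes = {
    "uppercase": string.ascii_uppercase,  # 'A'..'Z'
    "lowercase": string.ascii_lowercase,  # 'a'..'z'
}

# In ASCII order, the first invalid character on each side of a valid
# run is where boundary value analysis looks first.
invalid_boundaries = {
    "before 'A'": "@",  # ord('A') - 1 == 64
    "after 'Z'":  "[",  # ord('Z') + 1 == 91
    "before 'a'": "`",  # ord('a') - 1 == 96
    "after 'z'":  "{",  # ord('z') + 1 == 123
}
for label, ch in invalid_boundaries.items():
    print(label, repr(ch), ord(ch))
```

Test B would add the digit run '0'..'9' to the valid classes, with '/' and ':' as its ASCII neighbours.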

Table 6.03 analyzes and interprets learners' scores on Question 3.


Table 6.03: Question 3 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q3 (Pretest, worth 10 points)   Q3 (Posttest, worth 10 points)

Experiment 1
1.   JL711S1    4      10
2.   JL711S2    5      10
3.   JL711S3    4      5
4.   JL711S4    1      10
5.   JL1418S1   6      10
6.   JL1418S2   4      10
7.   JL1418S3   4      9
8.   JL1418S4   0      10
9.   JL1418S5   2      10
10.  JL2125S1   3      10
11.  JL2125S2   2      8
12.  JL2125S3   0      8
13.  JL2125S4   3      8
14.  JL28A1S2   2      7
15.  JL28A1S3   3      7
16.  JL28A1S4   3      10
17.  JL28A1S5   5      10
18.  JL28A1S6   1      9

Experiment 2
19.  A2529S1    4      10
20.  A2529S2    3      9
21.  A2529S3    4      9
22.  A2529S4    3      9
23.  A2529S5    4.5    8

Average             3.07   8.96
Standard deviation  1.57   1.33

Interpretation (Question 3): Students' average score for this question was 3.07 in the pretest and 8.96 in the posttest. This means that on average, the students scored very poorly (30.7%) on this question in the pretest. But the average score improved significantly to 89.6% in the posttest. The standard deviation of the scores was 1.57 and 1.33 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was minimal in both the pretest and the posttest, although the scores became somewhat more uniform in the posttest.


Question 4 (10 points):


Test A: In an online airline reservation system, there is a year combo-box field that has the following options available:
o 2003
o 2004
o 2005
Develop a series of tests by performing equivalence class analysis and boundary value analysis, if applicable, on this field.

Test B: DVD Collections, Inc. has a shopping Web site where a user can purchase DVDs. For checking out, a user needs to enter his or her credit card number and the type of the credit card, which is a combo-box field having the following options available:
o American Express
o VISA
o MasterCard
o Discover
Develop a series of tests by performing equivalence class analysis and boundary value analysis, if applicable, on the type of credit card variable/field.

Table 6.04 analyzes and interprets learners' scores on Question 4.


Table 6.04: Question 4 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q4 (Pretest, worth 10 points)   Q4 (Posttest, worth 10 points)

Experiment 1
1.   JL711S1    9      10
2.   JL711S2    7      0
3.   JL711S3    7      10
4.   JL711S4    9.25   9.25
5.   JL1418S1   9.25   10
6.   JL1418S2   4.25   4.25
7.   JL1418S3   4      10
8.   JL1418S4   0      10
9.   JL1418S5   0.5    10
10.  JL2125S1   1      9
11.  JL2125S2   0.5    9
12.  JL2125S3   1      9.5
13.  JL2125S4   4      9.75
14.  JL28A1S2   0.5    10
15.  JL28A1S3   9.5    9.75
16.  JL28A1S4   6      10
17.  JL28A1S5   2      10
18.  JL28A1S6   0      9.25

Experiment 2
19.  A2529S1    1      7
20.  A2529S2    3      8
21.  A2529S3    3      6
22.  A2529S4    2      10
23.  A2529S5    5      10

Average             3.86   8.73
Standard deviation  3.29   2.42

Interpretation (Question 4): Students' average score for this question was 3.86 in the pretest and 8.73 in the posttest. This means that on average, the students scored very poorly (38.6%) on this question in the pretest. But the average score improved significantly to 87.3% in the posttest. The standard deviation of the scores was 3.29 and 2.42 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was higher in the pretest than in the posttest.

Question 5 (5 points):
Test A: The screenshot of the login function of the Yahoo Messenger application program is shown below. For this login function, identify its variables. For each variable, determine the data type and state whether the variables are input or output.

Test B: The screenshot of an online length or distance unit converter program is provided below. The program takes an input value for length and the corresponding unit of measurement and outputs the corresponding equivalent value for the output unit of measurement chosen by the user. Identify the variables of the unit converter program, the data type of each of the variables and state whether the variables are input or output.

[Screenshot of the length or distance unit converter: input field showing "1 inches"; output field showing "0.000016 miles (UK and US)".]
Table 6.05 analyzes and interprets learners' scores on Question 5.

Table 6.05: Question 5 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q5 (Pretest, worth 5 points)   Q5 (Posttest, worth 5 points)

Experiment 1
1.   JL711S1    5      5
2.   JL711S2    5      4.5
3.   JL711S3    1      5
4.   JL711S4    4      5
5.   JL1418S1   5      4.75
6.   JL1418S2   5      5
7.   JL1418S3   2.5    5
8.   JL1418S4   4.5    5
9.   JL1418S5   3      5
10.  JL2125S1   5      5
11.  JL2125S2   5      5
12.  JL2125S3   3.75   5
13.  JL2125S4   5      5
14.  JL28A1S2   2      5
15.  JL28A1S3   2      5
16.  JL28A1S4   5      5
17.  JL28A1S5   2      5
18.  JL28A1S6   2      2

Experiment 2
19.  A2529S1    4      5
20.  A2529S2    1      5
21.  A2529S3    3      5
22.  A2529S4    5      5
23.  A2529S5    2.5    5

Average             3.58   4.84
Standard deviation  1.44   0.63

Interpretation (Question 5): Students' average score for this question was 3.58 in the pretest and 4.84 in the posttest. This means that on average, not only did the students score well (71.6%) on this question in the pretest itself, but the average score in the posttest also improved by 25.2%. The standard deviation of the scores was 1.44 and 0.63 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was noticeable in the pretest but became quite negligible in the posttest.


Question 6 (10 points):


Test A: There are five variables l, m, n, o and p in some function. The variables l, m, n and o have five test cases each and p has two test cases. How many total combinations of test cases are possible for these five variables? How many minimal combinations does all pairs combination technique yield? How many pairs will each combination have if all pairs combination technique is used? Develop a series of combination tests on these five variables by performing all pairs combination on them. Show all iterations. Give relevant comments when you backtrack and redo any ordering. Please have a separate table for each reordering.

Test B: There are four variables a, b, c and d in some function. All four variables have five test cases each. How many total combinations of test cases are possible for these four variables? How many minimal combinations does all pairs combination technique yield? How many pairs will each combination have if all pairs combination technique is used? Develop a series of combination tests on these four variables by performing all pairs combination on them. Show all iterations. Give relevant comments when you backtrack and redo any ordering. Please have a separate table for each reordering.
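The counting parts of both questions are simple arithmetic: the exhaustive combination count is the product of the per-variable test-case counts, and each combination contains C(n, 2) pairs, where n is the number of variables. A quick check:

```python
from math import comb, prod

# Test A: five variables with 5, 5, 5, 5 and 2 test cases respectively.
counts_a = [5, 5, 5, 5, 2]
print(prod(counts_a))           # 1250 exhaustive combinations
print(comb(len(counts_a), 2))   # 10 pairs inside each combination

# Test B: four variables with 5 test cases each.
counts_b = [5, 5, 5, 5]
print(prod(counts_b))           # 625 exhaustive combinations
print(comb(len(counts_b), 2))   # 6 pairs inside each combination
```

The minimal all-pairs suite size depends on the construction used, but it can never be smaller than the product of the two largest test-case counts (25 here), since those two variables alone have that many pairs to cover.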

Table 6.06 analyzes and interprets learners' scores on Question 6.


Table 6.06: Question 6 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q6 (Pretest, worth 10 points)   Q6 (Posttest, worth 10 points)

Experiment 1 (students 1. through 18., JL711S1 through JL28A1S6)
This question has not been evaluated for Experiment 1 due to an experimental error (refer to section 5.07) corresponding to the instructional goal that is being tested in this question.

Experiment 2
19.  A2529S1    3      10
20.  A2529S2    2      10
21.  A2529S3    4      10
22.  A2529S4    1      9
23.  A2529S5    4.5    10

Average             2.90   9.80
Standard deviation  1.43   0.45

Interpretation (Question 6): Students' average score for this question was 2.90 in the pretest and 9.80 in the posttest. This means that on average, the students scored poorly (29%) on this question in the pretest. But the average score in the posttest improved significantly to 98%. The standard deviation of the scores was 1.43 and 0.45 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was low in the pretest and it became almost negligible in the posttest.


Question 7 (15 points):


Test A: The I-20 VISA program for international students lets a student know the status of his or her application for a new I-20 by entering his or her corresponding VISA number. The VISA number is a 16-digit numeric value with no dashes, commas or spaces in between. The minimum allowed VISA number that a student could have ever been assigned is 1000009999000000 and the maximum is 9999955590999800. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class and boundary value analysis on this variable.

Test B: Creative Technologies offers its employees the opportunity to attend conferences and training that helps in the enhancement of their skill set and in turn the company's profits. The company requires that an employee use the company's reimbursement system to report expenses. The system has an expense report function that has various fields like employee ID, meal expenses, travel expenses, training/conference registration expenses, miscellaneous expenses and total expenditure. The company reimburses the employees up to $5,000. All expense-related fields require entering of only whole numbers, which means that the user should round each expense to the nearest whole number and then enter this number in the corresponding field. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class and boundary value analysis on this variable.

Table 6.07 analyzes and interprets learners' scores on Question 7.


Table 6.07: Question 7 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q7 (Pretest, worth 15 points)   Q7 (Posttest, worth 15 points)

Experiment 1
1.   JL711S1    4      5.5
2.   JL711S2    3      14
3.   JL711S3    3      15
4.   JL711S4    0      15
5.   JL1418S1   0.5    14
6.   JL1418S2   4      15
7.   JL1418S3   1      15
8.   JL1418S4   1      14.75
9.   JL1418S5   2      14
10.  JL2125S1   1      14.5
11.  JL2125S2   1      13
12.  JL2125S3   3      14.75
13.  JL2125S4   5      13
14.  JL28A1S2   1      15
15.  JL28A1S3   1.5    15
16.  JL28A1S4   3      14.75
17.  JL28A1S5   0      14.5
18.  JL28A1S6   0      13

Experiment 2
19.  A2529S1    5      14.75
20.  A2529S2    0      14.75
21.  A2529S3    1.5    14
22.  A2529S4    4      13
23.  A2529S5    6.5    13

Average             2.22   13.88
Standard deviation  1.87   1.98

Interpretation (Question 7): Students' average score for this question was 2.22 in the pretest and 13.88 in the posttest. This means that on average, the students scored very poorly (14.8%) on this question in the pretest. But the average score improved significantly to 92.53% in the posttest. The standard deviation of the scores was 1.87 and 1.98 in the pretest and posttest, respectively. This means that the deviation in the scores from the average was almost equivalent in the pretest and the posttest. The students performed uniformly poorly in the pretest and uniformly well in the posttest.

Question 8 (15 points):


Test A: Bank X has started a new savings account program that allows customers to earn interest based on their savings account balance. The interest is calculated using the following table:

Balance range                       Interest that can be earned
$5000.00 <= balance <= $15000.00    1.09%
$15000.00 < balance <= $30000.00    1.50%
$30000.00 < balance <= $55000.00    1.73%
$55000.00 < balance <= $80000.00    2.09%
balance > $80000.00                 2.50%

What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

Test B: XYZ credit cards offer a cash back award program. After a customer spends a particular amount of money, he or she receives a cash back award according to the following table:

Amount spent range                      Cash back that can be earned
$1000.00 <= amount spent <= $1500.00    $50
$1500.00 < amount spent <= $2000.00     $75
$2000.00 < amount spent <= $2500.00     $95
$2500.00 < amount spent <= $3500.00     $110
amount spent > $3500.00                 $150

What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

Table 6.08 analyzes and interprets learners' scores on Question 8.
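The balance table in Test A partitions the input domain into equivalence classes, one per row, with a boundary at each comparison. A hypothetical implementation (not Bank X's actual code, and not the grading key) makes the classes and the boundary values a tester would probe explicit:

```python
def interest_rate(balance):
    """Hypothetical implementation of the Test A interest table; each
    branch is one equivalence class, each comparison one boundary."""
    if 5000.00 <= balance <= 15000.00:
        return 1.09
    if 15000.00 < balance <= 30000.00:
        return 1.50
    if 30000.00 < balance <= 55000.00:
        return 1.73
    if 55000.00 < balance <= 80000.00:
        return 2.09
    if balance > 80000.00:
        return 2.50
    return None  # below $5000.00: no interest class is specified

# Boundary-value tests probe each comparison from both sides.
print(interest_rate(15000.00), interest_rate(15000.01))  # 1.09 1.5
```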

Table 6.08: Question 8 Paper-Based Pretest and Posttest Score Comparison


Student#  Student Code   Q8 (Pretest, worth 15 points)   Q8 (Posttest, worth 15 points)

Experiment 1
1.   JL711S1    2.5    14
2.   JL711S2    0      0
3.   JL711S3    3      15
4.   JL711S4    2      14.75
5.   JL1418S1   3.5    14
6.   JL1418S2   4      14.5
7.   JL1418S3   5      10
8.   JL1418S4   0.5    10
9.   JL1418S5   0      14.75
10.  JL2125S1   0      14.5
11.  JL2125S2   0.5    14
12.  JL2125S3   1      14.5
13.  JL2125S4   5      13
14.  JL28A1S2   1      13
15.  JL28A1S3   2      12
16.  JL28A1S4   4      14.75
17.  JL28A1S5   5      11
18.  JL28A1S6   0      11

Experiment 2
19.  A2529S1    7      14
20.  A2529S2    0      12
21.  A2529S3    5      12
22.  A2529S4    5      14
23.  A2529S5    9      11

Average             2.83   12.51
Standard deviation  2.53   3.17

Interpretation (Question 8): Students' average score for this question was 2.83 in the pretest and 12.51 in the posttest. This means that on average, the students scored very poorly (18.86%) on this question in the pretest. But the average score improved significantly to 83.4% in the posttest. The standard deviation of the scores was 2.53 and 3.17 in the pretest and posttest, respectively. This means that the deviation in the scores from the average actually increased from the pretest to the posttest. This is because student JL711S2 scored zero on this question in both the pretest and posttest, whereas the others improved significantly in the posttest.

Question 9 (15 points):


Test A: ZLTech has a Web-based student records system. A student has to enter his or her Social Security number in the text field provided for it and then click on the sign in button to log in and access his or her records, which include courses taken by the student so far and the corresponding grades. The Social Security number can have only 11 characters. The first three characters have to be digits, then a hyphen and then two more digits, a hyphen and finally four digits. No other characters whatsoever are allowed to be entered for the Social Security number field. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

Test B: The U.S. Immigration and Naturalization Service has an online application that lets international students check the status of their work authorization application in the United States. Upon applying for work authorization, every student is given a ticket number, which is an 11-character value. Every ticket number consists of digits and uppercase letters, with a letter in the fourth and eighth positions. The other characters have to be strictly digits. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class and boundary value analysis on this variable.
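Both test questions describe fixed-format string fields, where the equivalence classes come from varying one dimension at a time (length, allowed characters, separator positions). A minimal sketch for Test A's Social Security number field follows; the pattern is derived from the stated format (3 digits, hyphen, 2 digits, hyphen, 4 digits = 11 characters), but the function name is an illustrative assumption:

```python
import re

# Hypothetical validator for the Social Security number field in Test A.
# The pattern follows the stated format; the function name is illustrative.
SSN_PATTERN = re.compile(r"\d{3}-\d{2}-\d{4}")

def is_valid_ssn(text):
    return bool(SSN_PATTERN.fullmatch(text))

# Representative tests: one value from the valid class, plus invalid classes
# built by varying each dimension (length, allowed characters, hyphen
# positions) one at a time.
cases = {
    "123-45-6789":  True,   # valid class
    "123-45-678":   False,  # length boundary: 10 characters
    "123-45-67890": False,  # length boundary: 12 characters
    "12a-45-6789":  False,  # non-digit where a digit is required
    "123445-6789":  False,  # hyphen missing from position 4
    "":             False,  # empty input
}
for text, expected in cases.items():
    assert is_valid_ssn(text) == expected
```

The ticket-number field in Test B yields to the same approach with a different pattern (for example, letters required in the fourth and eighth positions, digits elsewhere).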

Table 6.09 analyzes and interprets learners' scores on Question 9.

Table 6.09: Question 9 Paper-Based Pretest and Posttest Score Comparison


Student#   Student Code   Q9 Pretest (worth 15)   Q9 Posttest (worth 15)
Experiment 1
 1.        JL711S1        1                       15
 2.        JL711S2        0                       14
 3.        JL711S3        2.35                    14.5
 4.        JL711S4        5                       15
 5.        JL1418S1       2                       14.5
 6.        JL1418S2       2                       12
 7.        JL1418S3       3.5                     14.5
 8.        JL1418S4       0                       12
 9.        JL1418S5       2                       15
10.        JL2125S1       1                       14.75
11.        JL2125S2       2                       7.5
12.        JL2125S3       2                       14
13.        JL2125S4       4                       12
14.        JL28A1S2       2                       12
15.        JL28A1S3       1                       12
16.        JL28A1S4       3                       14
17.        JL28A1S5       0                       14.5
18.        JL28A1S6       0                       14
Experiment 2
19.        A2529S1        1                       13.5
20.        A2529S2        2                       14
21.        A2529S3        2                       13
22.        A2529S4        2                       12
23.        A2529S5        4                       13
Average                   1.91                    13.34
Standard deviation        1.35                    1.69

Interpretation (Question 9): Students' average score for this question was 1.91 in the pretest and 13.34 in the posttest. This means that, on average, the students scored very poorly (12.73%) on this question in the pretest, but the average score improved significantly to 88.93% in the posttest. The standard deviation of the scores was 1.35 in the pretest and 1.69 in the posttest.


Table 6.10: Final Scores - Pretest and Posttest Score Comparison


(Pretest and posttest final scores are out of 90 for students 1-18 and out of 100 for students 19-23.)

Student#   Student Code   Pretest Final   Posttest Final   Pretest %   Posttest %   Percentage Increase
Experiment 1
 1.        JL711S1        43.5            79.5             48.33       88.33        40.00
 2.        JL711S2        39              62.5             43.33       69.44        26.11
 3.        JL711S3        34.85           83.5             38.72       92.78        54.06
 4.        JL711S4        39.25           89               43.61       98.89        55.28
 5.        JL1418S1       45.75           87.25            50.83       96.94        46.11
 6.        JL1418S2       43.25           80.75            48.06       89.72        41.67
 7.        JL1418S3       40              83.5             44.44       92.78        48.33
 8.        JL1418S4       26              81.75            28.89       90.83        61.94
 9.        JL1418S5       27.5            88.75            30.56       98.61        68.06
10.        JL2125S1       23.5            87.75            26.11       97.50        71.39
11.        JL2125S2       14.5            76.5             16.11       85.00        68.89
12.        JL2125S3       15.75           85.5             17.50       95.00        77.50
13.        JL2125S4       36              79.75            40.00       88.61        48.61
14.        JL28A1S2       15.5            82               17.22       91.11        73.89
15.        JL28A1S3       37              80.75            41.11       89.72        48.61
16.        JL28A1S4       40              88.5             44.44       98.33        53.89
17.        JL28A1S5       32              85               35.56       94.44        58.89
18.        JL28A1S6       20              78.25            22.22       86.94        64.72
Experiment 2
19.        A2529S1        33              94.25            33.00       94.25        61.25
20.        A2529S2        15              92.75            15.00       92.75        77.75
21.        A2529S3        30.5            89               30.50       89.00        58.50
22.        A2529S4        36              92               36.00       92.00        56.00
23.        A2529S5        44.5            90               44.50       90.00        45.50
Average                   31.84           84.28            34.61       91.43        56.82
Standard deviation        10.24           6.81             11.26       6.16

Interpretation (Final Scores): Since Question 6 was not evaluated for Experiment 1 (refer to section 5.06), the final pretest and posttest scores for Experiment 1 were computed out of 90 points, while the scores for Experiment 2 were computed out of 100 points. All scores have also been converted to a scale of 100 (% scores) for uniformity and easier data analysis. On average, the gain from pretest to posttest was 56.82 percentage points, a substantial improvement. The standard deviation of the final pretest percentage score was 11.26; it decreased to 6.16 for the posttest, which means that the students' performance was more uniform than in the pretest.

Charts 6.01, 6.02 and 6.03 graphically present, respectively, the comparison of the final paper-based pretest and posttest percentage scores, the percentage increase for each student, and the averages of the final pretest and posttest percentage scores along with the average percentage increase from pretest to posttest.


Chart 6.01: Paper-Based Pretest and Posttest Final Percentage Score Comparison

[Bar chart: for each of the 23 students, the pretest and posttest percentage scores are plotted side by side (y-axis: score out of 100, from 0.00 to 120.00; x-axis: students 1 through 23; series: Pretest %, Posttest %).]

Chart 6.02: Final Scores Percentage Increase

[Bar chart: each student's percentage-point increase from pretest to posttest.]

Chart 6.03: Averages

[Bar chart: the average pretest percentage score, the average posttest percentage score, and the average percentage increase (y-axis: score from 0 to 100).]

6.02 Performance Test Results


Appendices X, Y and Z contain the performance test evaluation reports by Dr. Cem Kaner, James Bach and Pat McGee, respectively.


6.03 Questionnaire: Confidence, Attitude and Opinion of Learners


As mentioned before, the learners were required to complete questionnaires at the end of each of the five training days. The questionnaires consisted of both closed-ended and open-ended questions. Tables 6.11 through 6.15 present the questions on each questionnaire and summarize the answers.

Table 6.11: Day 1 Questionnaire Responses


Questionnaire Day 1

1. How would you rate the overall clarity of today's lecture?
   Excellent 10, Good 13, Fair 0, Poor 0
2. How would you rate the overall clarity of the pretest?
   Excellent 2, Good 15, Fair 5, Poor 1
3. How would you rate your overall satisfaction with today's lecture?
   Excellent 6, Good 15, Fair 2, Poor 0
4. How would you rate your overall satisfaction with the pretest?
   Excellent 3, Good 13, Fair 5, Poor 2
5. How would you rate today's session in terms of how interesting it was?
   Excellent 10, Good 10, Fair 3, Poor 0
6. Rate your confidence when answering the questions on the pretest.
   Excellent 1, Good 7, Fair 11, Poor 4
7. Was enough time given to solve the pretest?
   More than enough 7, Enough 11, Little short of enough 5, Very less time provided 0
8. Were too many or too less questions included in the pretest?
   Too many questions 11, Enough 12, Little short of enough 0, Very less questions 0
9. How would you rate your competence in doing domain testing?
   Excellent 1, Good 7, Fair 11, Poor 4
10. What was the most useful part of today's session?
    General comments were that the reference materials were very helpful in solving the test. The learners also liked the fact that there was an introductory lecture before the pretest.
11. What did you like best about today's session?
    While some commented that the pretest was the best part of the training session, others commented that the introductory lecture was the best part.
12. What was the least useful part of today's session?
    Some commented that the least useful part of the day was taking the pretest; others complained that the pretest was simply too lengthy. A couple of learners felt that everything was useful. One learner commented that he was too confused to know what was useful and what was not.
13. What other information would you like to see added to today's lecture and test?
    Some commented that they would like to see more examples added to the reference materials, while others said that a longer introductory lecture would have been nice.
14. Would you like to suggest specific improvements to today's lecture?
    Some commented that it would be nice to have demonstrations of some more examples during the lecture, which might help during the pretest.
15. Would you like to suggest specific improvements to the pretest?
    Generally, the comments were that the number of questions on the test should be reduced.


Table 6.12: Day 2 Questionnaire Responses

Questionnaire Day 2

1. How would you rate the overall clarity of today's lecture?
   Excellent 14, Good 9, Fair 0, Poor 0
2. How would you rate the overall clarity of today's exercises?
   Excellent 8, Good 15, Fair 0, Poor 0
3. How would you rate your overall satisfaction with today's lecture?
   Excellent 11, Good 12, Fair 0, Poor 0
4. How would you rate your overall satisfaction with today's exercises?
   Excellent 7, Good 15, Fair 1, Poor 0
5. How would you rate today's session in terms of how interesting it was?
   Excellent 9, Good 12, Fair 2, Poor 0
6. Rate your confidence when solving the exercises.
   Excellent 5, Good 18, Fair 0, Poor 0
7. Was enough feedback provided for the exercises?
   More than enough 6, Enough 16, Little short of enough 1, Very less 0
8. Was enough time given to solve the exercises?
   More than enough 3, Enough 16, Little short of enough 4, Very less time provided 0
9. What was the most useful part of today's session?
   While some commented that they best liked the exercises and examples, others specifically mentioned that they found doing equivalence class analysis the best part of the training session. Some also mentioned that they enjoyed learning to do domain testing on non-numbers. A few commented that they found the lecture very informative.
10. What did you like best about today's session?
    While some commented that the exercises and examples were the best part, others liked the fact that examples and exercises were alternated. Some also commented that they loved the step-by-step approach to doing equivalence class analysis.
11. What was the least useful part of today's session?
    While most did not answer this question, some commented that everything was useful. Two thought that the equivalence class analysis of non-numbers was least useful to them, and one learner thought the lecture slides were least useful.
12. What other information would you like to see added to today's lecture?
    While most did not answer this question, some commented that they would not like to see anything added. One learner commented that she would like more lecture time.
13. What other information would you like to see added to today's exercises?
    Some commented that it would be nice to have more explanation of how to build the equivalence class table.
14. Would you like to suggest specific improvements to today's lecture?
    While most did not answer this question or just said none, some commented that the lecture was too long.
15. Would you like to suggest specific improvements to today's exercises?
    While most did not answer this question or just said none, a few learners requested more time to solve the exercises.


Table 6.13: Day 3 Questionnaire Responses

Questionnaire Day 3

1. How would you rate the overall clarity of today's lecture?
   Excellent 10, Good 12, Fair 1, Poor 0
2. How would you rate the overall clarity of today's exercises?
   Excellent 9, Good 14, Fair 0, Poor 0
3. How would you rate your overall satisfaction with today's lecture?
   Excellent 8, Good 14, Fair 1, Poor 0
4. How would you rate your overall satisfaction with today's exercises?
   Excellent 8, Good 13, Fair 2, Poor 0
5. How would you rate today's session in terms of how interesting it was?
   Excellent 9, Good 14, Fair 0, Poor 0
6. Rate your confidence when solving the exercises.
   Excellent 5, Good 15, Fair 3, Poor 0
7. Was enough feedback provided for the exercises?
   More than enough 3, Enough 20, Little short of enough 0, Very less 0
8. Was enough time given to solve the exercises?
   More than enough 2, Enough 14, Little short of enough 3, Very less time provided 4
9. What was the most useful part of today's session?
   While most of the learners commented that they best liked the hands-on experience, as they got to test real computer applications, others commented that they liked the multidimensional analysis of variables.
10. What did you like best about today's session?
    Most of the learners commented that they liked the exercises and liked the fact that the exercises involved functions of real computer applications.
11. What was the least useful part of today's session?
    While most did not answer this question, some commented that everything was useful. One thought the slides were least useful.
12. What other information would you like to see added to today's lecture?
    While most did not answer this question, two learners suggested adding more time and examples to the lecture.
13. What other information would you like to see added to today's exercises?
    While most did not answer this question, some commented that everything was fine. However, at least two learners suggested giving more time to solve the exercises.
14. Would you like to suggest specific improvements to today's lecture?
    While most did not answer this question, some commented that everything was fine. However, one learner suggested that the examples required more explanation for the corresponding exercises to be solved successfully.
15. Would you like to suggest specific improvements to today's exercises?
    While most did not answer this question or just said none, a few learners suggested that the number of exercises be reduced.


Table 6.14: Day 4 Questionnaire Responses

Questionnaire Day 4

1. How would you rate the overall clarity of today's lecture?
   Excellent 11, Good 12, Fair 0, Poor 0
2. How would you rate the overall clarity of today's exercises?
   Excellent 9, Good 13, Fair 1, Poor 0
3. How would you rate your overall satisfaction with today's lecture?
   Excellent 11, Good 12, Fair 0, Poor 0
4. How would you rate your overall satisfaction with today's exercises?
   Excellent 10, Good 12, Fair 1, Poor 0
5. How would you rate today's session in terms of how interesting it was?
   Excellent 9, Good 14, Fair 0, Poor 0
6. Rate your confidence when solving the exercises.
   Excellent 4, Good 19, Fair 0, Poor 0
7. Was enough feedback provided for the exercises?
   More than enough 5, Enough 18, Little short of enough 0, Very less 0
8. Was enough time given to solve the exercises?
   More than enough 2, Enough 15, Little short of enough 5, Very less time provided 1
9. What was the most useful part of today's session?
   While most of the learners commented that they best liked the exercises, a few others commented that they best liked learning to do all-pairs combination and building the all-pairs table.
10. What did you like best about today's session?
    Most of the learners commented that they liked building the all-pairs table and appreciated the fact that a lot of time was spent teaching how to build it and that enough feedback was provided. Some also commented that they best liked learning how to identify dependent variables and analyze the dependency relationships. One learner also commented that she liked the fact that there was a lot of student interaction.
11. What was the least useful part of today's session?
    While most refrained from answering this question, some commented that everything was useful. One thought the slides were least useful, and another thought that it took forever to build the all-pairs table.
12. What other information would you like to see added to today's lecture?
    While most did not answer this question, others just commented none.
13. What other information would you like to see added to today's exercises?
    While most did not answer this question, others just commented none.
14. Would you like to suggest specific improvements to today's lecture?
    While most did not answer this question, some commented that everything was fine.
15. Would you like to suggest specific improvements to today's exercises?
    While most did not answer this question or just said none, a few learners commented that more time needs to be provided to solve the exercises.


Table 6.15: Day 5 Questionnaire Responses

Questionnaire Day 5

1. How would you rate the overall training in terms of how interesting it was?
   Excellent 15, Good 7, Fair 1, Poor 0
2. How would you rate your overall satisfaction with the training?
   Excellent 14, Good 9, Fair 0, Poor 0
3. How would you rate the overall clarity of the posttest?
   Excellent 14, Good 9, Fair 0, Poor 0
4. How would you rate your overall satisfaction with the posttest?
   Excellent 11, Good 11, Fair 1, Poor 0
5. Rate your confidence when answering the questions on the posttest.
   Excellent 8, Good 15, Fair 0, Poor 0
6. Was enough time given to solve the posttests?
   More than enough 3, Enough 15, Little short of enough 5, Very less time provided 0
7. Were too many or too less questions included in the posttests?
   Too many questions 4, Enough 18, Little short of enough 1, Very less questions 0
8. How would you rate your competence in doing domain testing?
   Excellent 6, Good 16, Fair 1, Poor 0
9. How likely is it that you'll recommend this training to somebody else?
   Very likely 13, Likely 10, Less likely 0, Not likely 0
10. What was the most useful part of this training?
    Some learners commented that they found the training very informative and interactive. Others commented that they best liked the exercises and examples.
11. What did you like best about the training?
    Most commented that they best liked the exercises and examples. Several others said that they learned a great deal about a testing strategy that was new to them in just five days. One learner commented that the best part of the training was that he got paid for it. A few commented that the hands-on experience during the training was the best part.
12. What was the least useful part of the training?
    While most refrained from answering this question, some commented that everything was useful. One thought the slides were least useful, and another thought that it took forever to build the all-pairs table.
13. What other information would you like to see added to or deleted from today's tests?
    While most did not answer this question or just commented none, one learner said that either more time should be provided to solve the tests or some questions should be deleted.
14. Would you like to suggest specific improvements to the posttests?
    While some did not answer this question, some commented that everything was fine. Several learners commented that fewer questions on the paper-based posttest and more on the computer-based performance test would be desirable. One learner commented that the training should be spread over more days and that learners should be allowed to take the training materials home. A few commented that the paper-based test was too time consuming. Just one learner said that the difficulty level of the test questions should be increased.
15. Would you like to suggest specific improvements to the training?
    While most did not answer this question or just said none, a few learners commented that more time needs to be provided to solve the exercises and tests. One learner specifically mentioned that an overview of all software testing paradigms should be provided, explaining where the domain testing methodology stands among them.

Chart 6.04 graphically describes the answers to the closed-ended questions on the Day 5 questionnaire with the help of several pie charts.


Chart 6.04: Final Day (5) Questionnaire Responses

[Nine pie charts, one per closed-ended question (Q1 through Q9) of the Day 5 questionnaire, showing the percentage distribution of the responses tabulated in Table 6.15. For example, Q1 (how interesting the training was): Excellent 66%, Good 30%, Fair 4%, Poor 0%.]

Chapter 7: Conclusion
7.01 Domain Testing Training: Where Does It Stand?
After evaluating the tests, I observed both plus points and minus points in the learners' performance. I evaluated the paper-based tests; as the results presented in the previous chapter show, the learners' performance improved greatly from the pretest to the posttest. But when it came to testing a real computer-based application, while the learners did some things very well, they failed in others. The performance tests were evaluated by Dr. Cem Kaner, James Bach and Pat McGee, who noted pluses and minuses in the learners' performance and, by extension, in the training material. According to these three evaluators, the learners' performance is not comparable to the standard of a tester in a job interview who has one year's experience and who considers herself reasonably good at domain testing. The detailed evaluation reports of the performance tests are available in Appendices X, Y and Z, respectively.

7.01.01 Pluses
Kaner (2003) cited some plus points of the learners' performance in the performance test for the page setup feature of Microsoft's PowerPoint application, which are mentioned below: These students' tables, especially the specific identification of dimensions along which data entered into the variable can vary, look unusually sophisticated for students who have no prior test experience or education and only 15 hours of instruction. The 23 students' analyses were remarkably consistent. Their tables were structured the same way and almost all identified the same dimensions. For example, for the Page Width variable (a floating point field that can range from 1 to 56 inches), students consistently identified three dimensions: page width (in inches), number of

characters that can be entered into the field (from 0 to 32) and the allowed characters (digits and decimal point). (p. 4)

Kaner (2004) mentioned the following plus points about my instructional materials: In my opinion, the instructional materials developed by Padmanabhan are consistent with the field's most common presentation style for domain testing. She took their approach, did a more thorough job of presenting it to students, and obtained reasonable results when she asked questions that involved straightforward application of what she had taught. The specific results could certainly be improved on, but I don't think that those problems are at the heart of the learning problems faced by the subjects when they tried to transfer their knowledge to the performance test. (p. 10)

McGee cited the following positive point: They did well on the outside boundaries of each dimension, since these could be tested independently. They did well on the enumerated variables, correctly identifying each value as something they needed to test. (Appendix Z: Performance Tests Evaluation Report by Pat McGee, p. 2)

Given the analysis that they did, all of the subjects generated all-pairs tables that were completely or mostly correct. One subject got the first three iterations correct, then completely messed up the fourth iteration. Another generated a table that was completely wrong, but the error was probably a bad copy-and-paste for the first variable. (For that variable, all values listed were the same.) Making a very reasonable correction to this answer would lead to it being completely correct. (Appendix Z: Performance Tests Evaluation Report by Pat McGee, p. 3)


7.01.02 Minuses
Kaner (2003) also commented on the things the students failed at in the performance test. The students were also remarkably consistent in what they missed. Examples: (a) If you enter a sufficiently large paper size, PowerPoint warns that the current page size exceeds the printable area of the paper in the printer. No one identified a boundary related to this message. (b) When you change page dimensions, the display of slides on the screen and their font sizes (defaults and existing text on existing slides) change. No one appeared to notice this. And (c) change the page size to a large page and try a print preview. You only see part of the page. (p. 4)

Bach mentions the following problem with some of the risks identified by the students: Some of the risks identified by the students indicate a lack of insight about the nature of the technology under test. For instance, student #1 suggested using characters that are on either side of the ASCII code range for numeric digits. That kind of test made sense in 1985, but it seems unlikely that the programmers of PowerPoint are still writing their own input filter routines based on ASCII code signposts in an IF statement. Modern languages provide more sophisticated libraries and constructs to perform filtering on input. I think a much better test would be to paste in every possible character code. This is fairly easy to do and would trigger many more kinds of faults in the filtering code. (Appendix Y: Performance Tests Evaluation Report by James Bach, p. 4)
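Bach's suggested test, generating one input per character code rather than probing only the neighbours of the ASCII digit range, can be sketched as follows. The code-point range, the sample valid value, and the way each character is embedded are all illustrative assumptions, not part of Bach's report:

```python
# Sketch of Bach's suggested test: build one candidate input per character
# code instead of probing only the characters adjacent to ASCII digits.
# The range, the sample valid value ("8.5"), and the embedding strategy
# are illustrative assumptions.
def character_code_inputs(valid_value="8.5", limit=0x250):
    inputs = []
    for code in range(limit):  # e.g. the Latin blocks; extend toward all of Unicode if the harness allows
        ch = chr(code)
        inputs.append(valid_value + ch)  # append each character to an otherwise-valid value
    return inputs

tests = character_code_inputs()
print(len(tests))  # one test input per character code
```

Such a sweep is cheap to generate and, as Bach argues, exercises the input-filtering code far more broadly than a pair of hand-picked ASCII neighbours.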

McGee commented that the students failed to achieve higher-order learning: I believe that the problem presented was somewhat different from the simple cases presented in the training. The problem presented had four inter-related controls which basically represented a 2-D space with special

points. The material presented in training was mostly about 1-D spaces. So, in many respects, this problem was a good test in that it required performance at level 6 of Bloom's taxonomy. Unfortunately, none of the subjects made this conceptual leap. They tried to treat the problem as if it were two independent 1-D spaces. This led them to propose tests that I thought were not very powerful. (Appendix Z: Performance Tests Evaluation Report by Pat McGee, pp. 1-2)

7.01.03 Final Remarks


Kaner (2003) made some final comments about my experiment: Padmanabhan provided her students with detailed procedural documentation and checklists. From the consistency of the students' results, I infer that students followed the procedural documentation rather than relying on their own analyses. It's interesting to note that, assuming they are following the procedures most applicable to the task at hand, the students not only include what the procedures appear to tell them to include, but they also miss the issues not addressed by the procedures. In other words, the students aren't doing a risk analysis or a dimensional analysis; they are following a detailed set of instructions for how to produce something that looks like a risk analysis or dimensional analysis. I think we'll find Padmanabhan's materials (especially the practice exercises) useful for bringing students to a common procedural baseline. However, we have more work to do to bring students' understanding of domain testing up to Bloom's Level 6 (Evaluation) or Gagne's (1996) development of cognitive strategies. (pp. 4-5)

Bach made the following final comments: I expect more insight, product knowledge, and imagination from a serious tester who had more than a few months of experience working for a company that cared about doing good testing. So, I would not say that this

work is strong evidence that the students have been brought to an equivalent of a tester with 1 or 2 years experience. (Appendix Y: Performance Tests Evaluation Report by James Bach, p. 4)

McGee made the following final comments: Overall, I did not get the impression that any of these subjects understood the material well enough to apply it to a new situation. I believe that they mostly could apply these techniques to situations that were very similar to the examples they had been trained on. In terms of Bloom's Taxonomy, I believe they learned the material mostly at level 3 (Application). (Appendix Z: Performance Tests Evaluation Report by Pat McGee, p. 4)

Kaner made the following concluding comments in his evaluation report: Padmanabhan's course takes the approach that I was using (and that is common in the literature and in other instructors' courses) to a higher level. Her teaching is more thorough, has more examples, and more formative assessment. The lesson that I take from her results is not that she didn't do a good enough job, but that failure despite the acceptable job that she did indicates that we need to rethink the teaching strategy in common use. I think that the modern literature on mathematics education, focusing on how to build higher-order understanding rather than procedural knowledge, can provide a good foundation for the next generation of course (and course-related experimentation), but I think that's the subject for another thesis. (Appendix X: Performance Tests Evaluation Report by Dr. Cem Kaner, p. 12)


Bibliography
Adrion, R. W., Branstad, M. A., & Cherniavsky, J. C. (1982, June). Validation, verification, and testing of computer software. ACM Computing Surveys, 14(2), 159-192.

Ammann, P., & Offutt, J. (1994, June). Using formal methods to derive test frames in category-partition testing. IEEE Computer Society, 69-80.

Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (Eds.). (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Addison Wesley Longman.

Bailey, M. A., Moyers, T. E., & Ntafos, S. (1995). An application of random software testing. IEEE Computer Society, 1098-1102. Retrieved October 10, 2003, from http://ieeexplore.ieee.org

Beizer, B. (1995). Black-box testing: Techniques for functional testing of software and systems. United States: John Wiley & Sons.

Bender, R. (2001). How do you know when you are done testing? Retrieved February 20, 2004, from http://jisqa.org/Presentations/2002%20Presentations/BENDER-D.pdf

Binder, R. V. (1994, September). Design for testability in object-oriented systems. Communications of the ACM, 37(9), 87-101.

Binder, R. V. (1999). Testing object oriented systems: Models, patterns and tools. Boston, MA: Addison-Wesley Longman.

Bloom, B. S. (1956). Taxonomy of educational objectives: Handbook 1, cognitive domain. New York: Longman.

Bloom, B. S., Hastings, T. J., & Madaus, G. F. (1971). Handbook on formative and summative evaluation of student learning. United States: McGraw-Hill.


Boland, P. J., Singh, H., & Cukic, B. (2003, January). Comparing partition and random testing via majorization and Schur functions. IEEE Transactions on Software Engineering, 29(1), 88-94.

Bolton, M. (2004). Pairwise testing. Retrieved February 29, 2004, from http://www.developsense.com/testing/PairwiseTesting.html

Boyer, R. S., Elspas, B., & Levitt, K. (1975, June). SELECT: A formal system for testing and debugging programs by symbolic execution. ACM SIGPLAN Notices, 10(6), 234-245.

Burnstein, I. (2003). Practical software testing. New York: Springer-Verlag.

Chan, F. T., Mak, I. K., Chen, T. Y., & Shen, S. M. (1997). On some properties of the optimally refined proportional sampling strategy. Computer Journal, 40(4), 194-199. Retrieved October 25, 2003, from http://www3.oup.co.uk/computer_journal/subs/Volume_40/Issue_04/Chan.pdf

Chan, F. T., Mak, I. K., Chen, T. Y., & Shen, S. M. (1998). On the effectiveness of the optimally refined proportional sampling testing strategy. IEEE Computer Society, 1-10. Retrieved October 10, 2003, from http://ieeexplore.ieee.org

Chen, T. Y., Leung, H., & Yu, Y. T. (1995). On the analysis of subdomain testing strategies. IEEE Computer Society, 218-224. Retrieved October 10, 2003, from http://ieeexplore.ieee.org

Chen, T. Y., Poon, P. L., Tang, S. F., & Yu, Y. T. (2000). White on black: A white-box-oriented approach for selecting black-box generated test cases. IEEE Computer Society, 275-284. Retrieved October 10, 2003, from http://ieeexplore.ieee.org

Chen, T. Y., Poon, P. L., & Tse, T. H. (2003, July). Choice relation framework for supporting category-partition test case generation. IEEE Transactions on Software Engineering, 29(7), 577-593.


Chen, T. Y., Tse, T. H., & Zhou, Z. (2001). Fault-based testing in the absence of an oracle. IEEE Computer Society, 172-178. Retrieved September 23, 2003, from http://ieeexplore.ieee.org
Chen, T. Y., Wong, P. K., & Yu, Y. T. (1999). Integrating approximation methods with the generalized proportional sampling strategy. IEEE Computer Society, 598-605. Retrieved November 6, 2003, from http://ieeexplore.ieee.org
Chen, T. Y., & Yu, Y. T. (1994, December). On the relationship between partition and random testing. IEEE Transactions on Software Engineering, 20(12), 977-980.
Chen, T. Y., & Yu, Y. T. (1996a). Constraints for safe partition testing strategies. Computer Journal, 39(7), 619-625. Retrieved September 30, 2003, from http://ieeexplore.ieee.org
Chen, T. Y., & Yu, Y. T. (1996b). More on the e-measure of subdomain testing strategies. IEEE Computer Society, 167-174. Retrieved September 30, 2003, from http://ieeexplore.ieee.org
Chen, T. Y., & Yu, Y. T. (1996c). On the expected number of failures detected by subdomain testing and random testing. IEEE Transactions on Software Engineering, 22(2), 109-119. Retrieved September 30, 2003, from http://ieeexplore.ieee.org
Chen, T. Y., & Yu, Y. T. (1997). Optimal improvement of the lower bound performance of partition testing strategies. IEE Proc.-Softw. Eng., 144(5-6), 271-278.
Chen, T. Y., & Yu, Y. T. (1998). On the test allocations for the best lower bound performance of partition testing. Retrieved September 30, 2003, from http://ieeexplore.ieee.org
Clarke, L. A. (1976, September). A system to generate test data and symbolically execute programs. IEEE Transactions on Software Engineering, SE-2(3), 215-222.

Clarke, L. A., Hassell, J., & Richardson, D. (1982, July). A close look at domain testing. IEEE Transactions on Software Engineering, SE-8(4), 380-390.
Cohen, D. M., Dalal, S. R., Parelius, J., & Patton, G. C. (1996). The combinatorial design approach to automatic test generation. Retrieved December 7, 2003, from http://ieeexplore.ieee.org
Cohen, M. B., Colbourn, C. J., Gibbons, P. B., & Mugridge, W. B. (2003). Constructing test suites for interaction testing. IEEE Computer Society, 38-48. Retrieved December 23, 2003, from http://ieeexplore.ieee.org
DeMillo, R. A., Lipton, R. J., & Sayward, F. G. (1978, April). Hints on test data selection: Help for the practicing programmer. Computer, 11(4), 34-41.
Dick, W., & Carey, L. (1985). The systematic design of instruction (2nd ed.). United States: Scott, Foresman and Company.
Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th ed.). New York: Addison-Wesley.
Driscoll, M. P. (2000). Psychology of learning for instruction (2nd ed.). Needham Heights, MA: Allyn and Bacon.
Duran, J. W., & Ntafos, S. (1981). A report on random testing. IEEE Computer Society, 179-183. Retrieved December 13, 2003, from http://ieeexplore.ieee.org
Elmendorf, W. R. (1973, November). Cause-effect graphs in functional testing. TR_00.2487. Poughkeepsie, NY: IBM.
Elmendorf, W. R. (1974). Functional analysis using cause-effect graphs. Poughkeepsie, NY: IBM.
Elmendorf, W. R. (1975). Functional testing of software using cause-effect graphs. Poughkeepsie, NY: IBM.
Fenrich, P. (1997). Practical guidelines for creating instructional multimedia applications. Fort Worth, TX: Harcourt Brace.
Ferguson, R., & Korel, B. (1996). The chaining approach for software test data generation. ACM Transactions on Software Engineering and Methodology, 5(1), 63-86.

Fosdick, L. D., & Osterweil, L. J. (1976). Data flow analysis in software reliability. Computing Surveys, 8(3), 305-330.
Frankl, P., Hamlet, D., Littlewood, B., & Strigini, L. (1997). Choosing a testing method to deliver reliability. Paper presented at ICSE '97, Boston, MA, pp. 68-78.
Frankl, P., Hamlet, D., Littlewood, B., & Strigini, L. (1998). Evaluating testing methods by delivered reliability. Retrieved November 21, 2003, from http://citeseer.nj.nec.com/frankl98evaluating.html
Frankl, P. G., & Weiss, S. N. (1993). An experimental comparison of the effectiveness of branch testing and data flow testing. Retrieved November 23, 2003, from http://citeseer.nj.nec.com/frankl93experimental.html
Gagne, R. M. (1985). The conditions of learning and theory of instruction (4th ed.). New York: Holt, Rinehart and Winston.
Gagne, R. M., & Medsker, K. L. (1996). The conditions of learning: Training applications. Fort Worth, TX: Harcourt Brace.
Gagne, R. M., Briggs, L. J., & Wager, W. W. (1988). Principles of instructional design (3rd ed.). New York: Holt, Rinehart and Winston.
Gelperin, D., & Hetzel, B. (1988, June). The growth of software testing. Communications of the ACM, 31(6), 687-694.
Gerrard, P., & Thompson, N. (2002). Risk-based e-business testing. Norwood, MA: Artech House.
Goodenough, J. B., & Gerhart, S. L. (1975, June). Toward a theory of test data selection. IEEE Transactions on Software Engineering, 156-173.
Gotlieb, A., Botella, B., & Rueher, M. (1998). Automatic test data generation using constraint solving techniques. Paper presented at ISSTA '98, Clearwater Beach, FL.
Gronlund, N. E. (2000). How to write and use instructional objectives (6th ed.). Upper Saddle River, NJ: Prentice-Hall.
Gutjahr, W. J. (1999). Partition testing vs. random testing: The influence of uncertainty. IEEE Transactions on Software Engineering, 25(5), 661-674.

Hajnal, Á., & Forgács, I. (1998). An applicable test data generation algorithm for domain errors. Paper presented at ISSTA '98, Clearwater Beach, FL.
Hall, P. A. V., & May, J. H. R. (1997, December). Software unit test coverage and adequacy. ACM Computing Surveys, 29(4), 366-427.
Hamlet, D. (1996, September 5). Software component dependability: A subdomain-based theory. Retrieved November 22, 2003, from http://citeseer.nj.nec.com/hamlet96software.html
Hamlet, D. (2000). On subdomains: Testing, profiles, and components. ACM, 71-76.
Hamlet, D., Manson, D., & Woit, D. (2001). Theory of software reliability based on components. Retrieved November 22, 2003, from http://citeseer.nj.nec.com/hamlet01theory.html
Hamlet, D., & Taylor, R. (1990). Partition testing does not inspire confidence. IEEE Transactions on Software Engineering, 16(12), 1402-1411.
Hantler, S. L., & King, J. C. (1976, September). An introduction to proving the correctness of programs. ACM Computing Surveys, 331-353.
Harrell, J. M. (2001). Orthogonal array testing strategy (OATS). Retrieved December 7, 2003, from http://www.cvc.uab.es/shared/teach/a21291/apunts/provaOO/OATS.pdf
Hierons, R. M. (2002, October). Comparing test sets and criteria in the presence of test hypotheses and fault domains. ACM Transactions on Software Engineering and Methodology, 11(4), 427-448.
Howden, W. E. (1976, September). Reliability of the path analysis testing strategy. IEEE Transactions on Software Engineering, 2(3), 208-215.
Howden, W. E. (1977, July). Symbolic testing and the DISSECT symbolic evaluation system. IEEE Transactions on Software Engineering, 4(4), 266-278.
Howden, W. E. (1980a). Functional program testing. IEEE Transactions on Software Engineering, SE-6(2), 162-169.

Howden, W. E. (1980b). Functional testing and design abstractions. The Journal of Systems and Software, 1, 307-313.
Howden, W. E. (1981). Completeness criteria for testing elementary program functions. Paper presented at the 5th International Conference on Software Engineering, pp. 235-243.
Howden, W. E. (1982, June). Validation of scientific programs. ACM Computing Surveys, 14(2), 193-227.
Howden, W. E. (1986, October). A functional approach to program testing and analysis. IEEE Transactions on Software Engineering, SE-12(10), 997-1004.
Howden, W. E. (1989). Validating programs without specifications. ACM, 2-9. Retrieved September 30, 2003, from http://portal.acm.org/
Huang, J. C. (1975, September). An approach to program testing. ACM Computing Surveys, 7(3), 113-128.
Hutcheson, M. L. (2003). Software testing fundamentals: Methods and metrics. Indianapolis, IN: Wiley Publishing.
IEEE Computer Society (2004). Definition of testing. Retrieved January 30, 2004, from http://www.computer.org/certification/guide/TestingDefinition.htm
IEEE Std. 610.12. (1990). IEEE standard glossary of software engineering terminology. Retrieved December 21, 2003, from http://standards.ieee.org/
Intermath Dictionary. (2004a). Definition of average. Retrieved January 12, 2004, from http://www.intermathuga.gatech.edu/dictnary/descript.asp?termID=46
Intermath Dictionary. (2004b). Definition of standard deviation. Retrieved January 12, 2004, from http://www.intermathuga.gatech.edu/dictnary/descript.asp?termid=450
Jeng, B., & Weyuker, E. J. (1989). Some observations on partition testing. ACM SIGSOFT Software Engineering Notes, 14(8), 38-47.

Jeng, B., & Weyuker, E. J. (1994, July). A simplified domain testing strategy. ACM Transactions on Software Engineering and Methodology, 3(3), 254-270.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. New Jersey: Lawrence Erlbaum Associates.
Jorgensen, P. C. (2002). Software testing: A craftsman's approach (2nd ed.). Boca Raton, FL: CRC Press.
Kaner, C. (2002a). Black box software testing: Professional seminar, section 4: The impossibility of complete testing. Retrieved May 23, 2003, from http://www.testingeducation.org/coursenotes/kaner_cem/cm_200204_blackboxtesting/
Kaner, C. (2002b). Risk-based testing and risk-based test management. Professional Seminar on Black Box Software Testing. Retrieved May 23, 2003, from www.testingeducation.org
Kaner, C. (2002c). Combination testing. Professional Seminar on Black Box Software Testing. Retrieved February 23, 2004, from www.testingeducation.org
Kaner, C. (2003). Teaching domain testing: A status report. Retrieved December 2, 2003, from http://www.testingeducation.org/articles/
Kaner, C. (2004). Carts before horses: Using preparatory exercises to motivate lecture material. Retrieved March 4, 2004, from http://testingeducation.org/conference/wtst_page_2004.php
Kaner, C., & Bach, J. (2003). Black box software testing: 2003 commercial edition. Retrieved October 11, 2003, from http://www.testingeducation.org/coursenotes/
Kaner, C., & Bach, J. (2004). Black box software testing: 2004 academic edition. Retrieved February 14, 2004, from http://www.testingeducation.org/k04/index.htm#coursenotes
Kaner, C., Bach, J., & Pettichord, B. (2002). Lessons learned in software testing: A context-driven approach. New York: Wiley Computer.

Kaner, C., Falk, J., & Nguyen, H. Q. (1999). Testing computer software (2nd ed.). New York: John Wiley & Sons.
Koh, L. S., & Liu, M. T. (1994). Test path selection based on effective domains. IEEE Computer Society, 64-71. Retrieved December 23, 2003, from http://ieeexplore.ieee.org
Krathwohl, D. R., Bloom, B. S., & Masia, B. B. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook II: Affective domain. New York: David McKay.
Learning Skills Program. (2003). Bloom's taxonomy. Retrieved May 2, 2003, from http://www.coun.uvic.ca/learn/program/hndouts/bloom.html
Lee, H. K. (1997). Optimization based domain decomposition methods for linear and nonlinear problems. Retrieved December 7, 2003, from http://scholar.lib.vt.edu/theses/available/etd-7497163154/unrestricted/kwon.pdf
Lei, Y., & Tai, K. C. (1988). In-parameter-order: A test generation strategy for pairwise testing. Retrieved December 21, 2003, from http://portal.acm.org/
Leung, H., & Chen, T. Y. (2000). A revisit of the proportional sampling strategy. IEEE Computer Society, 247-254. Retrieved December 13, 2003, from http://ieeexplore.ieee.org
Mayrhauser, A. V., Mraz, R. T., & Walls, J. (1994). Domain based regression testing. IEEE Computer Society, 26-35. Retrieved December 23, 2003, from http://ieeexplore.ieee.org
Mayrhauser, A. V., Mraz, R. T., Walls, J., & Ocken, P. (1994). Domain based testing: Increasing test case reuse. IEEE Computer Society, 1-16. Retrieved December 13, 2003, from http://ieeexplore.ieee.org
Mayrhauser, A. V., Ocken, P., & Mraz, R. (1996). On domain models for system testing. IEEE Computer Society, 114-123. Retrieved November 15, 2003, from http://ieeexplore.ieee.org

Mayrhauser, A. V., Walls, J., & Mraz, R. T. (1994). Sleuth: A domain based testing tool. IEEE Computer Society, 840-849. Retrieved November 23, 2003, from http://ieeexplore.ieee.org
Miller, E. F., & Melton, R. A. (1975). Automated generation of testcase datasets. Paper presented at the 1975 International Conference on Reliable Software, pp. 51-58.
Morrison, G. R., Ross, S. M., & Kemp, J. E. (2004). Designing effective instruction (4th ed.). New York: John Wiley and Sons.
Myers, G. J. (1979). The art of software testing. United States: John Wiley and Sons.
Ntafos, S. (1998). On random and partition testing. Paper presented at ISSTA '98, Clearwater Beach, FL.
Ntafos, S. C. (2001). On comparisons of random, partition, and proportional partition testing. IEEE Transactions on Software Engineering, 27(10), 949-960.
Nursimulu, K., & Probert, R. L. (1995). Cause-effect graphing analysis and validation of requirements. Retrieved December 2, 2003, from http://portal.acm.org/
Offutt, A. J., Jin, Z., & Pan, J. (1997). The dynamic domain reduction procedure for test data generation. Retrieved November 25, 2003, from http://citeseer.nj.nec.com/282317.html
Ostrand, T. J., & Balcer, M. J. (1988, June). The category-partition method for specifying and generating functional tests. Communications of the ACM, 31(6), 676-686.
Podgurski, A., Masri, W., Yolanda, M., & Wolff, F. G. (1999, July). Estimation of software reliability by stratified sampling. ACM Transactions on Software Engineering and Methodology, 8(3), 263-283.
Podgurski, A., & Yang, C. (1993). Partition testing, stratified sampling, and cluster analysis. ACM, 169-181. Retrieved December 2, 2003, from http://portal.acm.org/

Powell, A. (1998). Experiences with category-partition testing. Retrieved September 2003, from http://www.castellan.com/~amp/bups/bups.html
Pozewaunig, H., & Rauner, R. D. (1999). Support of semantics recovery during code scavenging using repository classification. ACM, 65-72. Retrieved December 2, 2003, from http://portal.acm.org/
Ramamoorthy, C. V., & Ho, S. F. (1975). Testing large software with automated software evaluation systems. ACM SIGPLAN Notices, 10(6), 382-394.
Rapps, S., & Weyuker, E. J. (1982). Data flow analysis techniques for test data selection. ACM, 272-278. Retrieved December 15, 2003, from http://portal.acm.org/
Reid, S. C. (1997). An empirical analysis of equivalence partitioning, boundary-value analysis and random testing. IEEE Computer Society, 64-73. Retrieved November 23, 2003, from http://ieeexplore.ieee.org
Reiser, R. A., & Dempsey, J. V. (2002). Trends and issues in instructional design and technology. Saddle River, NJ: Merrill Prentice Hall.
Richardson, D. J., & Clarke, L. A. (1981). A partition analysis method to increase program reliability. ACM, 244-253. Retrieved November 23, 2003, from http://portal.acm.org/
Richardson, D. J., O'Malley, O., & Tittle, C. (1989). Approaches to specification-based testing. Retrieved November 23, 2003, from http://portal.acm.org/
Rothwell, W. J., & Kazanas, H. C. (1998). Mastering the instructional design process: A systematic approach (2nd ed.). San Francisco: Jossey-Bass/Pfeiffer.
Sayre, K., & Poore, J. H. (2000). Partition testing with usage models. IEEE Computer Society, 42(12), 845-850.
Schroeder, P. J. (2004). Involving testing students in software projects, part II. Retrieved March 4, 2004, from http://testingeducation.org/conference/wtst_page_2004.php

Schroeder, P. J., & Korel, B. (2000). Black-box test reduction using input-output analysis. Retrieved November 23, 2003, from http://portal.acm.org/
Smith, P. L., & Ragan, T. J. (1999). Instructional design (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Srinivasan, R., Gupta, S. K., & Breuer, M. A. (1993). An efficient partitioning strategy for pseudo-exhaustive testing. ACM, 242-248. Retrieved November 23, 2003, from http://portal.acm.org/
Tsoukalas, M. Z., Duran, J. W., & Ntafos, S. C. (1993). On some reliability estimation problems in random and partition testing. IEEE Transactions on Software Engineering, 19(7), 687-697.
Vagoun, T. (1996). Input domain partitioning in software testing. IEEE Computer Society, 261-268. Retrieved November 23, 2003, from http://ieeexplore.ieee.org
Webopedia. (2003). Definition of function. Retrieved December 6, 2003, from http://www.webopedia.com/TERM/f/function.html
Weiss, S. N., & Weyuker, E. J. (1988). An extended domain-based model of software reliability. IEEE Transactions on Software Engineering, 14(10), 1512-1524.
Weyuker, E. J., & Jeng, B. (1991). Analyzing partition testing strategies. IEEE Transactions on Software Engineering, 17(7), 703-711.
Weyuker, E. J., & Ostrand, T. J. (1980, May). Theories of program testing and the application of revealing subdomains. IEEE Transactions on Software Engineering, 6(3), 236-246.
Weyuker, E. J., Weiss, S. N., & Hamlet, D. (1991). Comparison of program testing strategies. ACM, 1-10. Retrieved November 23, 2003, from http://portal.acm.org/
White, L. J. (1984). The evolution of an integrated testing environment by the domain testing strategy. ACM, 69-74. Retrieved November 23, 2003, from http://portal.acm.org/

White, L. J., & Cohen, E. I. (1980, May). A domain strategy for computer program testing. IEEE Transactions on Software Engineering, 6(3), 247-257.
White, L. J., & Sahay, P. N. (1985). Experiments determining best-paths for testing computer program predicates. ACM, 238-243. Retrieved November 23, 2003, from http://portal.acm.org/
Whittaker, J. A., & Jorgensen, P. C. (2002). How to break software: A practical guide to testing. Boston: Addison Wesley.
Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and practical guidelines (2nd ed.). New York: Longman.
Wu, Y. (2001). Category partition testing. Retrieved October 14, 2003, from http://www.isse.gmu.edu/~wuye/classes/637/lecture/CategoryPartitionTesting.pdf
Zeil, S. J. (1984). Perturbation testing for computation errors. Paper presented at the 7th International Conference on Software Engineering, Orlando, FL, pp. 257-265.
Zeil, S. J., Afifi, F. H., & White, L. J. (1992a, October). Detection of linear errors via domain testing. ACM Transactions on Software Engineering and Methodology, 1(4), 422-451.
Zeil, S. J., Afifi, F. H., & White, L. J. (1992b, October). Testing for linear errors in nonlinear programs. ACM, 81-91. Retrieved November 23, 2003, from http://portal.acm.org/
Zeil, S. J., & White, L. J. (1981). Sufficient test sets for path analysis testing strategies. Retrieved November 22, 2003, from http://portal.acm.org/citation.cfm?id=802531&jmp=abstract&dl=GUIDE&dl=ACM
Zhao, R., Lyu, M. R., & Min, Y. (2003). Domain testing based on character string predicate. Retrieved September 12, 2003, from http://www.cse.cuhk.edu.hk/~lyu/paper_pdf/ats03.pdf

Zhu, H., Hall, P. A. V., & May, J. H. R. (1997, December). Software unit test coverage and adequacy. ACM Computing Surveys, 29(4), 366-427.


Appendices

Appendix A: Reference Material #1 for Domain Testing Training (Reference Material 1: Identifying Variables of a Function and Their Characteristics in Training Material)

Domain Testing: Identifying Variables of a Function and Their Characteristics

Phase I: Given a function, identify variables and their characteristics

To conduct domain testing you will need either a written or verbal specification (which may include code), or the product or a prototype. If the product is available, you can do exploratory testing on it to familiarize yourself with the product and the domain. The analysis to be done in this phase requires the following steps:

1. Identify variables.
2. Categorize variables.
3. Explore the functionality of the function with respect to the identified variables.
4. Identify and analyze the values taken by each variable.
5. Determine whether the identified variables are one-dimensional or multidimensional.
6. Determine whether the identified variables are of range type or enumerated type.
7. Determine whether each of the identified variables is linearizable or not.

Step 1: Identify variables

What is a variable? In general, a variable is something that varies or is prone to variation. A variable:
- is a quantity capable of assuming a set of values.
- is a symbol representing a quantity. For example, in the expression a^2 + b^2 = c^2, a, b, and c are variables.
- is the mathematical model for a characteristic observed or measured in an experiment, survey, or observational study.

In the computing world, a variable is the name given to a space in memory that stores a value of some data type. Variable names can be conveniently used to access the values stored in memory.

Step 2: Categorize variables

Now that we have identified some variables, let us categorize them. What are the different kinds of variables?
- Input
- Output

Sowmya Padmanabhan, 2003

What is an input variable? An input variable is a variable whose value serves as an input to a function. The value of an input variable is supplied by an external entity, such as a user or some other program.

What is an output variable? An output variable is a variable that stores the result of execution of the function to which it belongs. In other words, an output variable's value indicates what happened as a result of executing the function. For example, in z = x + y, x and y are input variables and z is an output variable storing the value of x plus y. Not all functions have explicitly visible output variables; however, there is usually at least one variable in the internal code that serves as an output variable. In the technique of domain testing, we are only concerned with explicit output variables.

Step 3: Explore the functionality of the function with respect to the identified variables

It is a good idea to explore the functionality of the function at hand and summarize the functionality you observe with respect to the identified variables. If a product is available, then exploring its functions is easier. However, in some cases you are given only a prototype and a set of written or verbal specifications about the variables of the function. While this is an adequate way to conduct domain testing, it is important to realize that the specification may not match the actual prototype.

Step 4: Identify and analyze the values taken by each variable

The next step is to identify the values taken by each of the identified variables. For every variable, answer the following three questions:
1. What kind of values does the variable take?
2. What are the characteristics of these values?
3. Can you map the type of data it takes to standard data types like int, double, float, string, char, etc.?
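The input/output distinction can be illustrated with a minimal sketch (the function name `add` is ours, chosen to match the z = x + y example; it is not part of the training material):

```python
def add(x, y):
    """x and y are input variables; z is an explicit output variable."""
    z = x + y  # the result of executing the function is stored in z
    return z

print(add(2, 3))  # prints 5, the value of the output variable z
```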


Step 5: Determine whether the identified variables are one-dimensional or multidimensional, and identify the dimensions

What does dimension mean in domain testing? In simple words, a dimension of a variable is a way of looking at the variable from a particular angle. For example, suppose we are given a bunch of numbers and are told to represent or classify them in terms of some property. One way to do this is to consider the "bigness" property and arrange them from the number with the least value to the number with the biggest value. You have probably already guessed what our dimension is: it is the property "bigness". We could also classify these numbers as integers and reals, with everything that is just an integer in the integer category and the other decimal numbers in the real category. What is our dimension of classification here? Correct, it is "data type".

What are multidimensional variables? A numeric field, for example, is multidimensional. The length of the input value in terms of the number of constituent characters, the type of the constituent characters, and the size of the input value are all dimensions of the field; that is, they are all different ways to look at the same variable. Hence, a numeric field along the "length" dimension can be considered to be of integer data type, whereas the same field can be considered to be of integer or floating-point data type along the "type of characters" and "size" dimensions, depending on the kind of values the field takes. Similarly, a string or text field is multidimensional: the length and the type of the constituent characters of the input value are its dimensions. A string or text field can be viewed as integer data type along the "length" dimension, whereas it can be viewed as string along the "type of characters" dimension.

So, just a bunch of numbers with no uniformity (like only integers, only decimals, only numbers greater than 1000, or only numbers less than -23) will be multidimensional, and depending on what is required in your analysis, you might pick one or more dimensions that are relevant and base your analysis on them.
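As a rough illustration (the helper name and dimension labels are our own, not from the training material), a tester can examine one input value of a text field along two of its dimensions, its length and the type of its constituent characters:

```python
def dimensions(value):
    """Describe one input value of a text field along two dimensions."""
    return {
        "length": len(value),          # "length" dimension: an integer
        "all_digits": value.isdigit(), # "type of characters" dimension
    }

print(dimensions("1234"))  # {'length': 4, 'all_digits': True}
print(dimensions("ab"))    # {'length': 2, 'all_digits': False}
```

Each dimension can then be analyzed separately when building equivalence classes.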


Step 6: Determine whether the variables are range type or enumerated type

We can categorize variables into two categories based on the kind of values they take:

What is a range type variable? Variables that can take a range of values are said to be of the range type.

What is an enumerated type variable? Variables that can take only a fixed set of values are said to be of the enumerated type.

Step 7: Determine whether each of the identified variables is linearizable or not

What is a linearizable variable? A variable is said to be linearizable if its possible values can be represented on a number line. For example, say we have a variable x of real/float/double data type such that its possible valid values lie between -0.01 and -0.001 (inclusive). Then any value between these two end points, including the end points (-0.006, -0.009, etc.), is a valid value for variable x. We can then represent the values of x on a number line with -0.01 on the left end and -0.001 on the right end. The choice of directions is in accordance with the general rule for representing quantities on a number line, in which quantities with lesser value go on the left and those with greater value go on the right.

    -0.01 <---------------- x ----------------> -0.001

What is the relation between linearizable and range type variables?

Variables that are of the range type are usually linearizable whereas enumerated types are usually not.


Appendix B: Reference Material #2 for Domain Testing Training (Reference Material 2: Equivalence Class and Boundary Value Analysis in Training Material)

Domain Testing: Equivalence Class and Boundary Value Analysis

Phase II: Given variables of a function, conduct equivalence class analysis on them

The analysis to be done in this phase requires the following steps:

1. Review the analysis completed in Phase I.
2. Represent the variable(s) as a mathematical range expression.
3. Identify the input domain for the variable.
4. Determine the risks for each variable's dimension(s).
5. Set up your equivalence class table.
6. Partition the input domain into sub-domains based on the identified risks by performing equivalence class analysis.
7. Apply boundary value analysis, wherever applicable, to pick the best test cases from each sub-domain.

Step 1: Review the analysis done in the previous phase

It is a good idea to review the analysis done in Phase I before proceeding further with this phase.

Step 2: Represent the variable(s) as a mathematical range expression

For each of the range type variables identified in Phase I, represent the variable as a mathematical range expression. For example, if a variable x lies in the range of v1 and v2, then x can be represented as follows: v1 <= x <= v2

Step 3: Identify the input domain for the variable

What is an input domain? The input domain is the set of all possible values for the variable.

Step 4: Determine the risks for each variable's dimension(s)

What is a risk? A risk is an assertion about a function and its corresponding variables that states a way in which the function could fail. To identify risks, it is helpful to think of the ways the function or program might mess up the values taken by a variable, causing the function to fail. It is also helpful to list the associated risks for each dimension of all identified variables.

Step 5: Set up the equivalence class table

Before continuing with the analysis of domain testing, it is convenient to construct an equivalence class analysis table. A typical equivalence class table looks like the following:

Variable | Equivalence classes | Test case (Best representatives) | Risks | Notes

In case of multi-dimensional variables, the above table has an extra column for dimension. The table then looks like the following:

Variable | Dimension | Equivalence classes | Test case (Best representatives) | Risks | Notes
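In code, such a table is simply a list of records, one per equivalence class. The column values below are our own illustration (an integer field accepting 1-99), not an example from the thesis:

```python
# Rows of a hypothetical equivalence class table for a variable x, 1 <= x <= 99.
rows = [
    {"variable": "x", "equivalence_class": "1 <= x <= 99",
     "test_cases": [1, 99], "risks": "in-range value rejected", "notes": ""},
    {"variable": "x", "equivalence_class": "x < 1",
     "test_cases": [0], "risks": "too-small value accepted", "notes": ""},
    {"variable": "x", "equivalence_class": "x > 99",
     "test_cases": [100], "risks": "too-large value accepted", "notes": ""},
]

for row in rows:  # print the table, one class per line
    print(row["variable"], "|", row["equivalence_class"], "|", row["test_cases"])
```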

Step 6: Partition the input domain into sub-domains based on the identified risks

What is a sub-domain? Sub-domains in our context can also be called sub-sets, categories, or equivalence classes.

How to partition?

Partition the input domain into different sub-domains or subsets based on the identified risks by performing equivalence class analysis. What is an equivalence class? An equivalence class of a variable consists of a set of elements, called members of that class that are values belonging to the original input domain and have been categorized into this class based on a risk. The function to which the variable belongs fails in a particular way with respect to this characteristic. Thus, all members or elements of an equivalence class are equivalent to each other with respect to some risk.

Why should we do equivalence class analysis? The idea is that all elements within an equivalence class are essentially the same for the purposes of testing the corresponding program/function for a particular risk. The purpose of domain testing is to systematically reduce an enormous set of possible input values for a variable (called input domain) into few manageable sub-domains based on risks. Doing equivalence class analysis enables us to pick the best representatives or test cases from each class, thereby reducing the number of test data drastically. The aim is to alleviate the problem of impossibility of complete testing. Why is it impossible to do complete testing? Complete testing is impossible for several reasons: We cant test all the inputs to the program. We cant test all the combinations of inputs to the program. We cant test all the paths through the program. We cant test for all of the other potential failures, such as those caused by user interface design errors or incomplete requirements analyses. Here is a simple example from the paper on An approach to Program Testing by J. C. Huang:

Sowmya Padmanabhan, 2003

To assert that the output value Z assumes the correct value for all input values of X and Y, we have to test all possible assignment values of X and Y. Assume that X and Y take integer values. If the program is being tested on a computer for which the word size is 32 bits then the total possible values for X and Y, each are 2 32 .Since the combination of X and Y yields Z, we have to test X and Y in combination. The total possible combinations for X and Y are 2 32 x 2 32 Now, if we assume that the program takes about a millisecond to execute once, we will need more than 50 billion years to completely test this program! If testing a program as simple as this can lead to combinatorial explosion and make testing seem impossible, imagine what happens with huge complicated programs having hundreds of variables? This is where domain testing comes to our rescue by helping us systematically reduce an infinitely large input domain to few manageable sub-domains and then helping us choose the best test cases from each sub-domain, thereby reducing out testing effort (time, money, labor, etc) drastically.

Step 7: Apply boundary value analysis, wherever applicable, to pick the best test cases from each sub-domain. Rather than selecting an arbitrary element of an equivalence class as its best representative, boundary-value analysis requires that one or more elements be selected such that each edge (border) of the equivalence class is the subject of a test. Hence, values on the boundary and values just off the boundary (below and/or beyond, whichever is applicable) are selected from an equivalence class as its best representatives, or test cases.

What are the pitfalls of boundary-value analysis? Boundary-value analysis works well only when the program to be tested is a function of several independent variables that represent bounded physical quantities. In other words, boundary-value analysis does not find domain-specific special-condition bugs. For example, merely performing boundary-value analysis on a function called NextDate, which takes a date as input and outputs the date of the next day, will only find bugs corresponding to end-of-month and end-of-year faults, not bugs corresponding to leap-year and February faults. That is why we also need to look at special values (like leap years and February for the NextDate function mentioned above) when looking for best representatives. Boundary-value analysis also cannot, by itself, be used to test dependent variables. As in the date example above, the number of days in February depends on whether the year is a leap year. Hence, we should not just test for boundaries. Instead, we should also try to find dependencies between variables. This is explained in the reference materials corresponding to Phase III: Combination Testing.
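As a sketch of the mechanical part of boundary-value analysis (the special values, such as leap years, still have to be added by hand), a hypothetical helper for a closed integer range might look like this:

```python
def boundary_test_cases(lo, hi):
    """On-boundary and just-off-boundary points for a field where lo <= x <= hi."""
    return {
        "valid":   [lo, hi],            # values on each boundary
        "invalid": [lo - 1, hi + 1],    # values just off each boundary
    }

# Example: the day-of-month field of NextDate for a 31-day month.
print(boundary_test_cases(1, 31))  # {'valid': [1, 31], 'invalid': [0, 32]}
```

Note that this helper knows nothing about February or leap years, which is precisely the pitfall described above.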


How do I determine what members to categorize into an equivalence class? To partition the input domain into sub-domains with equivalence class analysis, it is helpful to use heuristics. What is a heuristic?

1. Of or relating to a usually speculative formulation serving as a guide in the investigation or solution of a problem: "The historian discovers the past by the judicious use of such a heuristic device as the ideal type" (Karl J. Weintraub).
2. Of or constituting an educational method in which learning takes place through discoveries that result from investigations made by the student.
3. Computer Science. Relating to or using a problem-solving technique in which the most appropriate solution of several found by alternative methods is selected at successive stages of a program for use in the next step of the program.

Heuristics do not always work; if they did, they would be called guidelines. Some common heuristics for equivalence class analysis are located in Appendix A: Equivalence Class Analysis Heuristics. Please note that all the examples in the heuristics presented in Appendix A deal with only one dimension of the concerned variable, since the information presented in the examples supports analysis along only one dimension. Examples given during training lectures will describe detailed analysis of multidimensional variables. Also note that the risks identified in the examples corresponding to these heuristics are an incomplete list. Additional risks will be outlined during training lectures, but the ones listed here are the ones most often considered in domain testing.


Appendix C: Reference Material# 3 for Domain Testing Training (Reference Material 3: Combination Testing in Training Material)

Domain Testing: Combination Testing

Phase III: Given variables and the best representatives of each of their equivalence classes, perform combination testing on them. The analysis for this phase requires the following steps:

1. Review the analysis done in the previous phase.
2. Do combination testing.

Step 1: Review the analysis done in the previous phase. It is a good idea to review the analysis done in Phase II before proceeding further with this phase.

Step 2: Do combination testing. Now that we have learned how to choose the best representatives (test cases) from each equivalence class for each variable, we next have to perform combination testing of all the identified variables using these best representatives, in order to test these variables in combination.

Why combine, why not test in isolation? We need to test variables in combination for the simple reason that all the variables interact: they are part of one functional unit and must work together to accomplish what that functional unit is designed for. Hence most of these variables influence each other in some way or another.

How do we perform combination testing? Consider an example of a program (a simple GUI) that has 7 variables with 3, 2, 2, 2, 3, 2 and 2 representatives respectively. The number of all possible test cases is 3x2x2x2x3x2x2 = 288 combinations, which means 288 test cases: one test case corresponding to each combination! If the number of test cases is this large for as few as 7 variables, imagine what happens in a typical program with hundreds of variables. Yes, combinatorial explosion! A solution to this problem is all-pairs combination testing, which is discussed next.
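The 288 figure is just the size of the full cross product, which can be confirmed with itertools (the seven variables here are placeholders, identified only by how many representatives each has):

```python
import itertools

# Representatives per variable for the hypothetical 7-variable GUI example.
sizes = [3, 2, 2, 2, 3, 2, 2]
domains = [range(n) for n in sizes]

# Every test case in exhaustive combination testing is one element of the product.
all_combinations = list(itertools.product(*domains))
print(len(all_combinations))  # 288
```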

What is All-Pairs Combination Testing? The all-pairs combination technique ensures that the set of test cases generated with it includes all pairs of values across every two variables. In other words, every value of a variable is combined with every value of every other variable at least once. This is known as achieving all-pairs. Doing all-pairs reduces the number of combinations drastically.

The first step is to select the variables of interest from the pool of variables of the program or function under test. In the all-pairs combination technique, only independent variables are combined. This is because including dependent variables may lead to many impossible combinations, due to one or more dependency relationships among the variables. Therefore, we need to identify which variables are independent and which are dependent. Test dependent variables separately and use separate combinations that test for specific dependencies. To determine dependency among variables, follow these steps:

1. Find out which variables are dependent on one another.
2. Determine the dependency relations of these variables.
3. Identify the critical combinations of values for each dependency.
4. List these combinations separately (a dependency-variables combination list).

A domain tester might not know all the dependencies during the first test run. This will depend on how far the tester has explored the product/system (if there is a product to play with) or on the extent to which the specification documents, if there are any, explain the dependencies. As the tester encounters failures, s/he might discover new dependencies and can then revise the test suite by adding test case combinations that probe these newly found dependencies and special cases.

Next, from the short-listed set of independent variables, select the test cases of interest for each variable. The selection criterion is that only those test cases should be included which the program is expected to process and handle properly. Hence, error-handling test cases, test cases that the program is expected to mishandle, and impossible combinations should not be included. The reason is that even one such test case in a combination in the all-pairs table would be wasteful: it would make the other acceptable pairs in the corresponding combination worthless, as those pairs would not really be tested. Error-handling test cases, and test cases that the program or function is expected to mishandle, should be tested individually for every variable. One might even want one or two separate combinations that put all such error-handling test cases for all the variables together, in order to test what happens when multiple faults are combined.


For example, consider a function that has five variables a, b, c, d and e. Variable a has three test cases and each of the remaining four has two. Let us apply all-pairs combination to this function and see how we can achieve all-pairs, that is, pair each test case value of each of the five variables (at least once) with each of the test case values of the remaining four variables. Let us assume that all five variables are useful for all-pairs combination, since we really don't have any information about the variables other than the number of test cases they have. Let us also assume that each of their test cases produces an acceptable result from the function. Let us give symbolic representations to the test cases of each of the variables:

Variable   Test cases
a          A1, A2, A3
b          B1, B2
c          C1, C2
d          D1, D2
e          E1, E2

The following is the result of all-pairs combination:

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1     C1     D1     E1
2.           A1     B2     C2     D2     E2
3.           A2     B1     C2     D2     E1
4.           A2     B2     C1     D1     E2
5.           A3     B1     C1     D2     E2
6.           A3     B2     C2     D1     E1

If you observe carefully, you will see that every test case of a has been paired at least once with every test case of b, c, d and e. This holds true for b, c, d and e also. Hence, all-pairs of the five variables have been achieved in the above arrangement. Also, note that the number of combinations needed to arrive at this all-pairs configuration is six, which is the product of the number of test cases of a and b (3x2). In general, if we were to sort the variables in descending order in terms of the number of representatives (values chosen to be tested) and if Max1 and Max2 are the first two in this sorted list (the two highest) then the minimal number of combinations in all-pairs combination would be (Max1) x (Max2). We might not be able to fit in all pairs within Max1 x Max2 combinations, in which case, additional combinations might have to be added.
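The claim that six rows suffice for these five variables can be checked mechanically. The sketch below re-enters the table from the text and verifies that every pair of values across every two variables appears in some row:

```python
from itertools import combinations

# The six-row all-pairs table from the text, one tuple per test case.
table = [
    ("A1", "B1", "C1", "D1", "E1"),
    ("A1", "B2", "C2", "D2", "E2"),
    ("A2", "B1", "C2", "D2", "E1"),
    ("A2", "B2", "C1", "D1", "E2"),
    ("A3", "B1", "C1", "D2", "E2"),
    ("A3", "B2", "C2", "D1", "E1"),
]
values = [["A1", "A2", "A3"], ["B1", "B2"], ["C1", "C2"], ["D1", "D2"], ["E1", "E2"]]

# Every (variable, value, variable, value) pair covered by some row...
covered = {(i, row[i], j, row[j]) for row in table for i, j in combinations(range(5), 2)}
# ...must include every pair that all-pairs requires.
needed = {(i, vi, j, vj) for i, j in combinations(range(5), 2)
          for vi in values[i] for vj in values[j]}
print(needed <= covered)  # True: all-pairs is achieved in only six test cases
```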


If n is the total number of variables, then each combination covers n(n-1)/2 pairs. In the above example, this formula gives 5(5-1)/2 = 5x4/2 = 10 pairs in every combination, which is easily verifiable by looking at the above table. Let us now look step by step at how to arrive at this arrangement. The guidelines for performing all-pairs combination are located in Appendix B: Guidelines for filling in all-pairs combination table.

a. Arrange the variables in descending order of the number of test cases they have. Call the first two Max1 and Max2.

1. a (3)   Max1
2. b (2)   Max2
3. c (2)
4. d (2)
5. e (2)

In this case, variables b through e have the same number of test cases; hence any one of these four could have been called Max2. Refer guideline numbered 3 in Appendix B: Guidelines for filling in all-pairs combination table.

b. Give symbolic names to the test cases of each of the variables.

Variable   Test cases
a          A1, A2, A3
b          B1, B2
c          C1, C2
d          D1, D2
e          E1, E2

Refer guideline numbered 4 in Appendix B: Guidelines for filling in all-pairs combination table.

c. Calculate the minimal number of all-pairs combinations:

No. of test cases (Max1) x No. of test cases (Max2) = 3 x 2 = 6

This is the minimal number of rows you will have in the all-pairs table, plus the extra blank rows after every set. Contrast this with the number of all possible combinations: 3x2x2x2x2 = 48.

Refer guideline numbered 5 in Appendix B: Guidelines for filling in all-pairs combination table.


d. Determine the number of pairs in each combination:

Total number of variables x (Total number of variables - 1) / 2 = 5x(5-1)/2 = 5x4/2 = 10

Refer guideline numbered 7 in Appendix B: Guidelines for filling in all-pairs combination table.

e. Build the all-pairs combination table.

Step 1:

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1
2.           A1
3.           A2
4.           A2
5.           A3
6.           A3

Refer guidelines numbered 6, 7 and 8 in Appendix B: Guidelines for filling in all-pairs combination table.

Step 2:

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1
2.           A1     B2
3.           A2     B1
4.           A2     B2
5.           A3     B1
6.           A3     B2

Refer guideline numbered 9 in Appendix B: Guidelines for filling in all-pairs combination table.


Step 3:

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1     C1
2.           A1     B2     C2
3.           A2     B1     C2
4.           A2     B2     C1
5.           A3     B1     C1
6.           A3     B2     C2

Refer guidelines numbered 10 and 11 in Appendix B: Guidelines for filling in all-pairs combination table.

Step 4:

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1     C1     D1
2.           A1     B2     C2     D2
3.           A2     B1     C2     D2
4.           A2     B2     C1     D1
5.           A3     B1     C1     D2
6.           A3     B2     C2     D1

Refer guideline numbered 12 in Appendix B: Guidelines for filling in all-pairs combination table.

Step 5: Let us try ordering the test cases of variable e in the same manner as we did for variable d, since it worked for d, and see if this ordering works for variable e also.


Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1     C1     D1     E1
2.           A1     B2     C2     D2     E2
3.           A2     B1     C2     D2     E2
4.           A2     B2     C1     D1     E1
5.           A3     B1     C1     D2     E2
6.           A3     B2     C2     D1     E1

Refer guideline numbered 12 in Appendix B: Guidelines for filling in all-pairs combination table.

We observe that the sequence of test cases of variable e in e's column yields the following results:
- each test case of variable e is paired with each test case of variables a, b and c.
- each test case of variable e is NOT paired with each test case of variable d. For instance, D1 is paired with only E1 and D2 is paired with only E2 in all three sets.

Step 6: Backtracking: Erase the third set of variable e, since its ordering was based on the ordering of the previous set. Let us swap the ordering in set two for variable e, from E2 followed by E1 to E1 followed by E2, and see if this does any good.

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1     C1     D1     E1
2.           A1     B2     C2     D2     E2
3.           A2     B1     C2     D2     E1
4.           A2     B2     C1     D1     E2
5.           A3     B1     C1     D2
6.           A3     B2     C2     D1

Let us now have E2 followed by E1 for the third set of variable e.


Test case#   a(3)   b(2)   c(2)   d(2)   e(2)
1.           A1     B1     C1     D1     E1
2.           A1     B2     C2     D2     E2
3.           A2     B1     C2     D2     E1
4.           A2     B2     C1     D1     E2
5.           A3     B1     C1     D2     E2
6.           A3     B2     C2     D1     E1

The above backtracking and refilling worked! Refer guideline numbered 12 in Appendix B: Guidelines for filling in all-pairs combination table.

Now, let us say we have to add an additional variable f having two test cases, say F1 and F2. Let us see if we can fit the test cases of this sixth variable into the all-pairs combination table and still achieve all-pairs.

Step 7: Let us first try the ordering sequence of variable d for variable f.

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)   f(2)
1.           A1     B1     C1     D1     E1     F1
2.           A1     B2     C2     D2     E2     F2
3.           A2     B1     C2     D2     E1     F2
4.           A2     B2     C1     D1     E2     F1
5.           A3     B1     C1     D2     E2     F2
6.           A3     B2     C2     D1     E1     F1

We observe that the sequence of test cases of variable f in f's column yields the following results:
- each test case of variable f is paired with each test case of variables e, c, b and a.
- each test case of variable f is NOT paired with each test case of variable d. For instance, F1 is paired with only D1 and F2 is paired with only D2 in all three sets.

This problem is identical to the one we encountered before. Let us try to counter it with the solution we used before.


Step 8: Backtracking: Erase the third set of variable f, since the choice of ordering was based on the ordering of the previous set. Let us swap the ordering in set two for variable f, from F2 followed by F1 to F1 followed by F2, and see if this does any good.

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)   f(2)
1.           A1     B1     C1     D1     E1     F1
2.           A1     B2     C2     D2     E2     F2
3.           A2     B1     C2     D2     E1     F1
4.           A2     B2     C1     D1     E2     F2
5.           A3     B1     C1     D2     E2
6.           A3     B2     C2     D1     E1

Let us now have F2 followed by F1 for the third set of variable f.

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)   f(2)
1.           A1     B1     C1     D1     E1     F1
2.           A1     B2     C2     D2     E2     F2
3.           A2     B1     C2     D2     E1     F1
4.           A2     B2     C1     D1     E2     F2
5.           A3     B1     C1     D2     E2     F2
6.           A3     B2     C2     D1     E1     F1

This arrangement does not work either, since now F1 is paired with only E1 and F2 is paired with only E2. In fact, no matter what ordering you try, you cannot fit all pairs into these six combinations. Refer guideline numbered 12 in Appendix B: Guidelines for filling in all-pairs combination table.
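The same conclusion can be reached mechanically: enumerating the pairs covered by the six rows above shows exactly which ones are missing. This is a sketch that re-enters the table from the text:

```python
from itertools import combinations

# The six rows after Step 8, including variable f.
table = [
    ("A1", "B1", "C1", "D1", "E1", "F1"),
    ("A1", "B2", "C2", "D2", "E2", "F2"),
    ("A2", "B1", "C2", "D2", "E1", "F1"),
    ("A2", "B2", "C1", "D1", "E2", "F2"),
    ("A3", "B1", "C1", "D2", "E2", "F2"),
    ("A3", "B2", "C2", "D1", "E1", "F1"),
]
values = [["A1", "A2", "A3"], ["B1", "B2"], ["C1", "C2"],
          ["D1", "D2"], ["E1", "E2"], ["F1", "F2"]]

covered = {(i, r[i], j, r[j]) for r in table for i, j in combinations(range(6), 2)}
missing = [(vi, vj) for i, j in combinations(range(6), 2)
           for vi in values[i] for vj in values[j]
           if (i, vi, j, vj) not in covered]
print(missing)  # [('E1', 'F2'), ('E2', 'F1')]
```

Only the two e-f pairs are missing, which is why two extra combinations are enough.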


Step 9: The only two pairs missing are F1 paired with E2 and F2 paired with E1. Hence, let us just add two additional combinations that contain these pairs in the available blank rows. The values of a, b, c and d in these two extra rows can be any of their test cases, since all of their pairs are already covered.

Test case#   a(3)   b(2)   c(2)   d(2)   e(2)   f(2)
1.           A1     B1     C1     D1     E1     F1
2.           A1     B2     C2     D2     E2     F2
3.           A2     B1     C2     D2     E1     F1
4.           A2     B2     C1     D1     E2     F2
5.           A3     B1     C1     D2     E2     F2
6.           A3     B2     C2     D1     E1     F1
7.           (any)  (any)  (any)  (any)  E2     F1
8.           (any)  (any)  (any)  (any)  E1     F2



Appendix D: Equivalence Class Analysis Heuristics [1]


Heuristic 1: Range of values

If (min_value <--> max_value) is the given range for a numeric input variable/field, then there are usually three equivalence classes:
- min_value <= everything <= max_value
- everything < min_value
- everything > max_value

Sometimes one of these equivalence classes might not exist; for example, there might actually be nothing greater than max_value. In the case of multiple ranges, all elements within each sub-range represent one equivalence class. There are two more equivalence classes for each sub-range: one for all elements below the smallest value in the sub-range and one for all elements above the largest value in the sub-range.

Inclusive ranges (closed-ended): Consider a variable x which can take only values in the range v1 to v2, endpoints inclusive.

Analysis:
Step 1: Represent this variable as a mathematical range expression: v1 <= x <= v2
Step 2: Identify the risks associated with this variable.
- Failure to process values between v1 and v2 correctly
- Mishandling of values below v1
- Mishandling of values above v2
Step 3: Determine the input domain for this variable. The input domain is the set of all possible values that can ever be input to the variable x.
Step 4: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:
[1] These heuristics are cited from the following: Kaner, C., Falk, J. and Nguyen, H. Q. (1999). Testing Computer Software, Second Edition, John Wiley & Sons, Inc.; Myers, G. J. (1979). The Art of Software Testing, John Wiley & Sons, Inc.



Variable: x (v1 <= x <= v2)

Equivalence class   Test cases (best representatives)   Risks
v1 -- v2            v1, v2                              1) Failure to process values between v1 and v2 correctly.
                                                        2) Mishandling of the lower and upper boundary values.
< v1                v1-1                                1) Mishandling of values below v1.
                                                        2) Mishandling of values just beneath the lower boundary.
> v2                v2+1                                1) Mishandling of values above v2.
                                                        2) Mishandling of values just above the upper boundary.

Exclusive ranges (open-ended): Consider a variable y which can take only values in the range v1 to v2, endpoints exclusive.

Analysis:
Step 1: Represent this variable as a mathematical range expression: v1 < y < v2
Step 2: Identify the risks associated with this variable.
- Failure to process values between v1+1 and v2-1 correctly
- Mishandling of values below v1+1
- Mishandling of values above v2-1
Step 3: Determine the input domain for this variable. The input domain is the set of all possible values that can ever be input to the variable y.
Step 4: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:



Variable: y (v1 < y < v2)

Equivalence class   Test cases (best representatives)   Risks
v1+1 -- v2-1        v1+1, v2-1                          1) Failure to process values between v1+1 and v2-1 correctly.
                                                        2) Mishandling of the lower and upper boundary values.
< v1+1              v1                                  1) Mishandling of values below v1+1.
                                                        2) Mishandling of values just beneath the lower boundary.
> v2-1              v2                                  1) Mishandling of values above v2-1.
                                                        2) Mishandling of values just above the upper boundary.
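Both range tables can be generated mechanically. The helper below is a hypothetical sketch (integer-valued fields assumed), not part of the training material:

```python
def range_equivalence_classes(v1, v2, inclusive=True):
    """Test points for a numeric field bounded by v1..v2.

    inclusive=True  models v1 <= x <= v2; inclusive=False models v1 < x < v2,
    so for an integer field the effective boundaries move in by one.
    """
    lo, hi = (v1, v2) if inclusive else (v1 + 1, v2 - 1)
    return {
        "in_range": [lo, hi],   # boundary representatives of the valid class
        "below":    [lo - 1],   # just below the lower boundary
        "above":    [hi + 1],   # just above the upper boundary
    }

print(range_equivalence_classes(1, 100))                   # {'in_range': [1, 100], 'below': [0], 'above': [101]}
print(range_equivalence_classes(1, 100, inclusive=False))  # {'in_range': [2, 99], 'below': [1], 'above': [100]}
```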



Heuristic 2: Membership in a group

One equivalence class consists of all the members of the group; another consists of everything outside the group (the members of all other groups). If there is no natural ordering within the group, then any member can be chosen as the best representative or test case. If you have a basis to believe that there could be an error with respect to the group, identify equivalence classes based on that error.

Analysis:
Step 1: Identify the risks associated with this variable.
- Failure to process the members of the group correctly
- Mishandling of non-members
Step 2: Determine the input domain. The input domain is the set of all possible groups and their corresponding members.
Step 3: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:

Variable: group

Equivalence class            Test cases (best representatives)   Risks
All members of the group     Any member                          1) Failure to process the members of the group correctly.
All other groups             Any element of any other group      1) Mishandling of non-members.



Heuristic 3: Enumerated field

Such variables usually are fields representing menus, drop-down combo boxes, lists, radio buttons, or any other form with a fixed number of available choices. The values are enumerated values (a set of options/choices) and the corresponding variable is called an enumerated variable. Every option is an equivalence class in its own right, since each option makes the program/function respond differently. Depending on the time available, you would either test all the options or randomly pick some options to test. Look for a grouping strategy within the list of options; if you can find one, form partitions (sub-domains) and select a test case from each partition.

Analysis:
Step 1: Identify the risks associated with this variable.
- Failure to process each of the values in the enumerated set correctly
- Mishandling of values outside the enumerated set
Step 2: Determine the input domain. The input domain is the set of values that can ever be input to the enumerated field.
Step 3: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:



Variable: enumerated field

Equivalence class                         Test cases (best representatives)       Risks
All cases with option 1 selected          Option 1                                1) Failure to process option 1 correctly.
All cases with option 2 selected          Option 2                                1) Failure to process option 2 correctly.
...                                       ...                                     ...
All cases with option n selected          Option n                                1) Failure to process option n correctly.
All cases with a value other than the     Any value not present in the            1) Mishandling of values outside the
listed options selected                   enumerated list                            enumerated list of values.



Heuristic 4: Variables that have to be equal

Consider two variables a and b. We know that at some point in the processing of the function to which they belong, the value in a must equal the value in b. Manipulate b to see what you can do to a.

Analysis:
Step 1: Identify the risks associated with this variable.
- Failure to process identical values of a and b correctly
- Mishandling of non-identical values
- Mishandling of values that are identical but of a data type other than the expected one
Step 2: Determine the input domain. The input domain is the set of all possible (a, b) value pairs.
Step 3: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:



Variable: a and b

Equivalence class                  Test cases (best representatives)           Risks
Values of a and b are identical    One case in which both are equal            1) Failure to process identical values of a and b correctly.
Values of a and b are not          One case in which they are not identical.   1) Mishandling of non-identical values of a and b.
identical                          Say a and b must both equal 10 and the      2) Mishandling of the boundary value.
                                   precision is 4 decimal places: choose       3) Mishandling of the precision of floating-point variables.
                                   a = 9.9999 and b = 10 (almost equal,
                                   but not exactly equal).
Values of a and b are identical    'xyz' and 'xyz'                             1) Mishandling of values of a and b that are identical but
but of some other data type                                                       not of the data type they are supposed to be of.
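The three classes in the table can be turned into concrete test data. The sketch below uses the 10-with-4-decimal-places example from the Notes column:

```python
# Concrete test data for two variables a and b that must be equal.
target = 10.0
step = 0.0001   # the smallest difference at 4 decimal places, as in the example

test_cases = [
    ("identical",        target,        target),   # should be processed correctly
    ("almost identical", target - step, target),   # 9.9999 vs 10: should be rejected
    ("wrong data type",  "xyz",         "xyz"),    # identical strings, not numbers
]
for label, a, b in test_cases:
    print(label, a == b)
```

The "almost identical" case probes both the boundary and the floating-point precision risks from the table.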



Heuristic 5: Time-determined tasks

Let's say our variable is a task that is time-determined. Then we have an equivalence class for each of the following:
- Task completed before the required time-limit
- Task completed at exactly the required time-limit
- Task completed after the required time-limit

Analysis:
Step 1: Identify the risks associated with this variable.
- Failure to process correctly tasks that are completed within the time-limit
- Mishandling of tasks completed after the time-limit
Step 2: Determine the input domain. The input domain is the set of all possible times taken by the task.
Step 3: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:



Variable: time-dependent task

Equivalence class                       Test cases (best representatives)   Risks
All tasks completed within or           time-limit - 1, time-limit          1) Failure to process tasks completed within and exactly at
exactly at the time-limit                                                      the defined time-limit.
                                                                            2) Mishandling of boundary values.
All tasks completed after the           time-limit + 1                      1) Mishandling of tasks completed after the defined
time-limit                                                                     time-limit.
                                                                            2) Mishandling of the value just beyond the boundary.



Heuristic 6: Look for variable groups that must calculate to a certain value or range

Consider input variables i1 and i2 to a function, and an output variable o whose value can lie only in the range v1 to v2. In such cases you test the output variable o in terms of the input variables.

Analysis:
Step 1: Represent this variable as a mathematical range expression: v1 <= o <= v2
Step 2: Identify the risks associated with this variable.
- Failure to process combinations of input values that yield a value of o in the range v1 to v2
- Mishandling of combinations of input values that yield a value of o below v1
- Mishandling of combinations of input values that yield a value of o above v2
Step 3: Determine the input domain. The input domain is the set of values that can ever be taken by the output variable o.
Step 4: Partition the input domain based on the identified risks. The following is the corresponding equivalence class table for this example:



Variable: o (output of i1 and i2)

Equivalence class                            Test cases (best representatives)           Risks
All combinations of input values that        1) A combination of input values that       1) Failure to process values between v1 and v2 correctly.
result in o within the range v1 to v2           results in o = v1.                       2) Mishandling of the lower and upper boundary values.
                                             2) A combination of input values that
                                                results in o = v2.
All combinations of input values that        A combination of input values that          1) Mishandling of values below v1.
result in o less than v1                     results in o = v1-1.                        2) Mishandling of values just beneath the lower boundary.
All combinations of input values that        A combination of input values that          1) Mishandling of values above v2.
result in o greater than v2                  results in o = v2+1.                        2) Mishandling of values just above the upper boundary.


Appendix E: Guidelines for All-Pairs Combination (Appendix B: Guidelines for Filling in All-Pairs Combination Table in Training Material)

Appendix E: Guidelines for Filling in All-Pairs Combination Table [1]


1. Select the variables that you want to combine using the all-pairs combination technique. Only independent variables should be combined using the all-pairs technique. Do not include dependent variables, because combining them with the remaining variables will lead to many impossible combinations, due to one or more dependency relationships among the variables. Test dependent variables separately and use separate combinations that test for specific dependencies.

2. For each independent variable, decide which test cases would be interesting and useful enough to be included in the all-pairs combination. Typically these are the test cases which, when tested individually, the program is expected to process or handle correctly. Test cases that were included in the equivalence class analysis table intentionally to see how the program mishandles them, also called error-handling test cases, are not good choices for all-pairs combination, because including such test cases renders the other acceptable pairs in the corresponding combination(s) useless. Error-handling cases should be tested separately; if required, one might combine error-handling test cases of multiple variables together to see how the program handles multiple faults at the same time.

3. Sort the independent variables in descending order of the number of representatives or test cases chosen. Take the first two in the sorted list and assign the numbers of test case values they have to Max1 and Max2 respectively.

4. Give symbolic names to the test cases of each variable. For example, represent the test cases of a variable x as X1, X2, ..., Xn.

5. The number of rows in your table will be Max1 x Max2, which is the minimal number of combinations you will have with the all-pairs technique. The total number of rows will also include the blank rows in between, which are in addition to the Max1 x Max2 rows corresponding to combinations of test cases. We might not be able to fit all pairs of all variables into this minimal number of combinations, in which case additional combinations might have to be added (more on this in guideline 12). For now, let us work with Max1 x Max2 as the number of combinations.

6. The number of columns = total number of variables + 1 (to hold the test case number). "Test case#" goes in the cell in the first row and first column.

These guidelines were adapted from Lessons Learned in Software Testing by Kaner, Bach and Pettichord.



7. The first row has the variable names (the one with the most test cases first and the one with the fewest last). Each succeeding row (except blank rows) will contain a combination that addresses total x (total - 1)/2 pairs, where total = total number of variables.

8. Begin filling the table with the column corresponding to the first variable. To do that, follow these steps:
a. Repeat the first test case value of the first variable Max2 times (the number of test case values of the second variable) in the first column. Call this set 1.
b. Leave a blank row after the set in case you need to add extra test cases later.
c. Similarly, repeat the second test case value of the first variable Max2 times in the first column, starting from the row after the blank row succeeding the first set. Now you have set 2.
d. Do the same with the remaining test case values of the first variable, leaving a blank row after every set so that, in case you need to add an extra test case, you won't have to shift rows up and down.
By doing the above you will have filled in the column corresponding to the first variable completely, and you will have as many sets as the value of Max1. You now have to deal with the other variables. Please note that the number of rows keeps increasing progressively as you add blank rows. Make sure you don't number the blank rows.

9. The heuristic for filling in the test case values of the second variable (the cells of the second column) is to list the test case values of the second variable one after the other within each set, starting from the first value and ending at the last. By doing this for every set, you eventually fill the column corresponding to the second variable completely.

10. If there is a third variable, then for the first set in the third variable's column, list the test cases in order, that is, the first test case followed by the second, and so on until you reach the last test case. If the number of test cases of the third variable is less than the number of test cases of the second, then at least one cell in the set will remain unfilled by this procedure. In that event, just repeat the sequence of the set again until you have filled all the cells in the set.

11. By now, you will have filled in all the cells belonging to set 1 of the third variable. Now, start changing the order of the values for every successive set of the third variable. One way to do this is to imagine that the values are connected



in a circular list. For the first set you enumerated the values starting from the first, then the second, until finally you reach the first again (which you just skip). For the next set, start at the second value, followed by the third, and so on. By doing so, you will completely fill the column corresponding to the third variable.

The procedure described above for filling in the columns corresponding to the first, second and third variables ensures that all-pairs coverage has been achieved for the first three variables.

12. For each variable after the third one, if there is any, you have to order the test case values within every set such that the final ordering across all sets results in each value of the variable being paired with each value of every preceding variable. It might be necessary to backtrack if the current ordering makes it impossible to achieve this end result. If, no matter what ordering you try and no matter how much you backtrack and redo things, the end result (all-pairs coverage) is not achieved, then additional combinations might have to be added. Also note that a pair of values of two variables may occur more than once, since the all-pairs technique only ensures that every value of a variable is combined with every value of every other variable at least once, not exactly once; at least once is the minimal requirement.
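The coverage criterion described in these guidelines can be checked mechanically. The following sketch (Python, purely for illustration; the guidelines themselves are tool-independent) verifies whether a filled-in table achieves all-pairs coverage:

```python
from itertools import combinations, product

def allpairs_covered(rows, values):
    """Return True if every pair of values of every pair of
    variables appears together in at least one row."""
    for i, j in combinations(range(len(values)), 2):
        needed = set(product(values[i], values[j]))
        seen = {(row[i], row[j]) for row in rows}
        if not needed <= seen:
            return False
    return True

# Three two-valued variables: 4 rows suffice instead of 2 x 2 x 2 = 8.
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(allpairs_covered(rows, [[0, 1], [0, 1], [0, 1]]))  # True
```

Dropping any one of the four rows breaks coverage, which is exactly the minimality the filling procedure aims for.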

Sowmya Padmanabhan, 2003

Appendix F: Day 1 Lecture (Introduction to Domain Testing in Training Material)

Introduction to Domain Testing


Sowmya Padmanabhan

Why do we need to test software?


To find bugs. No matter how meticulously and carefully the underlying code of a software product has been written, it will have bugs.
To ensure that the software complies with its specifications.
To ensure that the software does what is reasonably expected of it.
To ensure delivery of a quality product.

Sowmya Padmanabhan, 2003

Lets test a simple program


Here is a simple example from the paper "An Approach to Program Testing" by J. C. Huang:

Sowmya Padmanabhan, 2003

Analysis
To assert that the output value Z assumes the correct value for all input values of X and Y, we have to test all possible assignment values of X and Y. Suppose that X and Y take integer values and the program is tested on a computer whose word size is 32 bits.

What is the maximum integer value that X or Y can take?
Sowmya Padmanabhan, 2003 4

Analysis
The maximum integer value that X or Y can take is 2^32. Since the combination of X and Y yields Z, we have to test X and Y in combination.
What is the total number of possible combinations of X and Y?
Sowmya Padmanabhan, 2003 5

Analysis
The total number of possible combinations for X and Y is 2^32 x 2^32 = 2^64.

Do you know how long it will take to test all 2^32 x 2^32 input combinations of X and Y?

Sowmya Padmanabhan, 2003

Analysis
If we assume that the program takes about a millisecond to execute once, we will need more than 500 million years to completely test this program! If testing a program as simple as this can lead to combinatorial explosion and make exhaustive testing seem impossible, imagine what happens with typical commercial programs with hundreds of variables.
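A quick back-of-the-envelope check of this estimate (the one-execution-per-millisecond rate is the slide's assumption):

```python
combos = (2 ** 32) * (2 ** 32)           # 2^64 possible (X, Y) pairs
seconds = combos / 1000                  # one execution per millisecond
years = seconds / (365 * 24 * 60 * 60)   # seconds in a (non-leap) year
print(f"{years:.2e} years")              # on the order of 10^8 years
```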

Sowmya Padmanabhan, 2003

What is Domain Testing?


Domain testing is:
one of several software testing techniques designed to help you find bugs in programs.
a systematic approach for reducing an enormous test data set to a few manageable subsets, and further reducing each of these subsets to a few best representatives (the best test cases).
a proven way to manage risk and reduce the testing effort (time, money, labor, other resources).
Sowmya Padmanabhan, 2003 8

Domain testing-Terminology
Domain Testing: a technique to systematically reduce the set of all possible values to a few manageable subsets. In domain testing, we partition an input domain into equivalence classes and choose a few test cases from each class.
Input Domain: the set of all possible values that can ever be input to an input variable.
Partitioning: dividing a set into non-overlapping subsets, usually on the basis of some major property or characteristic.
Equivalence Class: all members of an equivalence class are equivalent with respect to some risk, but the classification could be imperfect. This analysis is called equivalence class analysis.
Sowmya Padmanabhan, 2003 9

Domain testing-Terminology
Boundary value analysis: This analysis helps
us in selecting one or more best representatives from each equivalence class. These are values on the boundary and values just beyond the boundary values. The best representatives are the test cases. In domain testing, we partition an input domain into equivalence classes and choose a few test cases (typically boundaries) from each class.

Sowmya Padmanabhan, 2003

10

Domain testing-Terminology
Risk: an assertion about how a program could fail.
Test case: a combination of values of the input variables of a program that is used to test the program against one or more risks. When considering one variable, every value that you would test the variable for is a test case for that variable.

Sowmya Padmanabhan, 2003

11

Summary
Our goal is not only to do effective testing but to reduce the testing effort, which includes cost, labor, time, etc. We will learn how domain testing can help us achieve this goal.

Sowmya Padmanabhan, 2003

12

Appendix G: Day 2 Lecture (Day 2 Lecture in Training Material)

Session 2: Domain Testing


Sowmya Padmanabhan

Sowmya Padmanabhan, 2003

Testing Numeric fields


We encounter numeric fields every day, everywhere. Money, bank account number, credit card number, student ID number, number of hours you work in a week, salary, pay rate, number of computers you own, number of courses you have taken, your bank balance, number of calories consumed in a day, number of bugs reported, number of reservations, the time frame within which a satellite should be launched, time to finish a project, and so on, are all examples of numeric fields.
Sowmya Padmanabhan, 2003 2

Testing Integer fields


Example 1:

An integer field/variable x can take values in the range -99 to 99, the end-points being inclusive. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.
Sowmya Padmanabhan, 2003 3

Testing Integer fields Example 1


Step 1:
Represent this variable as a mathematical range expression.

Sowmya Padmanabhan, 2003

Testing Integer fields Example 1


-99 <= x <= 99 This is inclusive (or closed-ended) from both sides (left and right).

Sowmya Padmanabhan, 2003

Testing Integer fields Example 1


Step 2:

Determine what the input domain is.

Sowmya Padmanabhan, 2003

Testing Integer fields Example 1


The input domain is the set of all possible values that can ever be inputted to the variable x.

Sowmya Padmanabhan, 2003

Testing Integer fields Example 1


Step 3:
Identify the risks associated with the variable.

Sowmya Padmanabhan, 2003

Testing Integer fields Example 1


Failure to process values between -99 and 99 correctly
Mishandling of values less than -99
Mishandling of values greater than 99
There are other risks that we will consider later, but the ones listed here are the ones most often considered in domain testing.
Sowmya Padmanabhan, 2003

Testing Integer fields Example 1


Step 4:
Partition the input domain into equivalence classes based on the risks identified.

Sowmya Padmanabhan, 2003

10

Variable: x

Equivalence class: All values from -99 to 99
Test cases: -99, 99
Risks: 1) Failure to process values between -99 and 99 correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values less than -99
Test case: -100
Risks: 1) Mishandling of values less than -99. 2) Mishandling of the value just beneath the lower boundary.

Equivalence class: All values greater than 99
Test case: 100
Risks: 1) Mishandling of values greater than 99. 2) Mishandling of the value just beyond the upper boundary.
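The test cases in the table can be expressed directly as a small executable check. Here `in_range` is a hypothetical stand-in for the program under test; the point is the choice of values, not the implementation:

```python
def in_range(x):
    """Hypothetical implementation of the rule -99 <= x <= 99."""
    return -99 <= x <= 99

# Boundary values and the values just beyond them, from the table.
cases = [(-100, False), (-99, True), (99, True), (100, False)]
for value, expected in cases:
    assert in_range(value) == expected
```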

Testing Floating-point fields


Example 2:

A floating-point field/variable m can take values only between -14.09 and 99.99, excluding 99.99. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.
Sowmya Padmanabhan, 2003 11

Testing Floating-point fields Example 2


Step 1:
Represent this variable as a mathematical range expression.

Sowmya Padmanabhan, 2003

12

Testing Floating-point fields Example 2


-14.09 <= m < 99.99 This is inclusive (or closed-ended) from left end and exclusive (or open-ended) from the right end.

Sowmya Padmanabhan, 2003

13

Testing Floating-point fields Example 2


Step 2:

Determine what the input domain is.

Sowmya Padmanabhan, 2003

14

Testing Floating-point fields Example 2


The input domain is the set of all possible values that can ever be inputted to the variable m.

Sowmya Padmanabhan, 2003

15

Testing Floating-point fields Example 2


Step 3:
Identify the risks associated with the variable.

Sowmya Padmanabhan, 2003

16

Testing Floating-point fields Example 2


Failure to process values between -14.09 and 99.98 correctly
Mishandling of values less than -14.09
Mishandling of values greater than 99.98
There are other risks that we will consider later, but the ones listed here are the ones most often considered in domain testing.
Sowmya Padmanabhan, 2003

17

Testing Floating-point fields Example 2


Step 4:
Partition the input domain into equivalence classes based on the risks identified.

Sowmya Padmanabhan, 2003

18

Variable: m

Equivalence class: All values from -14.09 to 99.98
Test cases: -14.09, 99.98
Risks: 1) Failure to process values between -14.09 and 99.98 correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values less than -14.09
Test case: -14.10
Risks: 1) Mishandling of values less than -14.09. 2) Mishandling of the value just beneath the lower boundary.

Equivalence class: All values greater than 99.98
Test case: 99.99
Risks: 1) Mishandling of values greater than 99.98. 2) Mishandling of the value just beyond the upper boundary.
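The same kind of check can be written for the half-open range of Example 2. The comparison below is a hypothetical stand-in for the program under test; note that the right boundary 99.99 is excluded, so it must be rejected:

```python
def in_range(m):
    """Hypothetical implementation of -14.09 <= m < 99.99."""
    return -14.09 <= m < 99.99

# Assuming a granularity of 0.01, the value just beneath the lower
# boundary is -14.10 and the largest valid value is 99.98.
cases = [(-14.10, False), (-14.09, True), (99.98, True), (99.99, False)]
for value, expected in cases:
    assert in_range(value) == expected
```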

Sowmya Padmanabhan, 2003

19

Word Problems
Real-world problems are rarely specified as explicitly as the problem descriptions in Examples 1 and 2. Real-world problems present a scenario in natural language, and it is our job to extract the relevant information, translate it into symbolic form, and apply our skills and knowledge to that symbolic information. In mathematics, we attempt to bridge the gap between academic examples and real-world problems with word problems.
Sowmya Padmanabhan, 2003 20

Word Problems
SunTrust issues Visa credit cards with credit limits in the range of $400 to $40000. A customer is not to be approved for credit limits outside this range. A customer can apply for the card using an online application form in which one of the fields requires that the customer type in his/her desired credit limit. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Assume integer values.
Sowmya Padmanabhan, 2003 21

Example 3:

Word Problems Example 3


Step 1:
What variables could be involved in analysis of this group of facts?

Sowmya Padmanabhan, 2003

22

Word Problems Example 3


Credit card, credit card number, credit limit, customer.

Sowmya Padmanabhan, 2003

23

Word Problems Example 3


Step 2:
What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

Sowmya Padmanabhan, 2003

24

Word Problems Example 3


credit-limit

Sowmya Padmanabhan, 2003

25

Word Problems Example 3


Step 3:
Represent this variable as a mathematical range expression.

Sowmya Padmanabhan, 2003

26

Word Problems Example 3


$400 <= credit-limit <= $40000 This is inclusive from both ends.

Sowmya Padmanabhan, 2003

27

Word Problems Example 3


Step 4:
Determine what the input domain is.

Sowmya Padmanabhan, 2003

28

Word Problems Example 3


The input domain is the set of all possible values that can ever be inputted to the variable credit-limit.

Sowmya Padmanabhan, 2003

29

Word Problems Example 3


Step 5:
Identify the risks associated with the variable.

Sowmya Padmanabhan, 2003

30

Word Problems Example 3


Failure to process credit limit requests between $400 and $40000 correctly
Failure to disapprove credit limit requests less than $400
Failure to disapprove credit limit requests greater than $40000
Mishandling of negative credit limit requests
There are other risks that we will consider later, but the ones listed here are the ones most often considered in domain testing.
Sowmya Padmanabhan, 2003

31

Word Problems Example 3


Step 6:
Partition the input domain into equivalence classes based on the identified risks.

Sowmya Padmanabhan, 2003

32

Variable: credit-limit

Equivalence class: All credit-limit requests from $400 to $40000
Test cases: $400, $40000
Risks: 1) Failure to process credit-limit requests between $400 and $40000 correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All credit-limit requests less than $400
Test case: $399
Risks: 1) Failure to disapprove credit-limit requests less than $400. 2) Mishandling of the value just beneath the lower boundary.

Equivalence class: All credit-limit requests greater than $40000
Test case: $40001
Risks: 1) Failure to disapprove credit-limit requests greater than $40000. 2) Mishandling of the value just beyond the upper boundary.

Equivalence class: All negative credit-limit requests
Test cases: -$400, -$40000
Risks: 1) Mishandling of negative credit-limit requests, e.g. taking the absolute value of the negative request and approving it. 2) Mishandling of upper and lower boundary values.
Notes: The program should be capable of correctly handling negative amounts.
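The analysis can be sketched as executable checks. `approve_limit` below is a hypothetical validator, not SunTrust's actual logic; the boundary values come straight from the table:

```python
def approve_limit(amount):
    """Hypothetical rule: approve only requests from $400 to $40000."""
    return 400 <= amount <= 40000

# Boundary values, values just beyond them, and negative requests.
cases = [(399, False), (400, True), (40000, True), (40001, False),
         (-400, False), (-40000, False)]
for amount, expected in cases:
    assert approve_limit(amount) == expected
```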

Word Problems
Example 4: The passing score for any course at ZLTech is 60/100. If a student scores less than 60 in a course, the student gets an F grade in the course.

What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Assume integer values.
Sowmya Padmanabhan, 2003

33

Word Problems Example 4


Step 1:
What variables could be involved in analysis of this group of facts?

Sowmya Padmanabhan, 2003

34

Word Problems Example 4


Student_name, student_number, passing_score, student_score, course_name, grade.

Sowmya Padmanabhan, 2003

35

Word Problems Example 4


Step 2:
What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

Sowmya Padmanabhan, 2003

36

Word Problems Example 4


student_score

Sowmya Padmanabhan, 2003

37

Word Problems Example 4


Step 3:
Represent this variable as a mathematical range expression.

Sowmya Padmanabhan, 2003

38

Word Problems Example 4


60 <= student_score <= 100 This is inclusive from both ends.

Sowmya Padmanabhan, 2003

39

Word Problems Example 4


Step 4:

Determine what the input domain is.

Sowmya Padmanabhan, 2003

40

Word Problems Example 4


The input domain is the set of all possible values that can ever be inputted to the variable student_score.

Sowmya Padmanabhan, 2003

41

Word Problems Example 4


Step 5:
Identify the risks associated with the variable.

Sowmya Padmanabhan, 2003

42

Word Problems Example 4


Failure to give a student an F in a course with a score less than 60
Assigning an F to a student in a course with a score of 60 or greater
Mishandling of scores above 100
Mishandling of negative scores
There are other risks that we will consider later, but the ones listed here are the ones most often considered in domain testing.
Sowmya Padmanabhan, 2003 43

Word Problems Example 4


Step 6:
Partition the input domain into equivalence classes based on the identified risks.

Sowmya Padmanabhan, 2003

44

Variable: student_score

Equivalence class: All scores from 60 to 100
Test cases: 60, 100
Risks: 1) Failure to correctly process scores of 60 and above, that is, wrongly assigning a grade of F to scores of 60 and above. 2) Mishandling of lower and upper boundary values.

Equivalence class: All scores less than 60
Test case: 59
Risks: 1) Failure to give a student an F grade in a course with a score less than 60. 2) Mishandling of the value just beneath the lower boundary.

Equivalence class: All scores greater than 100
Test case: 101
Risks: 1) Mishandling of scores greater than 100. 2) Mishandling of the value just beyond the upper boundary.
Notes: What about bonus points that add up to make the score greater than 100?

Equivalence class: All negative scores
Test case: -60
Risks: 1) Mishandling of negative scores, e.g. taking the absolute value of the negative score and declaring the grade as not F when it should be F. 2) Mishandling of the boundary value.
Notes: What about negative scoring?
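The grading rule can be sketched as a hypothetical checker with the table's boundary values as test data. Treating scores outside 0-100 as invalid is an assumption here; the problem statement only specifies the F threshold:

```python
def gets_f(score):
    """Hypothetical rule: F for scores below 60; reject invalid scores."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return score < 60

assert gets_f(59)        # just beneath the passing boundary
assert not gets_f(60)    # lower boundary of the passing class
assert not gets_f(100)   # upper boundary of the passing class
```

Under this assumption, the table's invalid test cases 101 and -60 should raise an error rather than be graded.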

Word Problems
Example 5: The page setup function of a text editor allows a user to set the width of the page only in the range of 1 to 56 inches. The precision of the width is up to 30 places after the decimal point.

What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.
Sowmya Padmanabhan, 2003

45

Word Problems Example 5


Step 1:
What variables could be involved in analysis of this group of facts?

Sowmya Padmanabhan, 2003

46

Word Problems Example 5


User, Page, Width.

Sowmya Padmanabhan, 2003

47

Word Problems Example 5


Step 2:
What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

Sowmya Padmanabhan, 2003

48

Word Problems Example 5


width

Sowmya Padmanabhan, 2003

49

Word Problems Example 5


Step 3:
Represent this variable as a mathematical range expression.

Sowmya Padmanabhan, 2003

50

Word Problems Example 5


1 <= width <= 56 This is inclusive from both ends.

Sowmya Padmanabhan, 2003

51

Word Problems Example 5


Step 4:

Determine what the input domain is.

Sowmya Padmanabhan, 2003

52

Word Problems Example 5


The input domain is the set of all possible values that can ever be inputted to the variable width.

Sowmya Padmanabhan, 2003

53

Word Problems Example 5


Step 5:
Identify the risks associated with the variable.

Sowmya Padmanabhan, 2003

54

Word Problems Example 5


Failure to correctly process width values that lie between 1 inch and 56 inches
Failure to handle up to 30 places after the decimal point
Mishandling of more than 30 places after the decimal point
Mishandling of width values less than 1 inch
Mishandling of width values greater than 56 inches
Mishandling of negative width values
There are other risks that we will consider later, but the ones listed here are the ones most often considered in domain testing.
Sowmya Padmanabhan, 2003

55

Word Problems Example 5


Step 6:
Partition the input domain into equivalence classes based on the risks identified.

Sowmya Padmanabhan, 2003

56

Variable: width

Equivalence class: All widths from 1 to 56 inches
Test cases: 1.000...0 (30 zeros after the decimal point), 56.000...0 (30 zeros after the decimal point)
Risks: 1) Failure to process width values that lie between 1 inch and 56 inches correctly. 2) Failure to handle up to 30 places after the decimal point. 3) Mishandling of lower and upper boundary values.

Equivalence class: All widths less than 1 inch
Test case: 0.999...9 (30 nines after the decimal point)
Risks: 1) Mishandling of width values less than 1 inch. 2) Failure to handle up to 30 places after the decimal point. 3) Mishandling of the value just beneath the lower boundary.

Equivalence class: All widths greater than 56 inches
Test case: 56.000...01 (30 places after the decimal point)
Risks: 1) Mishandling of width values greater than 56 inches. 2) Failure to handle up to 30 places after the decimal point. 3) Mishandling of the value just beyond the upper boundary.

Equivalence class: All negative widths
Test cases: -1.000...0, -56.000...0 (30 places after the decimal point)
Risks: 1) Mishandling of negative values, e.g. taking the absolute value of the negative width and accepting it as an allowable width. 2) Failure to handle up to 30 places after the decimal point. 3) Mishandling of lower and upper boundary values.
Notes: The program should be capable of correctly handling negative values.

Equivalence class: All widths with more than 30 places after the decimal point
Test case: 1.000...0 (31 places after the decimal point)
Risks: 1) Mishandling of more than 30 places after the decimal point. 2) Mishandling of the value just beyond the precision boundary.
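Boundary values with 30 decimal places cannot be represented as binary floats, which is itself a risk worth probing. A sketch using Python's decimal module (the working precision of 50 digits is an arbitrary choice, merely large enough to hold 30 fractional places):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50           # enough working precision for 30 places

lower, upper = Decimal(1), Decimal(56)
step = Decimal(10) ** -30        # one unit in the 30th decimal place

just_below = lower - step        # 0.999...9 (30 nines)
just_above = upper + step        # 56.000...01 (30 places)

assert just_below < lower and just_above > upper
# A binary float silently rounds this boundary probe away:
assert float(just_below) == 1.0
```

The last assertion is the interesting one: a test tool that stores these values as ordinary floats would never actually exercise the boundary.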

Sowmya Padmanabhan, 2003

57

Testing multiple ranges


The problems that we have dealt with so far only had one range defined for a variable. Often, you will encounter problems that are defined over multiple ranges. Let us learn how to extend what we have learned so far and apply it to variables whose valid values are defined over multiple sub-ranges.

Sowmya Padmanabhan, 2003

58

Testing multiple ranges


Example 6:
According to Boris Beizer (Black-Box Testing, 1995), in 1993 the IRS defined the following tax table:

Sowmya Padmanabhan, 2003

59

Testing multiple ranges Example 6


Assume that there is an online program provided by IRS that lets a user enter his taxable income in a text field and in turn the program tells the user what the corresponding tax is.

Sowmya Padmanabhan, 2003

60

Testing multiple ranges Example 6


What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

Sowmya Padmanabhan, 2003

61

Word Problems Example 6


Step 1:
What variables could be involved in analysis of this group of facts?

Sowmya Padmanabhan, 2003

62

Word Problems Example 6


Taxable_income (tax_inc), Tax, User.

Sowmya Padmanabhan, 2003

63

Word Problems Example 6


Step 2:
What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

Sowmya Padmanabhan, 2003

64

Word Problems Example 6


tax_inc

Sowmya Padmanabhan, 2003

65

Word Problems Example 6


Step 3:
Represent this variable as a mathematical range expression.

Sowmya Padmanabhan, 2003

66

Word Problems Example 6

Sowmya Padmanabhan, 2003

67

Word Problems Example 6


Step 4:

Determine what the input domain is.

Sowmya Padmanabhan, 2003

68

Word Problems Example 6


The input domain is the set of all possible values that can ever be inputted to the variable tax_inc.

Sowmya Padmanabhan, 2003

69

Word Problems Example 6


Step 5:
Identify the risks associated with the variable.

Sowmya Padmanabhan, 2003

70

Word Problems Example 6


Mishandling of negative incomes
Mishandling of zero taxable incomes
Failure to calculate the tax correctly for each of the income sub-ranges
Mishandling of low and high boundaries of each of the sub-ranges
Mishandling of values just beneath and beyond the low and high boundaries, respectively, for each of the sub-ranges
Mishandling of non-numbers
Mishandling of the smallest and largest values at the system level, and beyond
Sowmya Padmanabhan, 2003 71

Word Problems Example 6


Step 6:
Partition the input domain into equivalence classes based on the risks identified.

Sowmya Padmanabhan, 2003

72

Variable: tax_inc

Equivalence class: All incomes less than 0 (-1 to negative infinity)
Test cases: -1, min_number, min_number - 1 (at the system level)
Risks: 1) Mishandling of negative incomes. 2) Mishandling of the upper boundary value (-1). 3) Mishandling of extremely low values. 4) Mishandling of the lower boundary value. 5) Mishandling of the value just beneath the lower boundary.
Notes: Negative incomes in the form of losses/debts are carried over to the next year.

Equivalence class: Zero
Test case: 0
Risks: 1) Mishandling of zero taxable income.
Notes: Special case, nil taxable income.

Equivalence class: All incomes from $1 to $22,100
Test cases: $1, $22,100
Risks: 1) Failure to calculate the tax correctly for the income range $1 to $22,100. 2) Mishandling of low and high boundaries.

Equivalence class: All incomes from $22,101 to $53,500
Test cases: $22,101, $53,500
Risks: 1) Failure to calculate the tax correctly for the income range $22,101 to $53,500. 2) Mishandling of low and high boundaries.

Equivalence class: All incomes from $53,501 to $115,000
Test cases: $53,501, $115,000
Risks: 1) Failure to calculate the tax correctly for the income range $53,501 to $115,000. 2) Mishandling of low and high boundaries.

Equivalence class: All incomes from $115,001 to $250,000
Test cases: $115,001, $250,000
Risks: 1) Failure to calculate the tax correctly for the income range $115,001 to $250,000. 2) Mishandling of low and high boundaries.

Equivalence class: All incomes greater than $250,000 ($250,001 to positive infinity)
Test cases: $250,001, max_number, max_number + 1
Risks: 1) Failure to calculate the tax correctly for incomes greater than $250,000. 2) Mishandling of low and upper boundary values. 3) Mishandling of extremely large numbers. 4) Mishandling of the highest boundary value and the value just beyond it.

Equivalence class: Non-numbers
Test cases: See the non-numbers table.
Risks: Mishandling of non-numbers.
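The sub-ranges in the table can be encoded as a hypothetical bracket classifier (the tax rates are omitted, since only the bracket boundaries matter for these boundary tests):

```python
BRACKETS = [(1, 22100), (22101, 53500), (53501, 115000), (115001, 250000)]

def bracket(income):
    """Hypothetical classifier for the 1993 income sub-ranges."""
    if income < 0:
        return "negative"
    if income == 0:
        return "zero"
    for number, (low, high) in enumerate(BRACKETS, start=1):
        if low <= income <= high:
            return f"bracket {number}"
    return "top"  # greater than $250,000

# Adjacent boundary values must land in different brackets.
assert bracket(22100) == "bracket 1"
assert bracket(22101) == "bracket 2"
assert bracket(250001) == "top"
```

An off-by-one error in any sub-range boundary is caught precisely by this kind of adjacent-boundary pair.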

Analyzing non-numbers
Let us first review the concept of ASCII characters. What does ASCII stand for?

Sowmya Padmanabhan, 2003

73

ASCII characters
ASCII, pronounced "ask-ee," is the acronym for American Standard Code for Information Interchange. Internally, the computer represents all characters in ASCII format.
Standard ASCII set: characters 0 to 127
Extended ASCII set: characters 128 to 255

Sowmya Padmanabhan, 2003

74

Standard ASCII
The first 32 characters (0-31) are control codes: NUL, SOH, STX, ETX, EOT, ENQ, ACK, BEL, BS, HT, LF, VT, FF, CR, SO, SI, DLE, DC1, DC2, DC3, DC4, NAK, SYN, ETB, CAN, EM, SUB, ESC, FS, GS, RS, and US. Character 32 is the blank space (Space Bar).

The printable characters run from 33 (!) through 126 (~). Within this range, the digits 0-9 occupy ASCII 48-57, the uppercase letters A-Z occupy ASCII 65-90, and the lowercase letters a-z occupy ASCII 97-122.

Extended ASCII Characters

Reference: http://www.telacommunications.com/nutshell/ascii.htm

ASCII characters
ASCII files:
are data or text files containing characters coded from the ASCII character set.
can be used as a common denominator for data conversions. Two programs dealing with different data formats can exchange data by inputting and outputting data in the form of ASCII files.
For more details visit:

http://www.telacommunications.com/nutshell/ascii.htm

Sowmya Padmanabhan, 2003

75

Why bother about ASCII characters in our analysis?


Consider the case of an integer field. This field should accept only values that contain digits 0-9 (ASCII 48-57) as the constituent characters. Assume that the following is the corresponding pseudo code for the internal logic of this program:
if ASCII (char) >= 48 and ASCII (char) <= 57 then accept else reject
Sowmya Padmanabhan, 2003 76
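The pseudocode above maps directly onto a character check. This sketch assumes ASCII input; in Python, `ord` gives the character's code:

```python
def is_valid_digit(ch):
    """Accept exactly the characters with ASCII codes 48-57 ('0'-'9')."""
    return 48 <= ord(ch) <= 57

assert all(is_valid_digit(c) for c in "0123456789")
assert not is_valid_digit('/')   # ASCII 47, just beneath '0'
assert not is_valid_digit(':')   # ASCII 58, just beyond '9'
```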

Why bother about ASCII characters in our analysis?


Imagine what happens if the programmer messes up any of the following:
The relational operators (<=, >=)
The conditional values (48, 57)

Sowmya Padmanabhan, 2003

77

Why bother about ASCII characters in our analysis?


Non-numbers will also be accepted as valid inputs:

if ASCII (char) <= 48 and ASCII (char) <= 57
if ASCII (char) >= 48 and ASCII (char) >= 57
if ASCII (char) >= 46 and ASCII (char) <= 57
if ASCII (char) >= 48 and ASCII (char) <= 59

Digits will be rejected:

if ASCII (char) > 48 and ASCII (char) <= 57
if ASCII (char) >= 48 and ASCII (char) < 57
if ASCII (char) >= 49 and ASCII (char) <= 57
if ASCII (char) >= 48 and ASCII (char) <= 56
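Two of these mutations, written out as hypothetical buggy checks, show exactly which boundary characters expose them:

```python
def correct(ch):
    return 48 <= ord(ch) <= 57

def lower_bound_widened(ch):    # >= 46 instead of >= 48
    return 46 <= ord(ch) <= 57

def upper_bound_clipped(ch):    # <= 56 instead of <= 57
    return 48 <= ord(ch) <= 56

# '.' (ASCII 46) slips through the widened lower bound:
assert lower_bound_widened('.') and not correct('.')
# '9' (ASCII 57) is wrongly rejected by the clipped upper bound:
assert not upper_bound_clipped('9') and correct('9')
```

Testing the boundary characters '/', '0', '9', and ':' (and their neighbors such as '.') is precisely what catches each of these mutations.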
Sowmya Padmanabhan, 2003

78

Analyzing non-numbers Example 6


Identify the risks associated with inputting non-numbers in the Taxable Income field and list the identified risks.

Sowmya Padmanabhan, 2003

79

Analyzing non-numbers Example 6


Failure to process values containing only digits 0-9 correctly
Mishandling of characters just beneath and beyond the ASCII range for 0-9 (ASCII 48-57)
Mishandling of commas
Mishandling of characters just beneath and beyond the ASCII value for the comma (,)
Mishandling of decimal points
Mishandling of characters just beneath and beyond the ASCII value for the decimal point (.)
Mishandling of the dollar sign ($)
Mishandling of characters just beneath and beyond the ASCII value for the dollar sign ($)
Sowmya Padmanabhan, 2003 80

Analyzing non-numbers Example 6


Partition the input domain into equivalence classes based on the risks identified.

Sowmya Padmanabhan, 2003

81

Variable: tax-inc

Equivalence class 1: All numeric values having only digits 0-9.
Test cases (best representatives): 1) An integer value containing 0 and 9.
Risks: 1) Failure to process values containing only digits 0-9 correctly. 2) Mishandling of the lower and upper boundary values of the ASCII range 48-57 for digits 0-9.

Equivalence class 2: All numeric values containing one or more commas.
Test cases: 1) A valid number that has at least one comma ($22,100).
Risks: 1) Mishandling of commas by considering them invalid characters and rejecting them.

Equivalence class 3: All values containing characters beneath or/and beyond the ASCII value for , (ASCII value 44).
Test cases: 1) A value containing + (ASCII value 43). 2) A value containing - (ASCII value 45).
Risks: 1) Mishandling of characters just beneath and beyond the ASCII value for comma. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively.

Equivalence class 4: All numeric values containing decimal point(s).
Test cases: 1) A value containing one decimal point in the right place. 2) A value containing two decimal points.
Risks: 1) Mishandling of one decimal point by considering it an invalid character and rejecting it. 2) Failure to reject more than one decimal point in one numeric value.

Equivalence class 5: All values containing characters beneath or/and beyond the ASCII value for . (ASCII value 46).
Test cases: 1) A value containing - (ASCII value 45). 2) A value containing / (ASCII value 47).
Risks: 1) Mishandling of characters just beneath and beyond the ASCII value for decimal point. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively.

Equivalence class 6: All numeric values containing the $ sign.
Test cases: 1) A value containing one $ preceding the very first character. 2) A value containing two $ signs, both preceding the first character. 3) A value containing one $ after the very first character.
Risks: 1) Mishandling of the $ sign by considering it a special character and rejecting it. 2) Failure to reject two dollar signs in one numeric value. 3) Failure to reject a value that has the dollar sign at a position other than preceding the first character.

Equivalence class 7: All values containing characters beneath or/and beyond the ASCII value for $ (ASCII value 36).
Test cases: 1) A value containing # (ASCII value 35). 2) A value containing % (ASCII value 37).
Risks: 1) Mishandling of characters just beneath and beyond the ASCII value for the dollar sign. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively.

Equivalence class 8: All values containing characters beneath or/and beyond the ASCII sub-range 48-57 for digits 0-9.
Test cases: 1) A value containing / (ASCII value 47). 2) A value containing : (ASCII value 58).
Risks: 1) Mishandling of characters just beneath and beyond the ASCII range 48-57 for digits 0-9. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively.

Equivalence class 9 (special case): All values containing a space in between.
Test cases: 1) A value containing a white space between digits ($22, 001).
Risks: 1) Mishandling of white space.
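The boundary-neighbor characters used throughout this table (#, %, +, -, /, :) can be derived mechanically from the ASCII codes of the allowed characters. A minimal sketch in Python; the helper name is ours, not part of any program under test:

```python
# Sketch: derive the ASCII boundary-neighbor test characters for the
# special characters allowed in a currency field such as tax-inc.

def ascii_neighbors(ch):
    """Return the characters just beneath and beyond ch in ASCII order."""
    code = ord(ch)
    return chr(code - 1), chr(code + 1)

# Allowed special characters in the example: $, comma, decimal point.
neighbors = {ch: ascii_neighbors(ch) for ch in "$,."}
# '$' (36) -> ('#', '%'); ',' (44) -> ('+', '-'); '.' (46) -> ('-', '/')

# Neighbors of the digit sub-range 48-57:
digit_neighbors = (chr(ord("0") - 1), chr(ord("9") + 1))  # ('/', ':')
```

Each derived character becomes one test value; the point is that the probes come from the ASCII codes, not from guessing.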


Testing String fields


We encounter string fields every day, everywhere. First name, last name, course name, city name, qualification, degree: in fact, any field that accepts characters including non-digits as input (unless it is specified that the field takes strictly numeric values) is an example of a string field. Please note that a text field does not necessarily take only string values.

Testing String fields


Example 7:

A string variable s has to have a letter for the first character; the rest of them can be any characters. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

Testing String fields Example 7


Step 1:
Represent this variable as a mathematical range expression.


Testing String fields Example 7


First character
(A-Z)
65 <= ASCII (first character) <= 90

(a-z)
97 <= ASCII (first character) <= 122

Remaining characters
0 <= ASCII (remaining characters) <= 127
Basically all standard ASCII characters.
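These range expressions translate directly into a check on ASCII codes. A minimal sketch with boundary-neighbor probes; the function name is illustrative, not part of any program under test:

```python
def first_char_is_letter(s):
    """True if the first character of s is a letter (A-Z: 65-90, a-z: 97-122)."""
    if not s:
        return False
    code = ord(s[0])
    return 65 <= code <= 90 or 97 <= code <= 122

# Boundary probes: '@' (64) and '[' (91) sit just outside A-Z;
# '`' (96) and '{' (123) sit just outside a-z.
cases = {
    "Apple": True, "Zoo": True, "apple": True, "zoo": True,
    "@x": False, "[x": False, "`x": False, "{x": False,
}
```

The eight cases cover both ends of both letter sub-ranges and their nearest neighbors.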

Testing String fields Example 7


Step 2:

Determine what the input domain is.


Testing String fields Example 7


The input domain is the set of all possible values that can ever be inputted to variable s.


Testing String fields Example 7


Step 3:
Identify the risks associated with the variable.


Testing String fields Example 7


Failure to process strings correctly that have letters as their first character
Mishandling of string values that have non-letters as the first character
Failure to process a string correctly that has characters from the standard ASCII set for the remaining characters
Mishandling of characters that belong to the extended ASCII set

Testing String fields Example 7


Step 4:
Partition the input domain into equivalence classes based on the risks identified.


Variable: s

Equivalence class 1: All strings with a letter as the first character (A-Z: ASCII codes 65-90, or a-z: ASCII codes 97-122) and the remaining characters from the standard ASCII set.
Test cases (best representatives): Strings whose first character is the letter A, Z, a, or z (four test cases), with the remaining characters drawn from the standard ASCII set; a string whose first character is one of a, A, z, or Z and whose second character corresponds to ASCII code 0; and a string whose first character is one of a, A, z, or Z and whose second character corresponds to ASCII code 127.
Risks: 1) Failure to process strings correctly that have uppercase letters as their first character and standard ASCII characters for the remaining characters. 2) Mishandling of the lower and upper boundary values. 3) Failure to process strings correctly that have lowercase letters as their first character and standard ASCII characters for the remaining characters. 4) Mishandling of the lower and upper boundary values. 5) Failure to process strings correctly whose remaining characters are in the standard ASCII set. 6) Mishandling of the lower and upper boundary values.

Equivalence class 2: All strings with the first character being NOT a letter and the remaining characters from the standard ASCII set.
Test cases: Strings whose first character is @ (ASCII 64), [ (ASCII 91), ` (ASCII 96), or { (ASCII 123), the characters just outside the two letter sub-ranges.
Risks: 1) Mishandling of strings not having letters as their first character. 2) Mishandling of characters just beneath and beyond the lower and upper boundaries of the ASCII sub-ranges for uppercase and lowercase letters respectively.

Equivalence class 3: All strings having at least one character from the extended ASCII set (>127).
Test cases: A string containing the character corresponding to ASCII code 128, which belongs to the extended ASCII set.
Risks: 1) Mishandling of strings that have characters from the extended ASCII set. 2) Mishandling of the value just beyond the upper boundary of the standard ASCII set.


Standard ASCII

The first 32 characters (0-31) are control codes (NUL, SOH, STX, ..., ESC, ..., US); ASCII 32 is the blank space (space bar). The standard set ends at 127. The sub-ranges most useful for boundary analysis are:

Control codes: 0-31
Space: 32
Digits 0-9: 48-57
Uppercase letters A-Z: 65-90
Lowercase letters a-z: 97-122
Special characters: 33-47, 58-64, 91-96 and 123-126

Extended ASCII characters have codes above 127.

Reference: http://www.telacommunications.com/nutshell/ascii.htm

Appendix H: Day 3 Lecture (Day 3 Lecture in Training Material)

Session 3: Domain Testing


Sowmya Padmanabhan


Testing multidimensional variables


What is meant by dimension of a variable?
It is a particular way of looking at a variable based on some property of the variable.
One-dimensional variable: There is only one way of analyzing such a variable.
Multidimensional variable: There is more than one way of analyzing such a variable.

Testing multidimensional variables


You are given a bunch of numbers and are told to represent or classify them in terms of some property. Then, one way to do this is to consider the bigness or size property and arrange them from the number that has the least value to the number that has the biggest value.

What is the dimension here?



Testing multidimensional variables


The bigness or size property. Now, we could also classify these as integers and reals with everything that is just an integer in the integer category and the other floating-point or decimal numbers in the real category.

So, what is our dimension of classification here?



Testing multidimensional variables


It is data type. A bunch of numbers with no uniformity (that is, not restricted to only integers, only decimals, only numbers greater than 1000, or only numbers less than 23) will be multi-dimensional. Depending on what is required in your analysis, you might pick one, several, or all of the relevant dimensions and base your analysis on them.

Testing multidimensional variables


If the variable is students of FIT, then there are many angles from which you could look at the values of this variable, which would consist of all students of FIT. We could look at the students along the following dimensions:
Graduate or undergraduate level
For graduates: Degree (Masters, PhD)
For undergraduates: Year (Freshman, Sophomore, Junior and Senior)
Major (Computer Sciences, Software Engineering, Computer Engineering, Chemical Engineering, etc.), and many more
Hence the variable students of FIT is multi-dimensional.

General analysis of multidimensional numeric fields


Example 1:
Given a numeric field/variable, find its general dimensions. Develop a series of tests by performing equivalence class analysis and boundary value analysis on each of the dimensions of this variable.

Multidimensional numeric fields Example 1


Step 1:
Identify the general dimensions of the variable.


Multidimensional numeric fields Example 1


Length
Type of characters
Size (or Magnitude)


Multidimensional numeric fields

Example 1
Step 2:
Represent the variable as a mathematical range expression along each of the identified dimensions.


Multidimensional numeric fields

Example 1
Length
Min-allowed-length <= length (variable) <= Max-allowed-length

Type of characters

ASCII (0) <= ASCII (characters) <= ASCII (9)

If the numeric field represents decimal or floating-point values, then in such cases decimal point is also an allowed character, hence the ASCII of the decimal point will represent a single boundary.

Size
Min-allowed-value <= size (variable) <= Max-allowed-value
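Each of these three range expressions yields the same six classic boundary probes: just beneath, on, and just beyond each boundary. A sketch with hypothetical limits (length 1-10, size 0-999) chosen purely for illustration:

```python
def boundary_probes(lo, hi):
    """Boundary-value probes for a closed range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical limits for a numeric field.
length_probes = boundary_probes(1, 10)   # [0, 1, 2, 9, 10, 11]
size_probes = boundary_probes(0, 999)    # [-1, 0, 1, 998, 999, 1000]

# Type-of-characters dimension: ASCII neighbors of the digit range 48-57.
char_probes = [chr(47), chr(48), chr(57), chr(58)]  # ['/', '0', '9', ':']
```

The same generator applies to any dimension that can be expressed as a range.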

Multidimensional numeric fields

Example 1
Step 3:

Determine what the input domain is.


Multidimensional numeric fields

Example 1
The input domain is the set of all possible values that can ever be inputted to the numeric variable.


Multidimensional numeric fields

Example 1
Step 4:
Identify the risks along each dimension of the variable.


Multidimensional numeric fields

Example 1
Length

Failure to process numeric values correctly that have their lengths within the allowed range
Mishandling of numeric values that have their lengths outside the allowed range.


Multidimensional numeric fields

Example 1
Type of characters

Failure to process numeric values correctly that only have allowed characters
Mishandling of numeric values that have characters other than the allowed ones


Multidimensional numeric fields

Example 1
Size

Failure to process numeric values correctly that are within the allowed range of sizes
Mishandling of numeric values that are outside the allowed range of sizes


Testing numeric fields Example 1


Step 5:
Partition the input domain into equivalence classes based on the risks identified.



General analysis of multidimensional string fields


Example 2:
Given a string field/variable, find its general dimensions. Develop a series of tests by performing equivalence class analysis and boundary value analysis on each of the dimensions of this variable.

Multidimensional string fields Example 2


Step 1:
Identify the general dimensions of the variable.


Multidimensional string fields Example 2


Length
Type of characters


Multidimensional string fields

Example 2
Step 2:
Represent the variable as a mathematical range expression along each of the identified dimensions.


Multidimensional string fields

Example 2
Length
Min-allowed-length <= length (variable) <= Max-allowed-length

Type of characters
ASCII (0) <= ASCII (characters) <= ASCII (127)

In particular situations depending on what characters are allowed and what are not, you would have one or more subranges for each of the groups of allowed characters.


Multidimensional string fields

Example 2
Step 3:

Determine what the input domain is.


Multidimensional string fields

Example 2
The input domain is the set of all possible values that can ever be inputted to the string variable.


Multidimensional string fields

Example 2
Step 4:
Identify the risks along each dimension of the variable.


Multidimensional string fields

Example 2
Length

Failure to process string values correctly that have their lengths within the allowed range
Mishandling of string values that have their lengths outside the allowed range.


Multidimensional string fields

Example 2
Type of characters

Failure to process string values correctly that only have allowed characters
Mishandling of string values that have characters other than the allowed ones

Multidimensional string fields

Example 2
Step 5:
Partition the input domain into equivalence classes based on the risks identified.


Multidimensional string fields


Example 3:

ZLTech has a web-based mail system. A user has to enter his/her user name and password and then click on the Sign in button to log in and access his/her inbox. The user name can have five to fifteen characters. Also, only digits and lowercase letters are allowed for the user name.
What variables could be involved in analysis of this group of facts?
What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?
Identify the relevant dimensions for this variable.
Develop a series of tests by performing equivalence class analysis and boundary value analysis on each of the identified dimensions of this variable.

Multidimensional string fields Example 3


Step 1:
What variables could be involved in analysis of this group of facts?


Multidimensional string fields Example 3


User name, password, sign in button, mail server, inbox, user


Multidimensional string fields Example 3


Step 2:

What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?


Multidimensional string fields Example 3


User name


Multidimensional string fields Example 3


Step 3:
Identify the general dimensions of the variable.


Multidimensional string fields Example 3


Length
Type of characters


Multidimensional string fields

Example 3
Step 4:
Represent the variable as a mathematical range expression along each of the identified dimensions.


Multidimensional string fields

Example 3
Length
5 <= length (variable) <= 15

Type of characters
Digits: 48 <= ASCII (characters) <= 57
Lowercase letters: 97 <= ASCII (characters) <= 122
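The two range expressions can be combined into a single validity check for the user name. A minimal sketch of the stated rules; the function name is ours, not ZLTech's:

```python
def username_valid(name):
    """ZLTech's stated rules: 5-15 characters, each a digit (ASCII 48-57)
    or a lowercase letter (ASCII 97-122)."""
    if not (5 <= len(name) <= 15):
        return False
    return all(48 <= ord(c) <= 57 or 97 <= ord(c) <= 122 for c in name)

# Boundary probes along both dimensions:
assert username_valid("abc12")        # length 5, lower boundary
assert username_valid("a" * 15)       # length 15, upper boundary
assert not username_valid("abcd")     # length 4, just beneath
assert not username_valid("a" * 16)   # length 16, just beyond
assert not username_valid("abcd/")    # '/' (47), just beneath '0'
assert not username_valid("abcd:")    # ':' (58), just beyond '9'
assert not username_valid("abcdA")    # uppercase, outside both sub-ranges
```

Note how each assertion corresponds to one boundary of one dimension.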


Multidimensional string fields

Example 3
Step 5:

Determine what the input domain is.


Multidimensional string fields

Example 3
The input domain is the set of all possible values that can ever be inputted to the user name variable.


Multidimensional string fields

Example 3
Step 6:
Identify the risks associated with each dimension of the variable.


Multidimensional string fields

Example 3
Length
Failure to process user names correctly that have their lengths within the allowed range
Mishandling of user names that have their lengths outside the allowed range.


Multidimensional string fields

Example 3
Type of characters

Failure to process user names correctly that only have digits and lowercase letters
Mishandling of user names that have characters other than the allowed ones


Multidimensional string fields

Example 3
Step 7:
Partition the input domain into equivalence classes based on the risks identified.


Multidimensional string fields

Example 3
Non-existent or existent user names
We need to make sure that the test cases in the equivalence class table address the following risks:
Failure to process valid user names (valid syntax) that exist in the database. Mishandling of user names that are valid in terms of the syntax but do not exist in the database.

Testing Enumerated fields


Range data type variable: A variable whose values lie in a range. A range variable is linearizable, that is, it can be mapped onto a number line. Boundary value analysis can be applied only to range data type variables. We have dealt with range data types so far, but we often encounter enumerated data types also. Fields presented as drop-down combo boxes, lists, checkboxes, radio buttons, etc. are all examples of enumerated data types.

Testing Enumerated fields


Enumerated variable: A variable which takes only a set of values/set of options. Boundary value analysis usually cannot be applied to such variables.


Testing Enumerated fields


Example 4:
In the online tax return form of some country, the field marital status has the following options available:
Single
Married
Married and separated
Divorced
Only one of the above options can be selected.

Develop a series of tests by performing equivalence class analysis on this variable.



Testing Enumerated fields Example 4


Step 1:
Can you represent this variable as a mathematical range expression?


Testing Enumerated fields Example 4


No. This is an enumerated type variable.


Testing Enumerated fields Example 4


Step 2:

Determine what the input domain is.


Testing Enumerated fields Example 4


The input domain is the set of all possible values that can ever be inputted to the marital status field.


Testing Enumerated fields Example 4


Step 3:
Identify the risks associated with this variable.


Testing Enumerated fields Example 4


Failure to process tax return applications correctly that have the Single option selected
Failure to process tax return applications correctly that have the Married option selected
Failure to process tax return applications correctly that have the Married and separated option selected
Failure to process tax return applications correctly that have the Divorced option selected
Mishandling of a choice that is outside the available set of options
An additional risk for enumerated fields allowing multiple selections is mishandling of multiple options, which occurs when two or more options are selected. In this example, this risk could be stated as failure to prevent multiple selections.
Yet another risk for enumerated fields is mishandling of tax return applications that have no option selected. In this example, this is a valid risk since it is not mentioned whether an option is selected by default.

Testing Enumerated fields Example 4


Step 4:
Partition the input domain into equivalence classes based on the risks identified.
The tax return program will behave uniquely with respect to each of the options; hence each of these options represents an equivalence class.
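Since each option is its own equivalence class, the test set is simply one selection per option plus one value outside the set. A sketch; the handler below is a hypothetical stand-in for the tax return program, not its real implementation:

```python
# Each available option is an equivalence class of its own.
MARITAL_STATUS_OPTIONS = ("Single", "Married",
                          "Married and separated", "Divorced")

def process_status(choice):
    """Hypothetical stand-in: accept a known option, reject anything else."""
    return "accepted" if choice in MARITAL_STATUS_OPTIONS else "rejected"

# One test per class, plus the out-of-set class:
tests = [(opt, "accepted") for opt in MARITAL_STATUS_OPTIONS]
tests.append(("Widowed", "rejected"))  # a choice outside the available set
```

No boundary analysis applies here; the classes are not ordered, so every option must be exercised once.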


Identifying variables of a given program


You are made to sit down in front of a computer and told to test some live application/program.
How do you begin doing domain testing?


Identifying variables of a given program


First, you identify the various functions of the application.
Second, you take up each individual function and identify the variables/fields of the function.
Third, you take up each variable and develop a series of tests for it by performing equivalence class and boundary value analysis on it, just as we have been doing so far in all the examples we have seen and the exercises you have solved.
Fourth, you perform combination testing of independent variables of the function and test dependent variables separately. We shall look at combination testing later.

Identifying variables of a given program


Let us look at few live application functions and try to identify the variables of each of the functions.


Identifying variables of a given program


Example 5:
For each of the following dialog boxes, identify variables. For each variable, identify its data type and state whether it is an input or output variable.


Identifying variables of a given program Example 5a


When you open Microsoft PowerPoint, you are presented the following dialog window:


Identifying variables of a given program Example 5a

What are the input variables of this function?


Identifying variables of a given program Example 5a Input variables


Create a new presentation using (radio buttons, enumerated)
Don't show this dialog box again (checkbox, enumerated)
Open an existing presentation (non-editable list box, enumerated)


Identifying variables of a given program Example 5a

What are the output variables of this function?


Identifying variables of a given program Example 5a Output variables No obviously visible output variables.


Identifying variables of a given program Example 5b


In Internet Explorer, clicking on the Add a favorite option in the Favorites menu brings up the following dialog box:


Identifying variables of a given program Example 5b

What are the input variables of this function?


Identifying variables of a given program Example 5b Input variables


Make available offline (checkbox, enumerated)
Name (string field, range type)
Create in (non-editable drop-down combo box, enumerated)


Identifying variables of a given program Example 5b

What are the output variables of this function?


Identifying variables of a given program Example 5b Output variables No obviously visible output variables.



Developing test cases for given program functions


Example 6:
For the File Open function of the notepad program, identify its variables, the dimensions, data type along each dimension. Develop a series of test cases by performing equivalence class analysis and boundary-value analysis on each of the variables. A screenshot of the function is provided on the right hand side.


Developing test cases for given program functions Example 6

Step 1:
Identify the function variables.


Developing test cases for given program functions Example 6

Input variables
Look in
File name
Files of type
Encoding

Output variable
Display of files

Developing test cases for given program functions Example 6

Step 2:

Analyze the values taken by each variable


Lets address the following three questions for each identified variable:
What kind of values does the variable take? What are the characteristics of these values? Is the variable multidimensional? What are the dimensions? Can you map the variable to standard data types like int, double, float, string, char, etc along each of its dimensions?

Developing test cases for given program functions Example 6

What kind of values does each of the variables of File Open function take?


Developing test cases for given program functions Example 6


Look in:
A drop-down combo box that lists the available sources of files (disk drives, etc.), the directories in each of them, and then the files in each of those directories.

File Name:
Word(s), text.

Files of type:
A drop-down combo box that lists the types of files that can be opened with the Notepad program, which are Text Documents (*.txt) and All Files.

Encoding:
A drop-down combo box that lists the different encodings available for the files to be opened, which are ANSI, Unicode, Unicode big endian and UTF-8.

Developing test cases for given program functions Example 6

What are the characteristics of these values?


Developing test cases for given program functions Example 6

Look in:
The options in this combo box, and the successive options within each option, follow a directory structure. Only one of the available options can be selected at a time. The user cannot enter an external value for this field.

Developing test cases for given program functions Example 6 File Name:
Length: the file name could be of variable length depending on what file you are trying to open.
Constituent components: characters from the English alphabet, special characters, and any character on the keyboard, including space.
The text may or may not represent a file that is present in the currently selected directory, which was determined by Look in.
The text could be an invalid file name, which means that the text does not abide by the rules set for representing a valid file name.

Developing test cases for given program functions Example 6

Files of type:
Only one of the two available options in this combo box can be selected which means they are mutually exclusive values. A mutually exclusive option field means if one option is selected then the other option has to be unselected and vice versa. The user cannot enter an external value for this field.

Developing test cases for given program functions Example 6

Encoding:
Only one of the four available options in this combo box can be selected which means they are mutually exclusive values. A mutually exclusive option field means if one option is selected then the other option has to be unselected and vice versa. The user cannot enter an external value for this field.

Developing test cases for given program functions Example 6

Is the variable multidimensional? What are the dimensions? Can you map the variable to standard data types like int, double, float, string, char, etc along each of its dimensions?


Developing test cases for given program functions Example 6


Look in:
This is one-dimensional and the dimension is the storage location. It is an enumerated data type. For the purposes of our analysis, let's assume that the File Open function recognizes three drives corresponding to the hard disk, floppy and CD-ROM drives. Let's call these drives drive1, drive2 and drive3 respectively.

File Name:

File name is multi-dimensional. The dimensions are length and type of characters. Length can be considered as int data type. The constituent characters can be considered as string data type.

Developing test cases for given program functions Example 6

Files of type: This is one-dimensional. Enumerated data type. Encoding: This is one-dimensional. Enumerated data type.

Developing test cases for given program functions Example 6

Step 3:
Can you represent each of these variables as a mathematical range expression?


Developing test cases for given program functions Example 6


Only File Name can be represented as a mathematical range expression along each of its dimensions; the other variables are enumerated. The analysis is similar to the general analysis done on a string field.

Length
Min-allowed-by-function <= length (File Name) <= Max-allowed-by-function

Type of characters
Lowest ASCII of allowed character set <= ASCII (characters in File Name) <= Highest ASCII of allowed character set


Developing test cases for given program functions Example 6

Step 4:
Identify all the risks for this function. Do not forget to identify risks for each dimension of every variable.


Developing test cases for given program functions Example 6

Look In
Failure to open files existing in drive1 correctly
Failure to open files existing in drive2 correctly
Failure to open files existing in drive3 correctly
Mishandling when opening files from nonexistent drives or from drives that currently have no storage media in them.

Developing test cases for given program functions Example 6


File Name
Length:
Failure to open files of allowable lengths correctly
Mishandling of file names whose lengths are outside the allowable range.

Type of characters:
Failure to open files correctly that contain only characters allowed for file names in the File name field.
Mishandling of file names that contain non-allowed characters.

Developing test cases for given program functions Example 6

Files of type
Failure to open files of the text type correctly. Mishandling of non-text files.


Developing test cases for given program functions Example 6

Encoding
Failure to open files in the ANSI encoding format correctly
Failure to open files in the Unicode encoding format correctly
Failure to open files in the Unicode big endian encoding format correctly
Failure to open files in the UTF-8 encoding format correctly

Developing test cases for given program functions Example 6

Step 5:

Determine what the input domain is.


Developing test cases for given program functions Example 6

The input domain of each variable is the set of all possible values that can ever be inputted to the variable.


Developing test cases for given program functions Example 6

Step 6:
Partition the input domain into equivalence classes based on risks identified.



Developing test cases for given program functions


Example 7:
For the Flip and Rotate function of the Paint program, identify its variables, the dimensions, data type along each dimension. Develop a series of test cases by performing equivalence class analysis and boundary-value analysis on each of the variables. A screenshot of the function is provided on the right hand side.


Developing test cases for given program functions Example 7

Step 1:
Identify the function variables.


Developing test cases for given program functions Example 7

Flip or rotate Rotate by angle


Developing test cases for given program functions Example 7

Step 2:

Analyze the values taken by each variable


Lets address the following three questions for each identified variable:
What kind of values does the variable take? What are the characteristics of these values? Is the variable multidimensional? What are the dimensions? Can you map the variable to standard data types like int, double, float, string, char, etc along each of its dimensions?

Developing test cases for given program functions Example 7

What kind of values does each of the variables of the Flip and rotate function take?


Developing test cases for given program functions Example 7


Flip or rotate: Three options available in the form of radio buttons.
Rotate by angle: Three options available in the form of radio buttons.


Developing test cases for given program functions Example 7

What are the characteristics of these values?


Developing test cases for given program functions Example 7

Flip or rotate:
Only one of the options available can be selected at a time (mutually exclusive values). The user cannot enter an external value for this field.

Developing test cases for given program functions Example 7

Rotate by angle:
Only one of the options available can be selected at a time. The user cannot enter an external value for this field. Selecting a value for this field is possible only if Rotate by angle was the option selected for the Flip or rotate field.

Developing test cases for given program functions Example 7

Is the variable multidimensional? What are the dimensions? Can you map the variable to standard data types like int, double, float, string, char, etc along each of its dimensions?


Developing test cases for given program functions Example 7


Flip or rotate: This is one-dimensional. Enumerated data type.
Rotate by angle: This is one-dimensional. Enumerated data type.


Developing test cases for given program functions Example 7

Step 3:
Can you represent each of these variables as a mathematical range expression?


Developing test cases for given program functions Example 7

No. All the variables are enumerated.


Developing test cases for given program functions Example 7

Step 4:
Identify all the risks for this function. Do not forget to identify risks for each dimension of every variable.


Developing test cases for given program functions Example 7

Flip or rotate
Failure to flip the paint file horizontally
Failure to flip the paint file vertically
Failure to rotate the paint file


Developing test cases for given program functions Example 7

Rotate by angle
Failure to rotate the paint file by 90 degrees
Failure to rotate the paint file by 180 degrees
Failure to rotate the paint file by 270 degrees


Developing test cases for given program functions Example 7

Step 5:

Determine what the input domain is.


Developing test cases for given program functions Example 7

The input domain of each variable is the set of all possible values that can ever be input to the variable.


Developing test cases for given program functions Example 7

Step 6:
Partition the input domain into equivalence classes based on risks identified.



Variable: Numeric field

Dimension: Length

Equivalence class: All values having lengths in the range of Min-allowed-length to Max-allowed-length
Test cases (best representatives): A value having: 1) Min-allowed-length 2) Max-allowed-length (these two are at the function level)
Risks: 1) Failure to process numeric values correctly that have their lengths within the allowed range. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values having lengths outside the allowed range
Test cases (best representatives): A value having: 1) Min-allowed-length - 1 2) Max-allowed-length + 1 3) Zero length (if this is other than the min-allowed or min-allowed - 1) 4) Max-length allowed by the system, if different from that of the function 5) Max-length allowed by the system + 1, if different from that of the function
Risks: 1) Mishandling of numeric values that have their lengths outside the allowed range. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively. 3) Mishandling of too-small and too-long values, and beyond, at the system level.

Dimension: Type of characters

Equivalence class: All values containing allowed characters: 0-9 for integer and floating-point values, and an additional decimal point for floating-point values
Test cases (best representatives): For integer and floating-point values: 1) Value having 0 as one of the digits 2) Value having 9 as one of the digits. For floating-point values only: 3) Value having one decimal point in the right place
Risks: 1) Failure to correctly process numeric values that have digits 0-9, and one decimal point for floating-point values. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values containing at least one character that is NOT allowed
Test cases (best representatives): For integer and floating-point values: 1) Value having the character corresponding to the ASCII code just beneath the ASCII for 0 2) Value having the character corresponding to the ASCII code just beyond the ASCII for 9. For floating-point values only: 3) Value having more than one decimal point 4) Value having the character corresponding to the ASCII code just beneath that of the decimal point 5) Value having the character corresponding to the ASCII code just beyond that of the decimal point
Risks: 1) Mishandling of values that have non-digits. 2) Mishandling of values having more than one decimal point, in the case of a floating-point field. 3) Mishandling of lower and upper boundary values.

Variable: Numeric field

Dimension: Size

Equivalence class: All values within the size range of min-allowed-value to max-allowed-value
Test cases (best representatives): 1) Min-allowed-value 2) Max-allowed-value (these two are at the function level)
Risks: 1) Failure to process numeric values correctly that are within the allowed range of sizes. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values having sizes outside the allowed range
Test cases (best representatives): 1) Min-allowed-value - 1 2) Max-allowed-value + 1 3) Null 4) Minimum possible number allowed to be entered in the field at the system level 5) Maximum possible number allowed to be entered in the field at the system level 6) Min-possible number - 1 at the system level 7) Max-possible number + 1 at the system level
Risks: 1) Mishandling of numeric values that are outside the allowed range of sizes. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively. 3) Mishandling of values that are too small and too big at the system level, and even beyond.
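The boundary picks for the size dimension follow a mechanical pattern: the two class boundaries, plus the values just outside them. The sketch below is illustrative and not part of the training material; the range 1 to 99 is an assumed example.

```python
def size_boundary_tests(min_allowed, max_allowed):
    """Best representatives for the size dimension of a numeric field:
    the two boundaries of the valid class, and the values just outside
    them (the simplest members of the invalid class)."""
    valid = [min_allowed, max_allowed]            # in-range equivalence class
    invalid = [min_allowed - 1, max_allowed + 1]  # out-of-range equivalence class
    return valid, invalid

# Assumed example range: the field accepts 1 through 99.
valid, invalid = size_boundary_tests(1, 99)
print(valid)    # [1, 99]
print(invalid)  # [0, 100]
```

System-level extremes (the smallest and largest numbers the platform itself accepts) would be appended to the invalid list in the same way.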

Variable: String field

Dimension: Length

Equivalence class: All strings with lengths in the range of Min-allowed-length to Max-allowed-length
Test cases (best representatives): A string having: 1) Min-allowed-length 2) Max-allowed-length (these two are at the function level)
Risks: 1) Failure to process strings correctly that have their lengths within the allowed range. 2) Mishandling of lower and upper boundary values.

Equivalence class: All strings with lengths outside the allowed range
Test cases (best representatives): A string having: 1) Min-allowed-length - 1 2) Max-allowed-length + 1 3) Zero length (if this is other than the min-allowed or min-allowed - 1) 4) Max-length allowed by the system, if different from that of the function 5) Max-length allowed by the system + 1, if different from that of the function
Risks: 1) Mishandling of strings that have their lengths outside the allowed range. 2) Mishandling of lengths just beneath and beyond the lower and upper boundaries respectively. 3) Mishandling of too-short and too-long strings, and beyond.

Dimension: Type of characters

Equivalence class: All strings containing allowed characters
Test cases (best representatives): For each sub-range of allowed characters: 1) A string containing the character with the lowest ASCII value in the sub-range 2) A string containing the character with the highest ASCII value in the sub-range
Risks: 1) Failure to process allowed characters correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All strings containing at least one character that is NOT allowed
Test cases (best representatives): For each sub-range of allowed characters: 1) A string containing the character whose ASCII code is just beneath that of the lowest character in the sub-range 2) A string containing the character whose ASCII code is just beyond that of the highest character in the sub-range 3) A string containing the character corresponding to ASCII 128
Risks: 1) Mishandling of strings that contain non-allowed characters. 2) Mishandling of values just beneath and beyond the lower and upper boundary values respectively. 3) Mishandling of characters from the extended ASCII set.

Variable: User name

Dimension: Length

Equivalence class: All values having lengths in the range of 5 to 15
Test cases (best representatives): A user name having length: 1) 5 2) 15
Risks: 1) Failure to process user names correctly that have their lengths within the allowed range. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values having lengths outside the allowed range
Test cases (best representatives): A user name having length: 1) 4 2) 16 3) 0 4) Max-length at the system level 5) Max-length + 1 at the system level
Risks: 1) Mishandling of user names that have their lengths outside the allowed range. 2) Mishandling of user names just beneath and beyond the lower and upper boundaries respectively. 3) Mishandling of too-short and too-long strings, and beyond.

Dimension: Type of characters

Equivalence class: All values containing allowed characters
Test cases (best representatives): Digits: a user name containing 1) 0 (ASCII code 48) 2) 9 (ASCII code 57). Lowercase characters: a user name containing 3) a (ASCII code 97) 4) z (ASCII code 122)
Risks: 1) Failure to process allowed characters correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values containing at least one character that is NOT allowed
Test cases (best representatives): Digits: a user name containing 1) the character with ASCII code 47 2) the character with ASCII code 58. Lowercase characters: a user name containing 3) the character with ASCII code 96 4) the character with ASCII code 123 5) the character corresponding to ASCII 128 of the extended ASCII set
Risks: 1) Mishandling of user names that have non-digits. 2) Mishandling of user names that contain characters that are NOT allowed. 3) Mishandling of values just beyond the lower and upper boundary values respectively. 4) Mishandling of characters from the extended ASCII set. 5) Mishandling of the value just beyond the upper boundary of the standard ASCII set.
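The length-dimension picks for the user name (allowed length 5 to 15) can be generated the same mechanical way. The helper below is illustrative, and the system-level maximum of 255 is an assumed stand-in value, not a figure from the text.

```python
def length_test_values(min_len, max_len, system_max):
    """Test-case lengths for a string field: the boundaries of the valid
    class, then the just-outside values, zero length, and the assumed
    system-level limit and the value just beyond it."""
    valid_lengths = [min_len, max_len]
    invalid_lengths = [min_len - 1, max_len + 1, 0, system_max, system_max + 1]
    return valid_lengths, invalid_lengths

# User name field: 5-15 characters; 255 is an assumed system-level limit.
valid, invalid = length_test_values(5, 15, 255)
print(valid)    # [5, 15]
print(invalid)  # [4, 16, 0, 255, 256]

# Concrete inputs of those lengths, e.g. strings of lowercase 'a':
samples = ["a" * n for n in valid + invalid]
```

The character-dimension picks (ASCII 47/48, 57/58, 96/97, 122/123, 128) would then be substituted into strings of a valid length, one deviation at a time.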

Variable: Marital status

Equivalence class: All applications that have the Single option selected for marital status
Test case: Single
Risks: 1) Failure to process tax return applications correctly that have the Single option selected

Equivalence class: All applications that have the Married option selected for marital status
Test case: Married
Risks: 1) Failure to process tax return applications correctly that have the Married option selected

Equivalence class: All applications that have the Married and separated option selected for marital status
Test case: Married and separated
Risks: 1) Failure to process tax return applications correctly that have the Married and separated option selected

Equivalence class: All applications that have the Divorced option selected for marital status
Test case: Divorced
Risks: 1) Failure to process tax return applications correctly that have the Divorced option selected

Equivalence class: All applications that have a value for marital status other than the four options available, which includes none
Test cases: 1) Any option other than the four 2) No option selected
Risks: 1) Failure to NOT allow inputting an external value that is other than the available set of options. 2) Mishandling of tax return applications that have no option selected.
Notes: The marital status field might be a non-editable field, like a group of radio buttons or a non-editable combo box, in which case selecting options outside of the available set will be impossible.

Equivalence class: All applications that have multiple options selected
Test case: Two options selected
Risks: Failure to NOT allow selection of multiple options at a time.
Notes: The specification of this example states that multiple selections are not possible.

Look In

Variable: Look In

Equivalence class: All files in drive 1
Test cases (best representatives): A file existing in: 1) drive 1, lowest-level directory 2) drive 1, highest-level directory
Risks: 1) Failure to open files existing in drive 1 correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All files in drive 2
Test cases (best representatives): A file existing in: 1) drive 2, lowest-level directory 2) drive 2, highest-level directory
Risks: 1) Failure to open files existing in drive 2 correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All files in drive 3
Test cases (best representatives): A file existing in: 1) drive 3, lowest-level directory 2) drive 3, highest-level directory
Risks: 1) Failure to open files existing in drive 3 correctly. 2) Mishandling of lower and upper boundary values.

Equivalence class: All files in currently non-existent drives, drives that currently have no storage in them, or corrupt drives
Test cases (best representatives): Any such drive can be the best representative
Risks: 1) Mishandling of opening files from non-existent drives or from drives that do not have any storage in them currently.

Variable: File Name

Dimension: Length

Equivalence class: All values having lengths in the range of Min-allowed-length-by-function to Max-allowed-length-by-function
Test cases (best representatives): A file name having length: 1) Min-allowed-length-by-function (1) 2) Max-allowed-length-by-function (215)
Risks: 1) Failure to process values correctly that have their lengths within the allowed range. 2) Mishandling of lower and upper boundary values.

Equivalence class: All values having lengths outside the allowed range
Test cases (best representatives): A file name having length: 1) Min-allowed-length-by-function - 1 (0; this is the same as the zero-length case) 2) Max-allowed-length-by-function + 1 (216) 3) Max-length at the system level (259) 4) Max-length + 1 at the system level (260)
Risks: 1) Mishandling of values that have their lengths outside the allowed range. 2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively. 3) Mishandling of too-short and too-long strings.

Dimension: Type of characters

Equivalence class: All values containing only allowed characters
Test cases (best representatives): For the allowed ASCII sub-range 32-33, a file name that contains: 1) the character (blank space) corresponding to ASCII 32 2) the character (!) corresponding to ASCII 33. For the allowed ASCII sub-range 35-41: 3) the character (#) corresponding to ASCII 35 4) the character ()) corresponding to ASCII 41. For the allowed ASCII sub-range 43-45: 5) the character (+) corresponding to ASCII 43 6) the character (-) corresponding to ASCII 45. Also: 7) the character (.) corresponding to ASCII 46, at a position other than the first position. For the allowed ASCII sub-range 48-57: 8) the character (0) corresponding to ASCII 48 9) the character (9) corresponding to ASCII 57. Also: 10) the character (;) corresponding to ASCII 59 11) the character (=) corresponding to ASCII 61. For the allowed ASCII sub-range 64-91: 12) the character (@) corresponding to ASCII 64 13) the character ([) corresponding to ASCII 91. For the allowed ASCII sub-range 93-123: 14) the character (]) corresponding to ASCII 93 15) the character ({) corresponding to ASCII 123. From the extended ASCII character set: 16) the character corresponding to ASCII 128 17) the character corresponding to ASCII 255.
Risks: 1) Failure to process allowed characters correctly. 2) Mishandling of lower and upper boundary values of each allowed ASCII range of characters. 3) Failure to correctly process characters from the extended ASCII character set.

Equivalence class: All values containing at least one character that is NOT allowed, or a valid character in a position that is NOT allowed
Test cases (best representatives): A file name that contains: 1) the character (Unit Separator) corresponding to ASCII 31 2) the character (") corresponding to ASCII 34 3) the character (*) corresponding to ASCII 42 4) the character (.) corresponding to ASCII 46, in the first position 5) the character (/) corresponding to ASCII 47 6) the character (:) corresponding to ASCII 58 7) the character (<) corresponding to ASCII 60 8) the character (>) corresponding to ASCII 62 9) the character (?) corresponding to ASCII 63 10) the character (\) corresponding to ASCII 92 11) the character (|) corresponding to ASCII 124
Risks: 1) Mishandling of values that contain characters that are NOT allowed. 2) Mishandling of valid characters in positions that are not allowed. 3) Mishandling of values just beneath and beyond the lower and upper boundary values of each allowed ASCII sub-range.

Files of Type

Variable: Files of type

Equivalence class: All text files
Test case (best representative): A file of text type
Risks: 1) Failure to open files of the text type correctly.

Equivalence class: All other file types
Test case (best representative): A file that is of a non-text type
Risks: 1) Mishandling of non-text types of files.

Encoding

Variable: Encoding

Equivalence class: All files being opened in ANSI encoding
Test case (best representative): Any file being opened in ANSI encoding
Risks: Failure to open files in the ANSI encoding format correctly

Equivalence class: All files being opened in Unicode encoding
Test case (best representative): Any file being opened in Unicode encoding
Risks: Failure to open files in the Unicode encoding format correctly

Equivalence class: All files being opened in Unicode big endian encoding
Test case (best representative): Any file being opened in Unicode big endian encoding
Risks: Failure to open files in the Unicode big endian encoding format correctly

Equivalence class: All files being opened in UTF-8 encoding
Test case (best representative): Any file being opened in UTF-8 encoding
Risks: Failure to open files in the UTF-8 encoding format correctly

Flip or rotate

Variable: Flip or rotate

Equivalence class: All files that have the Flip horizontal option selected
Test case (best representative): Any file that has the Flip horizontal option selected
Risks: Failure to flip the paint file correctly in the horizontal direction

Equivalence class: All files that have the Flip vertical option selected
Test case (best representative): Any file that has the Flip vertical option selected
Risks: Failure to flip the paint file correctly in the vertical direction

Equivalence class: All files that have the Rotate by angle option selected
Test cases (best representatives): Any file that has the Rotate by angle option selected, with the following angle selected: 1) 90 degrees 2) 180 degrees 3) 270 degrees
Risks: Failure to rotate the paint image correctly by the specified angle.

Appendix I: Day 4 Lecture (Day 4 Lecture in Training Material - Original and Revised)

Session 4: Domain Testing


Sowmya Padmanabhan


Testing in combination: Why?
Variables that are part of one functional unit must work together to accomplish the task that unit is designed for, so most of these variables influence each other in some way. Testing variables in combination may reveal bugs that would not be found by testing the variables in isolation.

All possible combinations testing


Consider an example of a program (a simple GUI) that has 7 variables with 3, 2, 2, 2, 3, 2 and 2 representatives respectively.

If we were to test all possible combinations of test cases of the seven variables, what would the total number of combinations be?


All possible combinations testing


3 x 2 x 2 x 2 x 3 x 2 x 2 = 288 combinations,
which means 288 test cases: one test case corresponding to each combination!


All possible combinations testing


If the number of test cases is this huge for as few as 7 variables, imagine what happens in a typical commercial program with hundreds of variables: combinatorial explosion! A solution to this problem is all-pairs combination testing, discussed next.
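The arithmetic behind this explosion is just a product of the per-variable test-case counts. A minimal sketch of the calculation for the 7-variable example (the list of counts is the one given above):

```python
from math import prod

# Number of representative test cases per variable for the
# 7-variable example: 3, 2, 2, 2, 3, 2 and 2 respectively.
representatives = [3, 2, 2, 2, 3, 2, 2]

# All-possible-combinations testing needs the product of the counts.
total = prod(representatives)
print(total)  # 288
```

Adding one more 3-valued variable would triple the total, which is why exhaustive combination testing does not scale.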

All-Pairs Combination Testing


Many pairs are tested at the same time in one combination. The all-pairs technique ensures that every value of each variable is combined with every value of every other variable at least once. Only independent variables should be tested using the all-pairs technique, and for each independent variable, only non-error-handling test cases should be involved in the all-pairs combinations. In particular, if there are n variables, then each combination generated by all-pairs covers n x (n-1)/2 pairs. If we arrange all independent variables in descending order of the number of acceptable (non-error-handling) test cases they have, and Max1 and Max2 are the first two variables in that arrangement, the minimal number of combinations produced by the all-pairs technique is:

No. of test cases (Max1) x No. of test cases (Max2)


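The two sizing formulas above can be sketched directly. This helper is illustrative, not part of the training material; it assumes the counts given are the acceptable (non-error-handling) test cases of independent variables:

```python
def minimal_all_pairs_combinations(counts):
    """Lower bound on the size of an all-pairs table: the product of
    the two largest test-case counts (Max1 and Max2)."""
    ordered = sorted(counts, reverse=True)
    return ordered[0] * ordered[1]

def pairs_per_combination(n_variables):
    """Each combination of n variables exercises n(n-1)/2 pairs."""
    return n_variables * (n_variables - 1) // 2

# The 7-variable example from the preceding slides.
counts = [3, 2, 2, 2, 3, 2, 2]
print(minimal_all_pairs_combinations(counts))  # 9
print(pairs_per_combination(len(counts)))      # 21
```

Note that Max1 x Max2 is only a lower bound; as the later four-variable example shows, extra combinations are sometimes needed.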

All-Pairs Combination Testing


Order the test-case values of each succeeding variable so that, in the final ordering, each value of that variable is paired with each value of every preceding variable. You may need to backtrack (if another ordering is possible) when the current ordering makes this end result impossible. If no ordering, no matter how much you backtrack and redo things, achieves all pairs, then additional combinations may have to be added.
Refer to Appendix B: Guidelines for filling in the all-pairs combination table.

All-Pairs Combination Testing


In our 7-variable example, assuming all seven variables are independent and all the test cases corresponding to each variable are acceptable (non-error handling test cases), what would the minimal number of combinations yielded by all-pairs combination technique be?


All-Pairs Combination Testing


3 x 3 = 9. This is much lower than the 288 combinations required by all-possible-combinations testing.
How many pairs would a single combination have for this example if we do all-pairs combination?


All-Pairs Combination Testing


7 x (7-1)/2 = 7 x 6/2 = 7 x 3 = 21.


All-Pairs Combination Testing


Example 1:

There are three variables x, y and z of some function. x has 3 test cases corresponding to it, y has 4 and z has 2 test cases corresponding to it.
Develop a series of tests by performing allpairs combination on these variables. Assume independence of variables and assume that all test cases are acceptable (non-error handling test cases).

All-Pairs Combination Testing Example 1


Step 1:

Arrange the variables in descending order of the number of test cases they have. Call the first two Max1 and Max2.


All-Pairs Combination Testing Example 1


1. y (4)  Max1
2. x (3)  Max2
3. z (2)


All-Pairs Combination Testing Example 1


Step 2:
Give symbolic names to the test cases of each of the variables.


All-Pairs Combination Testing Example 1


Variable   Test cases
y          Y1, Y2, Y3, Y4
x          X1, X2, X3
z          Z1, Z2


All-Pairs Combination Testing Example 1


Step 3:

Calculate the minimal number of allpairs combinations.


All-Pairs Combination Testing Example 1


No. of test cases (Max1) x No. of test cases (Max2) = 4 x 3 = 12
Contrast this with all possible combinations: 4 x 3 x 2 = 24


All-Pairs Combination Testing Example 1


Step 4: Determine the number of pairs in each combination.


All-Pairs Combination Testing Example 1


3 x (3 - 1)/2 = 3 x 2/2 = 3


All-Pairs Combination Testing Example 1 Step 5:


Build the all-pairs combination table.


All-Pairs Combination Testing


Example 2:

There are four variables a, b, c and d of some function. Each variable has four test cases corresponding to it.
Develop a series of tests by performing allpairs combination on these variables. Assume independence of variables and assume that all test cases are acceptable (non-error handling test cases).

All-Pairs Combination Testing Example 2


Step 1:

Arrange the variables in descending order of the number of test cases they have. Call the first two Max1 and Max2.


All-Pairs Combination Testing Example 2


1. a (4)  Max1
2. b (4)  Max2
3. c (4)
4. d (4)


All-Pairs Combination Testing Example 2


Step 2:
Give symbolic names to the test cases of each of the variables.


All-Pairs Combination Testing Example 2


Variable   Test cases
a          A1, A2, A3, A4
b          B1, B2, B3, B4
c          C1, C2, C3, C4
d          D1, D2, D3, D4


All-Pairs Combination Testing Example 2


Step 3:

Calculate the minimal number of allpairs combinations.


All-Pairs Combination Testing Example 2


No. of test cases (Max1) x No. of test cases (Max2) = 4 x 4 = 16
Contrast this with all possible combinations: 4 x 4 x 4 x 4 = 256!


All-Pairs Combination Testing Example 2


Step 4: Determine the number of pairs in each combination.


All-Pairs Combination Testing Example 2


4 x (4 - 1)/2 = 4 x 3/2 = 6


All-Pairs Combination Testing Example 2 Step 5:


Build the all-pairs combination table.



Application of all-pairs combination to real world applications Example 3:

Perform all-pairs combination testing on the variables of the File Open function of the Notepad program.


All-Pairs Combination Testing Example 3


Let's review the equivalence class analysis done on the variables of the File Open function.


All-Pairs Combination Testing Example 3


Step 1:

Select the variables that you want to combine using all-pairs.


All-Pairs Combination Testing Example 3


Look in
File name
Encoding
Files of type

All input variables are independent; hence all of them can be used in the all-pairs combination technique.


All-Pairs Combination Testing Example 3


Step 2:

For each selected variable, select acceptable (non error-handling) test cases that you want to use in all-pairs.


All-Pairs Combination Testing Example 3


Look in (6 test cases, Test case# 1-6)
File Name (2 test cases, Test case# 1, 2)
(The test cases corresponding to the type-of-characters dimension will be tested separately, since including them would hugely increase the number of combinations required by all-pairs.)
Encoding (4 test cases, Test case# 1, 2, 3 and 4)
Files of type (2 test cases, Test case# 1 and 2)

All-Pairs Combination Testing Example 3


Step 3:

Arrange the selected variables of File Open function in descending order of the number of test cases selected for each. Call the first two Max1 and Max2.


All-Pairs Combination Testing Example 3


Look in (6)  Max1
Encoding (4)  Max2
File Name (2)
Files of type (2)


All-Pairs Combination Testing Example 3


Step 4:
Give symbolic names to the test cases of each of the variables.


All-Pairs Combination Testing Example 3


Variable        Test cases
Look In         L1, L2, L3, L4, L5, L6
Encoding        E1, E2, E3, E4
File Name       FN1, FN2
Files of type   FT1, FT2


All-Pairs Combination Testing Example 3


Step 5:

Calculate the minimal number of allpairs combinations.


All-Pairs Combination Testing Example 3


No. of test cases (Max1) x No. of test cases (Max2) = 6 x 4 = 24
Contrast this with all possible combinations: 6 x 4 x 2 x 2 = 96!


All-Pairs Combination Testing Example 3


Step 6: Determine the number of pairs in each combination.


All-Pairs Combination Testing Example 3


4 x (4 - 1)/2 = 4 x 3/2 = 6


All-Pairs Combination Testing Example 3 Step 7:


Build the all-pairs combination table.


All-Pairs Combination Testing Example 3


Error-handling test cases are tested separately for each variable individually, to start with. Error-handling test cases of all four variables can then be combined to see what happens when multiple faulty variables interact. The following are three such possible combinations:

All-Pairs Combination Testing Example 3


Application of all-pairs combination to real world applications Example 4:

Perform all-pairs combination testing on the variables of the Flip and Rotate function of the Paint program.


All-Pairs Combination Testing Example 4


Let's review the analysis done on the variables of the Flip and rotate function.


All-Pairs Combination Testing Example 4


We actually have only one variable here; the other is nested within the first. Hence there is no scope for performing any kind of combination. We will have 5 test cases in all: 2 corresponding to the first two options of Flip or rotate, and 3 (90, 180 and 270 degrees) for when the third option, Rotate by angle, is selected.
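Because the second variable is nested inside the first, the five test cases can simply be enumerated rather than combined. A small illustrative sketch, with option labels as they appear in the Paint dialog:

```python
# Enumerate test cases for a nested variable: "Rotate by angle" is only
# meaningful when the third "Flip or rotate" option is selected.
flat_options = ["Flip horizontal", "Flip vertical"]
rotate_angles = [90, 180, 270]

# Each test case is (flip-or-rotate option, angle or None).
test_cases = [(opt, None) for opt in flat_options] + \
             [("Rotate by angle", angle) for angle in rotate_angles]

print(len(test_cases))  # 5
```

The enumeration makes explicit that no cross-product (and hence no all-pairs reduction) applies when one variable only exists conditionally on another.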


Iteration 1: fill the Y column, giving each of Y's four test cases a group of three rows.
Iteration 2: fill the X column, cycling X1, X2, X3 within each group of three rows.
Iteration 3: fill the Z column for the first six rows, alternating Z1 and Z2.
Iteration 4: complete the Z column by continuing the alternation, yielding the finished table:

Test case#   Y (4)   X (3)   Z (2)
1.           Y1      X1      Z1
2.           Y1      X2      Z2
3.           Y1      X3      Z1
4.           Y2      X1      Z2
5.           Y2      X2      Z1
6.           Y2      X3      Z2
7.           Y3      X1      Z1
8.           Y3      X2      Z2
9.           Y3      X3      Z1
10.          Y4      X1      Z2
11.          Y4      X2      Z1
12.          Y4      X3      Z2
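A table built by hand this way can be checked mechanically. The sketch below is illustrative (not part of the training material); it verifies that the twelve combinations from the final iteration cover every pair of values:

```python
from itertools import combinations, product

def covers_all_pairs(rows, domains):
    """Return True if, for every pair of variables, every pair of their
    values appears together in at least one row of the test suite."""
    n = len(domains)
    for i, j in combinations(range(n), 2):
        needed = set(product(domains[i], domains[j]))
        seen = {(row[i], row[j]) for row in rows}
        if not needed <= seen:
            return False
    return True

domains = [["Y1", "Y2", "Y3", "Y4"], ["X1", "X2", "X3"], ["Z1", "Z2"]]
table = [
    ("Y1", "X1", "Z1"), ("Y1", "X2", "Z2"), ("Y1", "X3", "Z1"),
    ("Y2", "X1", "Z2"), ("Y2", "X2", "Z1"), ("Y2", "X3", "Z2"),
    ("Y3", "X1", "Z1"), ("Y3", "X2", "Z2"), ("Y3", "X3", "Z1"),
    ("Y4", "X1", "Z2"), ("Y4", "X2", "Z1"), ("Y4", "X3", "Z2"),
]
print(covers_all_pairs(table, domains))  # True
```

Dropping rows (for example, keeping only the first six) makes the checker report False, since some Y-X and Y-Z pairs go uncovered.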

Iteration 1: Test case# 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. Iteration 2: Test case# 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. a (4) A1 A1 A1 A1 A2 A2 A2 A2 A3 A3 A3 A3 D4 D4 D4 D4 b (3) B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 c (2) d (2) a (4) A1 A1 A1 A1 A2 A2 A2 A2 A3 A3 A3 A3 D4 D4 D4 D4 b (3) c (2) d (2)

Iteration 3: Test case# 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. Iteration 4: Test case# 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. a (4) A1 A1 A1 A1 A2 A2 A2 A2 A3 A3 A3 A3 D4 D4 D4 D4 b (3) B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 c (2) C1 C2 C3 C4 C2 C3 C4 C1 C3 C4 C1 C2 C4 C1 C2 C3 d (2) a (4) A1 A1 A1 A1 A2 A2 A2 A2 A3 A3 A3 A3 D4 D4 D4 D4 b (3) B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 c (2) d (2)

Iteration 5: Test case# 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16.

a (4) A1 A1 A1 A1 A2 A2 A2 A2 A3 A3 A3 A3 D4 D4 D4 D4

b (3) B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4

c (2) C1 C2 C3 C4 C2 C3 C4 C1 C3 C4 C1 C2 C4 C1 C2 C3

d (2) D1 D2 D3 D4 D3 D4 D1 D2 D2 D3 D4 D1

No ordering of test cases in set four for variabled will yield all pairs of d with other variables. Let us backtrack and redo sets two and three of variabled. Test case# 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. a (4) A1 A1 A1 A1 A2 A2 A2 A2 A3 A3 A3 A3 D4 D4 D4 D4 b (3) B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 B1 B2 B3 B4 c (2) C1 C2 C3 C4 C2 C3 C4 C1 C3 C4 C1 C2 C4 C1 C2 C3 d (2) D1 D2 D3 D4

Iteration 6:

Test case#   a (4)   b (4)   c (4)   d (4)
1.           A1      B1      C1      D1
2.           A1      B2      C2      D2
3.           A1      B3      C3      D3
4.           A1      B4      C4      D4
5.           A2      B1      C2      D4
6.           A2      B2      C3      D1
7.           A2      B3      C4      D2
8.           A2      B4      C1      D3
9.           A3      B1      C3      D2
10.          A3      B2      C4      D3
11.          A3      B3      C1      D4
12.          A3      B4      C2      D1
13.          A4      B1      C4
14.          A4      B2      C1
15.          A4      B3      C2
16.          A4      B4      C3

No ordering of test cases in set four for variable d will yield all pairs of d with all other variables. Even if we backtrack, we get stuck with essentially the same combinations, because no alternatives remain. This means that we cannot fit all-pairs into sixteen combinations. Let us keep the ordering in set four of variable d that makes all pairs of d with variables c and a, and then add four additional combinations to take care of the remaining pairs of d and b.

Test case#   a (4)   b (4)   c (4)   d (4)
1.           A1      B1      C1      D1
2.           A1      B2      C2      D2
3.           A1      B3      C3      D3
4.           A1      B4      C4      D4
5.           A2      B1      C2      D4
6.           A2      B2      C3      D1
7.           A2      B3      C4      D2
8.           A2      B4      C1      D3
9.           A3      B1      C3      D2
10.          A3      B2      C4      D3
11.          A3      B3      C1      D4
12.          A3      B4      C2      D1
13.          A4      B1      C4      D1
14.          A4      B2      C1      D2
15.          A4      B3      C2      D3
16.          A4      B4      C3      D4
17.                  B1              D3
18.                  B2              D4
19.                  B3              D1
20.                  B4              D2
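A table like the one above can be checked mechanically. The following Python sketch is not part of the original training material (the function and variable names are mine); it verifies that the twenty combinations cover every pair of values of every pair of variables. The blank cells in combinations 17-20 are filled with arbitrary values, since any choice preserves pair coverage.

```python
from itertools import combinations

def uncovered_pairs(table, domains):
    """Return the set of (column-pair, value-pair) combinations that the
    given test table fails to cover."""
    missing = set()
    for c1, c2 in combinations(list(domains), 2):
        needed = {(v1, v2) for v1 in domains[c1] for v2 in domains[c2]}
        covered = {(row[c1], row[c2]) for row in table}
        for pair in needed - covered:
            missing.add(((c1, c2), pair))
    return missing

domains = {
    "a": ["A1", "A2", "A3", "A4"],
    "b": ["B1", "B2", "B3", "B4"],
    "c": ["C1", "C2", "C3", "C4"],
    "d": ["D1", "D2", "D3", "D4"],
}

# The twenty combinations from the table; blank a and c cells in rows
# 17-20 are filled arbitrarily.
a = ["A1"]*4 + ["A2"]*4 + ["A3"]*4 + ["A4"]*4 + ["A1", "A2", "A3", "A4"]
b = ["B1", "B2", "B3", "B4"] * 5
c = (["C1", "C2", "C3", "C4"] + ["C2", "C3", "C4", "C1"] +
     ["C3", "C4", "C1", "C2"] + ["C4", "C1", "C2", "C3"] +
     ["C1", "C2", "C3", "C4"])
d = (["D1", "D2", "D3", "D4"] + ["D4", "D1", "D2", "D3"] +
     ["D2", "D3", "D4", "D1"] + ["D1", "D2", "D3", "D4"] +
     ["D3", "D4", "D1", "D2"])
table = [dict(zip("abcd", row)) for row in zip(a, b, c, d)]
print(len(uncovered_pairs(table, domains)))  # 0 -> every pair is covered
```

Running the checker confirms that the four extra combinations repair exactly the missing d-b pairs that the sixteen-row table could not cover.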

Variable: Look In

Equivalence class: All files in drive 1.
   Test cases (best representatives): A file existing in:
      1) drive 1, lowest-level directory
      2) drive 1, highest-level directory
   Risks:
      1) Failure to open files existing in drive 1 correctly.
      2) Mishandling of lower and upper boundary values.

Equivalence class: All files in drive 2.
   Test cases (best representatives): A file existing in:
      1) drive 2, lowest-level directory
      2) drive 2, highest-level directory
   Risks:
      1) Failure to open files existing in drive 2 correctly.
      2) Mishandling of lower and upper boundary values.

Equivalence class: All files in drive 3.
   Test cases (best representatives): A file existing in:
      1) drive 3, lowest-level directory
      2) drive 3, highest-level directory
   Risks:
      1) Failure to open files existing in drive 3 correctly.
      2) Mishandling of lower and upper boundary values.

Equivalence class: All files in a currently nonexistent drive, a drive not currently having any storage in it, or a corrupt drive.
   Test case (best representative): Any such drive.
   Risks:
      1) Mishandling opening of files from non-existent drives or from drives that do not have any storage in them currently.

Variable: File Name

Dimension: Length

Equivalence class: All values having lengths in the range Min-allowed-length-by-function to Max-allowed-length-by-function.
   Test cases (best representatives): A file name having a length of:
      1) Min-allowed-length-by-function (1)
      2) Max-allowed-length-by-function (215)
   Risks:
      1) Failure to correctly process values whose lengths are within the allowed range.
      2) Mishandling of lower and upper boundary values.

Equivalence class: All values having lengths outside the allowed range.
   Test cases (best representatives): A file name having a length of:
      3) Min-allowed-length-by-function - 1 (0)
      4) Max-allowed-length-by-function + 1 (216)
      5) Max-length at the system level (259)
      6) Max-length + 1 at the system level (260)
   Risks:
      1) Mishandling of values that have their lengths outside the allowed range.
      2) Mishandling of values just beneath and beyond the lower and upper boundaries respectively.
      3) Mishandling of too-short and too-long strings.

Dimension: Type of Characters

Equivalence class: All values containing only allowed characters.
   Test cases (best representatives):
      For the allowed ASCII sub-range 32-33, a file name that contains:
         7) the character (blank space) corresponding to ASCII 32
         8) the character (!) corresponding to ASCII 33
      For the allowed ASCII sub-range 35-41, a file name that contains:
         9) the character (#) corresponding to ASCII 35
         10) the character ()) corresponding to ASCII 41
      For the allowed ASCII sub-range 43-45, a file name that contains:
         11) the character (+) corresponding to ASCII 43
         12) the character (-) corresponding to ASCII 45
      A file name that contains:
         13) the character (.) corresponding to ASCII 46 at a position other than the first position
      For the allowed ASCII sub-range 48-57, a file name that contains:
         14) the character (0) corresponding to ASCII 48
         15) the character (9) corresponding to ASCII 57
      A file name that contains:
         16) the character (;) corresponding to ASCII 59
         17) the character (=) corresponding to ASCII 61
      For the allowed ASCII sub-range 64-91, a file name that contains:
         18) the character (@) corresponding to ASCII 64
         19) the character ([) corresponding to ASCII 91
      For the allowed ASCII sub-range 93-123, a file name that contains:
         20) the character (]) corresponding to ASCII 93
         21) the character ({) corresponding to ASCII 123
      A file name that contains:
         22) the character corresponding to ASCII 128 of the extended ASCII character set
         23) the character corresponding to ASCII 255 of the extended ASCII character set
   Risks:
      1) Failure to process allowed characters correctly.
      2) Mishandling of lower and upper boundary values of each allowed ASCII range of characters.
      3) Failure to correctly process characters from the extended ASCII character set.

Equivalence class: All values containing at least one character that is NOT allowed, or a valid character in a position that is NOT allowed.
   Test cases (best representatives): A file name that contains:
      1) the character (Unit Separator) corresponding to ASCII 31
      2) the character (") corresponding to ASCII 34
      3) the character (*) corresponding to ASCII 42
      4) the character (.) corresponding to ASCII 46 in the first position
      5) the character (/) corresponding to ASCII 47
      6) the character (:) corresponding to ASCII 58
      7) the character (<) corresponding to ASCII 60
      8) the character (>) corresponding to ASCII 62
      9) the character (?) corresponding to ASCII 63
      10) the character (\) corresponding to ASCII 92
      11) the character (|) corresponding to ASCII 124
   Risks:
      1) Mishandling of values that contain characters that are NOT allowed.
      2) Mishandling of valid characters in positions that are not allowed.
      3) Mishandling of values just beneath and beyond the lower and upper boundary values of each allowed ASCII sub-range.
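The length analysis above follows a simple rule: test both edges of the valid range, then the values just outside it, and the system-level maximum and one beyond. As an illustrative sketch (the helper function is mine, not part of the training material; the numbers 1, 215 and 259 come from the table above):

```python
def length_boundary_tests(min_len, max_len, system_max):
    """Boundary-value test lengths for a string field: both edges of the
    function's valid range, plus the invalid lengths just outside it and
    at/above the system-level maximum."""
    valid = [min_len, max_len]
    invalid = [min_len - 1, max_len + 1, system_max, system_max + 1]
    return valid, invalid

# File Name example: the function allows 1-215 characters; the
# system-level maximum is 259.
valid, invalid = length_boundary_tests(1, 215, 259)
print(valid)    # [1, 215]
print(invalid)  # [0, 216, 259, 260]
```

The same helper applies to any string field once its function-level and system-level length limits are known.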

Variable: Files of Type

Equivalence class: All text files.
   Test case (best representative): 1) A file of text type.
   Risk: 1) Failure to open files of the text type correctly.

Equivalence class: All other file types.
   Test case (best representative): 1) A file that is of a non-text type.
   Risk: 1) Mishandling of non-text types of files.

Variable: Encoding

Equivalence class: All files being opened in ANSI encoding.
   Test case (best representative): Any file being opened in ANSI encoding.
   Risk: Failure to open files in the ANSI encoding format correctly.

Equivalence class: All files being opened in Unicode encoding.
   Test case (best representative): Any file being opened in Unicode encoding.
   Risk: Failure to open files in the Unicode encoding format correctly.

Equivalence class: All files being opened in Unicode big endian encoding.
   Test case (best representative): Any file being opened in Unicode big endian encoding.
   Risk: Failure to open files in the Unicode big endian encoding format correctly.

Equivalence class: All files being opened in UTF-8 encoding.
   Test case (best representative): Any file being opened in UTF-8 encoding.
   Risk: Failure to open files in the UTF-8 encoding format correctly.

All-Pairs Combination table for the File Open function of the Notepad program

Iteration 1:

Test case#   LI (6)   E (4)   FN (2)   FT (2)
1.           LI1      E1      FN1
2.           LI1      E2      FN2
3.           LI1      E3      FN1
4.           LI1      E4      FN2
5.           LI2      E1      FN2
6.           LI2      E2      FN1
7.           LI2      E3      FN2
8.           LI2      E4      FN1
9.           LI3      E1      FN1
10.          LI3      E2      FN2
11.          LI3      E3      FN1
12.          LI3      E4      FN2
13.          LI4      E1      FN2
14.          LI4      E2      FN1
15.          LI4      E3      FN2
16.          LI4      E4      FN1
17.          LI5      E1      FN1
18.          LI5      E2      FN2
19.          LI5      E3      FN1
20.          LI5      E4      FN2
21.          LI6      E1      FN2
22.          LI6      E2      FN1
23.          LI6      E3      FN2
24.          LI6      E4      FN1

Iteration 2:

Test case#   LI (6)   E (4)   FN (2)   FT (2)
1.           LI1      E1      FN1      FT1
2.           LI1      E2      FN2      FT2
3.           LI1      E3      FN1      FT1
4.           LI1      E4      FN2      FT2
5.           LI2      E1      FN2      FT1
6.           LI2      E2      FN1      FT2
7.           LI2      E3      FN2      FT1
8.           LI2      E4      FN1      FT2
9.           LI3      E1      FN1      FT2
10.          LI3      E2      FN2      FT1
11.          LI3      E3      FN1      FT2
12.          LI3      E4      FN2      FT1
13.          LI4      E1      FN2      FT1
14.          LI4      E2      FN1      FT2
15.          LI4      E3      FN2      FT1
16.          LI4      E4      FN1      FT2
17.          LI5      E1      FN1      FT1
18.          LI5      E2      FN2      FT2
19.          LI5      E3      FN1      FT1
20.          LI5      E4      FN2      FT2
21.          LI6      E1      FN2      FT2
22.          LI6      E2      FN1      FT1
23.          LI6      E3      FN2      FT2
24.          LI6      E4      FN1      FT1

Variable: Flip or rotate

Equivalence class: All files that have the Flip horizontal option selected.
   Test case (best representative): 1) Any file that has the Flip horizontal option selected.
   Risk: Failure to flip the paint file correctly in the horizontal direction.

Equivalence class: All files that have the Flip vertical option selected.
   Test case (best representative): 2) Any file that has the Flip vertical option selected.
   Risk: Failure to flip the paint file correctly in the vertical direction.

Equivalence class: All files that have the Rotate by angle option selected.
   Test cases (best representatives): Any file that has the Rotate by angle option selected with the following angle selected:
      3) 90 degrees
      4) 180 degrees
      5) 270 degrees
   Risk: Failure to rotate the paint image correctly according to the angle specified.

Appendix J: Day 2 Exercises (Day 2 Exercises in Training Material)

Day 2 Exercises    Date: ____________    Student#: ___________

Integer fields

1. An integer field/variable q can take values only between -1255 and -1, excluding the end points. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

a. Represent this variable as a mathematical range expression. What kind of a range is this?

b. What is the input domain?

c. Next, identify the risks associated and list the identified risks.

d. Partition the input domain into equivalence classes based on risks identified. Variable(s) Equivalence Class(es) Test cases Risks Notes
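The boundary rule for a range that excludes its end points can be sketched generically; the helper below is illustrative only and is not part of the training material.

```python
def open_interval_boundary_tests(low, high):
    """For an integer variable restricted to (low, high) with both end
    points excluded, the boundary cases are the innermost valid values
    and the excluded end points themselves."""
    valid = [low + 1, high - 1]   # smallest and largest legal values
    invalid = [low, high]         # just outside the open interval
    return valid, invalid

# e.g. an open interval (0, 10): valid boundaries 1 and 9,
# invalid boundaries 0 and 10.
print(open_interval_boundary_tests(0, 10))  # ([1, 9], [0, 10])
```

For a closed range the roles flip: the end points themselves are valid boundaries, and the values one step outside them are the invalid boundaries.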

Floating-point fields 2. A floating-point field/variable u can take values only between 19.999 and 20.999, the end-points being inclusive. The precision is 3 decimal places after the decimal point. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. a. Represent this variable as a mathematical range expression. What kind of a range is this?

b. What is the input domain?

c. Next, identify the risks associated and list the identified risks.

d. Partition the input domain into equivalence classes based on risks identified. Variable(s) Equivalence Class(es) Test cases Risks Notes

Word problems for numeric fields 3. Any new course at ZLTech, once approved by the corresponding academic department, has to be registered with the registration and admissions department. The registration can be done using an online registration form. Of the many details that need to be provided in the form, one of them is the number of credits the course is worth. A course at ZLTech can only be worth between 3 and 6 credits. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. a. What variables could be involved in analysis of this group of facts?

b. What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

c. Represent this variable as a mathematical range expression. What kind of a range is this?

d. What is the input domain?

e. Next, identify the risks associated and list the identified risks.

f. Partition the input domain into equivalence classes based on risks identified.

4. The minimum balance required in any checking account at Washington Mutual Bank is $50. If a customer's balance in his/her checking account drops below $50, the account is charged a penalty of $29. The precision of the balances is calculated up to 2 decimal places after the decimal point. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Assume integer values.

a. What variables could be involved in analysis of this group of facts?

b. What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

c. Represent this variable as a mathematical range expression. What kind of a range is this?

d. What is the input domain?

e. Next, identify the risks associated and list the identified risks.

f. Partition the input domain into equivalence classes based on risks identified.

5. An online application can run on an Internet connection whose connection speed is from 100kbps to 100mbps. The application is not capable of functioning correctly at other speeds. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. a. What variables could be involved in analysis of this group of facts?

b. What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

c. Represent this variable as a mathematical range expression. What kind of a range is this?

d. What is the input domain?

e. Next, identify the risks associated and list the identified risks.

f. Partition the input domain into equivalence classes based on risks identified.

6. The Login function of an online e-mail system requires that the user enter the user name and password and click on the Log in button within 5 minutes. If the time taken by the user exceeds 5 minutes, the Login page times out and refreshes, which means the user has to enter the information all over again. Develop a series of tests for the Login function with respect to the time constraint presented. Please note that the values in the User name and Password fields are irrelevant for this analysis. Also assume that a millisecond is the smallest unit of measurement of time for the Login function.

a. What variables could be involved in analysis of this group of facts?

b. What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

c. Represent this variable as a mathematical range expression. What kind of a range is this?

d. What is the input domain?

e. Next, identify the risks associated and list the identified risks.

f. Partition the input domain into equivalence classes based on risks identified.

Problems with multiple ranges

7. A grade calculator program takes a score for a course as input and outputs the corresponding grade. The grade is determined on the basis of the following:

   Score range                  Grade
   90.00 <= Score <= 100.00     A
   80.00 <= Score <  90.00      B
   70.00 <= Score <  80.00      C
   60.00 <= Score <  70.00      D
   0     <= Score <  60.00      F

Develop a series of tests by performing equivalence class analysis and boundary value analysis on the Grade Calculator program. a. Identify the input and output variables. Input variable(s):

Output variable(s):

b. Represent the input variable as mathematical range expression(s).

c. What is the input domain?

d. Next, identify the risks associated and list the identified risks.

e. Partition the input domain into equivalence classes based on risks identified.

Non-Numbers Identify the risks associated with inputting non-numbers in the Score field and list the identified risks.

Develop a series of tests by performing equivalence class analysis and boundary value analysis on the Grade Calculator program with respect to non-numbers.
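For concreteness, a hypothetical implementation of the grade calculator, together with the boundary-value checks the score table suggests, might look as follows. The function is my sketch of the specification above (assuming two decimal places of precision), not the program under test.

```python
def grade(score):
    """Hypothetical grade calculator matching the score table, written
    only so the boundary tests below have something to run against."""
    if not 0 <= score <= 100.00:
        raise ValueError("score out of range")
    if score >= 90.00:
        return "A"
    if score >= 80.00:
        return "B"
    if score >= 70.00:
        return "C"
    if score >= 60.00:
        return "D"
    return "F"

# Boundary-value tests: both edges of every equivalence class.
boundary_cases = {
    0.00: "F", 59.99: "F",
    60.00: "D", 69.99: "D",
    70.00: "C", 79.99: "C",
    80.00: "B", 89.99: "B",
    90.00: "A", 100.00: "A",
}
for score, expected in boundary_cases.items():
    assert grade(score) == expected
```

Note that each shared boundary (60.00, 70.00, 80.00, 90.00) appears in exactly one class, so an off-by-one in any comparison operator would be caught by one of these cases.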

String variables/fields

8. An income field accepts a string as input. The first character needs to be the dollar character $ and the others have to be digits (0-9). The string can also have commas and a decimal point. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable.

a. Represent the income string variable as mathematical range expression(s).

b. What is the input domain?

c. Next, identify the risks associated and list the identified risks.

d. Partition the input domain into equivalence classes based on risks identified.

Appendix K: Day 3 Exercises (Day 3 Exercises in Training Material)

Day 3 Exercises Date: ____________ Student#: ______________

PLEASE TYPE ALL YOUR ANSWERS USING AN AVAILABLE TEXT EDITOR (MS WORD)

Multidimensional variables

General analysis of multidimensional numeric fields

1. Delta Airlines provides its customers with a Delta Skymile number to enable them to earn points for every flight they take with Delta. A Skymile number is a 10-digit numeric value. The minimum Skymile number that can be assigned to a customer is 1000000000 and the maximum is 9999955590. Delta Airlines also provides an online program by which a Delta Airlines customer can enter his/her Skymile number and access his/her travel history with Delta and the corresponding skymiles earned. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Identify the relevant dimensions for this variable. Develop a series of tests by performing equivalence class analysis and boundary value analysis on each of the identified dimensions of this variable.

a. What variables could be involved in analysis of this group of facts?

b. What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

c. Identify the relevant dimensions of the variable

d. Represent the variable as a mathematical range expression along each of the identified dimensions.

e. What is the input domain?

f. Next, identify the risks associated with each of the dimensions and list the identified risks.

g. Partition the input domain into equivalence classes based on dimensions and corresponding risks identified.

2. ZLTech has a web-based mail system. A user has to enter his/her user name and password and then click on the Log in button to log in and access his/her inbox. The password can have six to nine characters. Also, only underscores _, except as the first character, and uppercase letters are allowed for the password. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Identify the relevant dimensions for this variable. Develop a series of tests by performing equivalence class analysis and boundary value analysis on each of the identified dimensions of this variable.

a. What variables could be involved in analysis of this group of facts?

b. What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis?

c. Identify the general dimensions of the variable

d. Represent the variable as a mathematical range expression along each of the identified dimensions.

e. What is the input domain?

f. Next, identify the risks associated with each of the dimensions and list the identified risks.

g. Partition the input domain into equivalence classes based on dimensions and corresponding risks identified.

Enumerated fields 3. In the online bug-tracking system of Neon Technology, a user can report bugs related to any product the company makes. One of the functions of the bug-tracking system is the bug-reporting function. This function, apart from other fields, has a non-editable combo-box field called bug-severity that can take one of the following available input choices, with 1 meaning least severe and 5 meaning the most: 1 2 3 4 5 Develop a series of tests by performing equivalence class analysis and boundary value analysis, if applicable, on the bug-severity field.

Identifying variables of a given program

4. For each of the following programs/functions, identify its variables and their data types, and state whether each variable is input or output.

a. In the Microsoft PowerPoint application, in the Insert menu, choosing the ClipArt option brings up the following dialog box:

b. In Internet Explorer, in the File menu, choosing the Save As option brings up the following dialog box:

c. In Microsoft Excel application, in the Format menu, choosing the Cells option brings up the following dialog box:

(Screenshot of the Format Cells dialog box, with tabs: Alignment, Font, Border, Patterns, Protection)
d. The following is a screenshot of the sign-up form for free Yahoo mail:

Analyzing given programs/functions and developing a series of test cases for each of the variables 5. For the Find Next function of the Notepad program, whose screenshot is shown below, identify its variables, the dimensions, data type along each dimension and develop a series of test cases by performing equivalence class analysis and boundary-value analysis on each of the variables.

i. Identify the variables in this function.

ii. Analyze the values taken by each of the variables.
   1. What kind of values does the variable take?
   2. What are the characteristics of these values?
   3. Is the variable multidimensional? What are the dimensions? Can you map the variable to standard data types like int, double, float, string, char, etc. along each of its dimensions?

iii. Can you represent each of the variables as a mathematical range expression?

iv. Next, identify the risks associated with respect to each variable along each of its dimensions and list the identified risks.

v. Build the equivalence class tables.

6. For the Insert Table function of Microsoft Word application, whose screenshot is shown below, identify its variables, the dimensions, data type along each dimension and develop a series of test cases by performing equivalence class analysis and boundary-value analysis on each of the variables.

i. Identify the variables in this function.

ii. Analyze the values taken by each of the variables.

1. What kind of values does the variable take?

2. What are the characteristics of these values?

3. Is the variable multidimensional? What are the dimensions? Can you map the variable to standard data types like int, double, float, string, char, etc along each of its dimensions?

iii. Can you represent each of the variables as a mathematical range expression?

iv. Next, identify the risks associated with respect to each variable along each of its dimensions and list the identified risks.

v. Build the equivalence class tables.

Appendix L: Day 4 Exercises (Day 4 Exercises in Training Material - Original and Revised)

Day 4 Exercises    Date: ____________    Student#: ______________

All-pairs combination

USE MS EXCEL/MS WORD TO CONSTRUCT AN ALL-PAIRS COMBINATION TABLE FOR EACH EXERCISE

1. There are five variables l, m, n, o and p of some function. l has 2 test cases corresponding to it, m has 4, n has 3, o has 5 and p has 2 test cases corresponding to it. Develop a series of combination tests on these five variables by performing all-pairs combination on them. Show all iterations. Assume all variables are independent and also assume that none of the test cases are error-handling test cases.

i. Arrange the variables in descending order of the number of test cases they have. Call the first two Max1 and Max2.

ii. Give symbolic names to the test cases of each of the variables.

iii. Calculate the minimal number of all-pairs combinations.

iv. Determine the number of pairs in each combination.

v. Build the all-pairs combination table.
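Steps iii and iv are pure arithmetic and can be sketched in a few lines of Python. The helper names are mine; the minimum used in the training material is the product of the two largest domain sizes (a lower bound that, as the earlier worked example shows, is not always achievable exactly).

```python
from math import comb

def all_pairs_size(domain_sizes):
    """Minimum number of all-pairs combinations, computed as the
    product of the two largest domain sizes (the starting lower bound
    used in the training material)."""
    s = sorted(domain_sizes, reverse=True)
    return s[0] * s[1]  # Max1 * Max2

def pairs_per_combination(num_variables):
    """Each combination (row) over n variables contains C(n, 2) pairs."""
    return comb(num_variables, 2)

# Exercise 1: l has 2 test cases, m has 4, n has 3, o has 5, p has 2.
print(all_pairs_size([2, 4, 3, 5, 2]))  # 20 (Max1 = 5, Max2 = 4)
print(pairs_per_combination(5))         # 10
```

Exercise 1 therefore needs at least 5 x 4 = 20 combinations, each contributing 10 pairs of values.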

2. There are five variables a, b, c, d and e of some function. Each variable has three test cases corresponding to it. Develop a series of combination tests on these five variables by performing all-pairs combination on them. Show all iterations. Assume all variables are independent and also assume that none of the test cases are error-handling test cases.

i. Arrange the variables in descending order of the number of test cases they have. Call the first two Max1 and Max2.

ii. Give symbolic names to the test cases of each of the variables.

iii. Calculate the minimal number of all-pairs combinations.

iv. Determine the number of pairs in each combination:

v. Build the all-pairs combination table.

Application of all-pairs combination to real application functions and identifying dependency relationships

3. Develop a series of combination tests on the variables of the Find Next function of the Notepad program by performing all-pairs combination on them. Use MS Excel/MS Word to build your all-pairs combination table.

i. Indicate what variables you would select for doing all-pairs combination. Justify your selection.

ii. Indicate which test cases of the chosen variables you will use for doing all-pairs. Justify your selection.

iii. Show all iterations. Give relevant comments when you backtrack and redo any ordering.

Discuss the relationships that exist between variables, if any, and give examples of how you would test them.

4. Develop a series of combination tests on the variables of the Insert Table function of the MS Word application by performing all-pairs combination on them. Use MS Excel/MS Word to build your all-pairs combination table.

i. Indicate what variables you would select for doing all-pairs combination. Justify your selection.

ii. Indicate which test cases of the chosen variables you will use for doing all-pairs. Justify your selection.

iii. Show all iterations. Give relevant comments when you backtrack and redo any ordering.

Discuss the relationships that exist between variables, if any, and give examples of how you would test them.

Appendix M: Paper-Based Tests (Test A and Test B in Training Material)

Domain Testing - Test A (100 points)

Date: ________________    Student# ________________

1. An integer field/variable i can take values in the range -999 to 999, the end-points being inclusive. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here. 10 points

Extra blank pages that were provided to the learners for solving the exercise questions have been deleted.

2. A floating-point field/variable f can take values only between -70.000 and 70.000, the left end being inclusive and the right end being exclusive. The precision is 3 places after the decimal point. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here. 10 points

3. A string field/variable s can take only upper and lower case letters from the English alphabet. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here. 10 points

4. In an online airlines reservation system there is a year combo-box field that has the following options available: 2003 2004 2005

Develop a series of tests by performing equivalence class analysis and boundary value analysis, if applicable, on this field. 10 points

5. The screenshot of the Login function of the Yahoo Messenger application program is shown below. For this Login function, identify its variables, determine the data type of each variable, and state whether each variable is input or output. 5 points

6. There are five variables l, m, n, o and p in some function. l, m, n and o have five test cases each and p has two test cases. How many total combinations of test cases are possible for these five variables? How many minimal combinations does the all-pairs combination technique yield? How many pairs will each combination have if the all-pairs combination technique is used? Develop a series of combination tests on these five variables by performing all-pairs combination on them. Show all iterations. Give relevant comments when you backtrack and redo any ordering. Please have a separate table for each re-ordering. 10 points

How many total combinations of test cases are possible for these five variables?

How many minimal combinations does all-pairs combination technique yield?

How many pairs will each combination have if all-pairs combination technique is used?

Develop a series of combination tests on these five variables by performing all-pairs combination on them.

7. An I-20 VISA program for international students of a school lets a student know the status of his/her application for a new I-20 by entering his/her corresponding VISA number. The VISA number is a 16-digit numeric value with no dashes or commas or spaces in between. The minimum allowed VISA number that a student could ever have been assigned is 1000009999000000 and the maximum is 9999955590999800. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class and boundary value analysis on this variable. 15 points

The name of the I-20 VISA program has been changed to something more generic and nonspecific than before.

8. Bank X has started a new savings account program, which allows customers to earn interest based on their savings account balance. The interest is calculated using the following table:

   Balance range                          Interest that can be earned
   $5000.00  <= balance <= $15000.00      1.09%
   $15000.00 <  balance <= $30000.00      1.50%
   $30000.00 <  balance <= $55000.00      1.73%
   $55000.00 <  balance <= $80000.00      2.09%
   balance > $80000.00                    2.50%

What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. 15 points

The name of the bank has been changed to something fictitious and non-specific.

9. ZLTech has a web-based student records system. A student has to enter his/her social security number in the text field provided for it and then click on the Sign in button to log in and access his/her records, which include courses taken by the student so far and the corresponding grades. The social security number can have only eleven characters. The first three characters have to be digits, then a hyphen and then two more digits, a hyphen and finally four digits; no other characters whatsoever are allowed to be entered for the social security number field. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. 15 points

Domain Testing - Test B (100 points)

Date: ________________    Student# ________________

1. An integer field/variable i can take values in the range -1000 to 1000, the end-points being exclusive. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here. 10 points


2. A floating-point field/variable f can take values only between -50.00 and 49.99, both end-points being inclusive. The precision is 2 places after the decimal point. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here. 10 points

3. A string field/variable s can take upper and lower case letters from the English alphabet, and digits. Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. Analyze only the dimension that is explicitly specified here. 10 points

4. DVD Collections, Inc. has a shopping website where a user can purchase DVDs. For checking out, a user needs to enter his/her credit card number and the type of the credit card, which is a combo-box field having the following options available: American Express VISA MasterCard Discover

Develop a series of tests by performing equivalence class analysis and boundary value analysis, if applicable, on the type of credit card variable/field. 10 points

5. The screenshot of an online length or distance unit converter program is provided below. The program takes an input value for length and the corresponding unit of measurement and outputs the corresponding equivalent value for the output unit of measurement chosen by the user. Identify the variables of the unit converter program, the data type of each of the variables and state whether the variables are input or output. 5 points

[Screenshot: "Length or distance" converter. Input: 1 inches. Output: 0.000016 miles (UK and US).]

6. There are four variables a, b, c and d in some function. All four variables have five test cases each. How many total combinations of test cases are possible for these four variables? How many minimal combinations does the all-pairs combination technique yield? How many pairs will each combination have if the all-pairs combination technique is used? Develop a series of combination tests on these four variables by performing all-pairs combination on them. Show all iterations. Give relevant comments when you backtrack and redo any ordering. Please use a separate table for each re-ordering. 10 points

How many total combinations of test cases are possible for these four variables?

How many minimal combinations does the all-pairs combination technique yield?

How many pairs will each combination have if the all-pairs combination technique is used?

Develop a series of combination tests on these four variables by performing all-pairs combination on them.
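The arithmetic behind the three counting questions can be checked mechanically: exhaustive testing needs 5^4 = 625 combinations; each test case covers C(4,2) = 6 variable pairs; and an all-pairs table needs at least 5 x 5 = 25 rows, since every value pair of the two largest domains must appear somewhere (for these domain sizes, 25 rows in fact suffice). A quick sketch:

```python
from math import comb

# Sanity check of the counts for 4 variables with 5 test cases each.
values_per_variable = 5
num_variables = 4

total = values_per_variable ** num_variables      # exhaustive: 5 * 5 * 5 * 5
pairs_per_combo = comb(num_variables, 2)          # variable pairs inside one test case
all_pairs_minimum = values_per_variable ** 2      # product of the two largest domains

print(total, pairs_per_combo, all_pairs_minimum)  # 625 6 25
```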

7. Creative Technologies offers its employees the opportunity to attend conferences and training that help enhance their skill sets and, in turn, the company's profits. The company requires that an employee use the company's reimbursement system to report expenses. The system has an expense report function that has various fields like employee ID, meal expenses, travel expenses, training/conference registration expenses, miscellaneous expenses and total expenditure. The company reimburses the employees up to $5,000. All expense-related fields require entering of only whole numbers, which means that the user should round each expense to the nearest whole number and then enter this number in the corresponding field. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class and boundary value analysis on this variable. 15 points

This question erroneously appeared as question nine in the original test.

8. XYZ credit cards offer a cash-back award program. After a customer spends a particular amount of money, s/he receives a cash-back award, according to the following table:

Amount spent range                         Cash back that can be earned
$1000.00 <= amount spent <= $1500.00       $50
$1500.00 <  amount spent <= $2000.00       $75
$2000.00 <  amount spent <= $2500.00       $95
$2500.00 <  amount spent <= $3500.00       $110
amount spent > $3500.00                    $150

What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class analysis and boundary value analysis on this variable. 15 points

The name of the credit card has been changed to make it more general and non-specific.
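One way to derive boundary tests from a table of sub-ranges like the one above is to encode it as an oracle and probe each shared end-point and its nearest neighbour. The sketch below is illustrative only; it assumes amounts below $1000.00 earn no award, which the table does not state:

```python
# Hypothetical oracle for the cash-back table; amounts are in whole cents so
# that the nearest neighbour of a boundary (e.g. $1500.00 vs $1500.01) is exact.

def cash_back(cents: int) -> int:
    if 100_000 <= cents <= 150_000:   # $1000.00 <= amount spent <= $1500.00
        return 50
    if 150_000 < cents <= 200_000:    # $1500.00 <  amount spent <= $2000.00
        return 75
    if 200_000 < cents <= 250_000:    # $2000.00 <  amount spent <= $2500.00
        return 95
    if 250_000 < cents <= 350_000:    # $2500.00 <  amount spent <= $3500.00
        return 110
    if cents > 350_000:               # amount spent > $3500.00
        return 150
    return 0                          # assumption: below $1000.00 earns nothing

# Boundary probes: each shared end-point and its one-cent neighbour.
print(cash_back(150_000), cash_back(150_001))  # 50 75
print(cash_back(350_000), cash_back(350_001))  # 110 150
```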

9. The INS of some country has an online application that lets international students check the status of their work authorization application in the United States. Upon applying for work authorization, every student is given a ticket number, which is an 11-character value. Every ticket number consists of digits and uppercase letters, with a letter in the fourth and eighth position. The other characters have to be strictly digits. What variables could be involved in analysis of this group of facts? What variable do we know enough about to perform equivalence class analysis and then a boundary value analysis? Develop a series of tests by performing equivalence class and boundary value analysis on this variable. 15 points


The phrase "INS" has been changed to "INS of some country" to make it more general. This question erroneously appeared as question seven in the original test.

Appendix N: Performance Test (Performance Test in Training Material - Original and Revised)

Date: __________________

Performance Test (Original) Student#_______________

For the Page Setup function of Microsoft PowerPoint application, whose screenshot is provided below, develop a series of tests for this function by performing equivalence class and boundary value analysis on it.

Instructions: 1. Please type all your answers using an available text editor (MS Word) and/or MS Excel on your computer and e-mail all your files to spadmana@fit.edu. 2. Name each file starting with your student number, for example, Student#_EqClassTable.doc, Student#_All-Pairs.doc and Student#_Dependency.doc.

Checklist: Make sure you include the following in your analysis:
a. List of variables; indicate input or output, their dimensions and the data type they map to.
b. Equivalence class table(s) that shows the complete equivalence class and boundary value analysis for the feature under test.
c. All-pairs combination table.
d. List of dependency risks (at least 3).
e. Illustration of how you could test the dependency risks listed, using the combinations (test cases) present in the all-pairs combination table, by mapping a dependency risk to one or more available combination(s) in the all-pairs combination table constructed by you. Make sure you list the corresponding combination(s) and the test case number(s) beneath each dependency risk.

Performance Test (Revised) (100 points) Date: __________________ Student#_______________ For the Page Setup function of Microsoft PowerPoint application, whose screenshot is provided below, develop a series of tests for this function by performing equivalence class and boundary value analysis on it.

Instructions: 1. Please type all your answers using an available text editor (MS Word) and/or MS Excel on your computer and e-mail all your files to sowmya_testing_a2529@yahoo.com. 2. Name each file starting with your student number, for example, Student#_EqClassTable.doc, Student#_All-Pairs.doc and Student#_Relationships.doc.

Checklist: Make sure you include the following in your analysis:
a. List of variables; indicate input or output, their dimensions and the data type they map to.
b. Equivalence class table(s) that shows the complete equivalence class and boundary value analysis for the function under test.
c. All-pairs combination.
   i. Indicate what variables you would select for doing all-pairs combination. Justify your selection.
   ii. Indicate which test cases of the chosen variables you will use for doing all-pairs. Justify your selection.
   iii. Show all iterations. Give relevant comments when you backtrack and redo any ordering.
d. Discuss the relationships that exist between variables, if any, and give examples of how you would test them.

Appendix O: Questionnaires (Day 1-Day 5 Questionnaires in Training Material)

Evaluation of Domain Testing Training Day 1 Date: _________________


Please take a moment to complete this questionnaire. Your response will help me improve my training.

1. How would you rate the overall clarity of today's lecture? Excellent / Good / Fair / Poor
2. How would you rate the overall clarity of the pretest? Excellent / Good / Fair / Poor
3. How would you rate your overall satisfaction with today's lecture? Excellent / Good / Fair / Poor
4. How would you rate your overall satisfaction with the pretest? Excellent / Good / Fair / Poor
5. How would you rate today's session in terms of how interesting it was? Excellent / Good / Fair / Poor
6. Rate your confidence when answering the questions on the pretest. Excellent / Good / Fair / Poor
7. Was enough time given to solve the pretest? More than enough / Enough / A little short of enough / Very little time provided
8. Were too many or too few questions included in the pretest? Too many questions / Enough / A little short of enough / Very few questions
9. How would you rate your competence in doing domain testing? Excellent / Good / Fair / Poor

10. What was the most useful part of today's session?

11. What did you like best about today's session?

12. What was the least useful part of today's session?

13. What other information would you like to see added to today's lecture and test?

14. Would you like to suggest specific improvements to today's lecture?

15. Would you like to suggest specific improvements to the pretest?

Evaluation of Domain Testing Training Day 2 Date: _________________


Please take a moment to complete this questionnaire. Your response will help me improve my training.

1. How would you rate the overall clarity of today's lecture? Excellent / Good / Fair / Poor
2. How would you rate the overall clarity of today's exercises? Excellent / Good / Fair / Poor
3. How would you rate your overall satisfaction with today's lecture? Excellent / Good / Fair / Poor
4. How would you rate your overall satisfaction with today's exercises? Excellent / Good / Fair / Poor
5. How would you rate today's session in terms of how interesting it was? Excellent / Good / Fair / Poor
6. Rate your confidence when solving exercises. Excellent / Good / Fair / Poor
7. Was enough feedback provided for the exercises? More than enough / Enough / A little short of enough / Very little
8. Was enough time given to solve the exercises? More than enough / Enough / A little short of enough / Very little time provided
9. What was the most useful part of today's session?

10. What did you like best about today's session?

11. What was the least useful part of today's session?

12. What other information would you like to see added to today's lecture?

13. What other information would you like to see added to today's exercises?

14. Would you like to suggest specific improvements to today's lecture?

15. Would you like to suggest specific improvements to today's exercises?

Evaluation of Domain Testing Training Day 3 Date: _________________


Please take a moment to complete this questionnaire. Your response will help me improve my training.

1. How would you rate the overall clarity of today's lecture? Excellent / Good / Fair / Poor
2. How would you rate the overall clarity of today's exercises? Excellent / Good / Fair / Poor
3. How would you rate your overall satisfaction with today's lecture? Excellent / Good / Fair / Poor
4. How would you rate your overall satisfaction with today's exercises? Excellent / Good / Fair / Poor
5. How would you rate today's session in terms of how interesting it was? Excellent / Good / Fair / Poor
6. Rate your confidence when solving exercises. Excellent / Good / Fair / Poor
7. Was enough feedback provided for the exercises? More than enough / Enough / A little short of enough / Very little
8. Was enough time given to solve the exercises? More than enough / Enough / A little short of enough / Very little time provided
9. What was the most useful part of today's session?

10. What did you like best about today's session?

11. What was the least useful part of today's session?

12. What other information would you like to see added to today's lecture?

13. What other information would you like to see added to today's exercises?

14. Would you like to suggest specific improvements to today's lecture?

15. Would you like to suggest specific improvements to today's exercises?

Evaluation of Domain Testing Training Day 4 Date: _________________


Please take a moment to complete this questionnaire. Your response will help me improve my training.

1. How would you rate the overall clarity of today's lecture? Excellent / Good / Fair / Poor
2. How would you rate the overall clarity of today's exercises? Excellent / Good / Fair / Poor
3. How would you rate your overall satisfaction with today's lecture? Excellent / Good / Fair / Poor
4. How would you rate your overall satisfaction with today's exercises? Excellent / Good / Fair / Poor
5. How would you rate today's session in terms of how interesting it was? Excellent / Good / Fair / Poor
6. Rate your confidence when solving exercises. Excellent / Good / Fair / Poor
7. Was enough feedback provided for the exercises? More than enough / Enough / A little short of enough / Very little
8. Was enough time given to solve the exercises? More than enough / Enough / A little short of enough / Very little time provided
9. What was the most useful part of today's session?

10. What did you like best about today's session?

11. What was the least useful part of today's session?

12. What other information would you like to see added to today's lecture?

13. What other information would you like to see added to today's exercises?

14. Would you like to suggest specific improvements to today's lecture?

15. Would you like to suggest specific improvements to today's exercises?

Evaluation of Domain Testing Training Day 5 Date: _________________


Please take a moment to complete this questionnaire. Your response will help me improve my training.

1. How would you rate the overall training in terms of how interesting it was? Excellent / Good / Fair / Poor
2. How would you rate your overall satisfaction with the training? Excellent / Good / Fair / Poor
3. How would you rate the overall clarity of the posttest? Excellent / Good / Fair / Poor
4. How would you rate your overall satisfaction with the posttest? Excellent / Good / Fair / Poor
5. Rate your confidence when answering the questions on the posttest. Excellent / Good / Fair / Poor
6. Was enough time given to solve the posttests? More than enough / Enough / A little short of enough / Very little time provided
7. Were too many or too few questions included in the posttests? Too many questions / Enough / A little short of enough / Very few questions
8. How would you rate your competence in doing domain testing? Excellent / Good / Fair / Poor
9. How likely is it that you will recommend this training to somebody else? Very likely / Likely / Less likely / Not likely

10. What was the most useful part of this training?

11. What did you like best about the training?

12. What was the least useful part of the training?

13. What other information would you like to see added to or deleted from today's tests?

14. Would you like to suggest specific improvements to the posttests?

15. Would you like to suggest specific improvements to the training?

Appendix P: Instructional Objectives

Instructional Objectives

Basic Instructional Objectives:
1. Ability to analyze simple integer fields/variables, defined in terms of a range of values, and do equivalence class analysis and boundary value analysis on them.
2. Ability to analyze simple floating-point fields/variables, defined in terms of a range of values, and do equivalence class analysis and boundary value analysis on them.
3. Ability to analyze simple string fields/variables, defined in terms of a range of values, and do equivalence class analysis and boundary value analysis on them.
4. Ability to analyze simple enumerated variables/fields.
5. Ability to analyze variables with multiple sub-ranges and do equivalence class analysis and boundary value analysis on them.
6. Ability to analyze variables along one dimension.
7. Ability to analyze variables along multiple dimensions.
8. Ability to analyze non-numbers for strictly numeric values.
9. Ability to analyze characters in terms of ASCII.
10. Ability to identify variables of a given function/program/application and classify them as input or output variables.
11. Ability to identify independent and dependent variables of a function.
12. Ability to identify dependency relationships of the dependent variables.
13. Ability to do all-pairs combination on a given set of variables.
14. Ability to calculate the total number of possible combinations (total number of test cases) of all fields of a function/program.
15. Ability to look at a program or function, identify variables/fields, determine the kind of values each takes, determine the dimensions of each variable, and do equivalence class analysis and boundary value analysis along each of the dimensions, if applicable.

Higher Order Objectives (HOO):
1. Ability to recognize patterns and generalize.
2. Ability to translate the verbal form of a specification into symbolic form for problem solving.
3. Ability to develop a list of risks from a specification.
4. Ability to map risks to test cases.

These higher order objectives are co-requisites of many of the basic instructional objectives listed above, which means that they are to be achieved in parallel with many of the basic instructional objectives.

Appendix Q: Blooms Taxonomy of Learning Outcomes

Bloom's Taxonomy

The following table has been retrieved from http://www.coun.uvic.ca/learn/program/hndouts/bloom.html

1. Knowledge
- observation and recall of information
- knowledge of dates, events, places
- knowledge of major ideas
- mastery of subject matter
Question Cues: list, define, tell, describe, identify, show, label, collect, examine, tabulate, quote, name, who, when, where, etc.

2. Comprehension
- understanding information
- grasp meaning
- translate knowledge into new context
- interpret facts, compare, contrast
- order, group, infer causes
- predict consequences
Question Cues: summarize, describe, interpret, contrast, predict, associate, distinguish, estimate, differentiate, discuss, extend

3. Application
- use information
- use methods, concepts, theories in new situations
- solve problems using required skills or knowledge
Question Cues: apply, demonstrate, calculate, complete, illustrate, show, solve, examine, modify, relate, change, classify, experiment, discover

4. Analysis
- seeing patterns
- organization of parts
- recognition of hidden meanings
- identification of components
Question Cues: analyze, separate, order, explain, connect, classify, arrange, divide, compare, select, infer

5. Synthesis
- use old ideas to create new ones
- generalize from given facts
- relate knowledge from several areas
- predict, draw conclusions
Question Cues: combine, integrate, modify, rearrange, substitute, plan, create, design, invent, what if?, compose, formulate, prepare, generalize, rewrite

6. Evaluation
- compare and discriminate between ideas
- assess value of theories, presentations
- make choices based on reasoned argument
- verify value of evidence
- recognize subjectivity
Question Cues: assess, decide, rank, grade, test, measure, recommend, convince, select, judge, explain, discriminate, support, conclude, compare, summarize

Appendix R: Traceability Matrices: Mapping Assessment Items to Instructional Objectives

Traceability Matrices: Mapping Assessment Items to Instructional Objectives

Day 2 Examples

Example  Level  HOO
1        2      3, 4
2        2      3, 4
3        3      2, 3, 4
4        3      2, 3, 4
5        3      2, 3, 4
6        4      2, 3, 4
7        2      3, 4

Day 3 Examples

Example  Level  HOO
1        5      1, 3, 4
2        5      1, 3, 4
3        4      2, 3, 4
4        2      2, 3, 4
5        2      1
6        5      1, 2, 3, 4
7        5      1, 2, 3, 4

The level of difficulty of the questions is based on Bloom's Taxonomy of Learning Outcomes, which is described in Appendix Q: Bloom's Taxonomy of Learning Outcomes.

Day 4 Examples

Example  Level  HOO
1        3      1
2        3      1
3        5      1, 2
4        5      1, 2

Day 2 Exercises

Exercise  Level  HOO
1         2      3, 4
2         2      3, 4
3         3      2, 3, 4
4         3      2, 3, 4
5         5      2, 3, 4
6         5      2, 3, 4
7         5      3, 4
8         3      1, 2, 3, 4

Day 3 Exercises

Exercise  Level  HOO
1         5      1, 3, 4
2         5      1, 3, 4
3         2      2, 3, 4
4         2      1
5         5      1, 2, 3, 4
6         5      1, 2, 3, 4

Day 4 Exercises

Exercise  Level  HOO
1         3      1
2         3      1
3         5      1, 2
4         5      1, 2

Test A/Test B

Question  Level  HOO
1         2      3, 4
2         2      3, 4
3         2      3, 4
4         2      2, 3, 4
5         2      1
6         3      1
7         5      1, 2, 3, 4
8         4      1, 2, 3, 4
9         4      1, 2, 3, 4

Performance Posttest

Question  Level  HOO
1         5      1, 2, 3, 4

Appendix S: Training Schedule: Examples, Exercises and Tests Timings

Training Schedule

Instructional item(s) and time allotted (minutes):

Day 1
Introductory Lecture on Domain Testing (30)
Pretest (90)

Day 2
Day 2 Lecture: Example 1 (10)
Day 2 Lecture: Example 1 (5)
Day 2 Exercises: #1 (10)
Day 2 Exercises: #2 (10)
Day 2 Lecture: Example 3 (5)
Day 2 Lecture: Example 4 (5)
Day 2 Lecture: Example 5 (5)
Day 2 Exercises: #3 (10)
Day 2 Exercises: #4 (10)
Day 2 Exercises: #5, #6 (Group Discussion) (10)
Day 2 Lecture: Example 6.1 (15)
Discussion on ASCII (10)
Day 2 Lecture: Example 6.2 (10)
Day 2 Lecture: Example 7 (10)
Day 2 Exercises: #7.1 (15)
Day 2 Exercises: #7.2 (10)
Exercise 8 (15)

Day 3
Day 3 Lecture: Example 1 (15)
Day 3 Exercises: #1 (15)
Day 3 Lecture: Example 2 (10)
Day 3 Lecture: Example 3 (10)
Day 3 Exercises: #2 (15)
Day 3 Lecture: Example 3 (10)
Day 3 Exercises: #3 (15)
Day 3 Lecture: Example 5 (5)
Day 3 Exercises: #4 (20)
Day 3 Lecture: Example 6 (20)
Day 3 Lecture: Example 7 (10)
Day 3 Exercises: #5 (40)
Day 3 Exercises: #6 (50)

Day 4
Day 4 Lecture: Example 1 (15)
Day 4 Exercises: #1 (30)
Day 4 Lecture: Example 2 (15)
Day 4 Exercises: #2 (30)
Day 4 Lecture: Example 3 (15)
Day 4 Lecture: Example 4 (30)
Day 4 Exercises: #3 (15)
Day 4 Exercises: #4 (30)

Day 5
Paper-Based Posttest (90)
Performance Test (90)

Sessions were held on Day 1 and in morning and afternoon blocks on Days 2-5.

Appendix T: Student Application for Research Involving Human Subjects

STUDENT APPLICATION RESEARCH INVOLVING HUMAN SUBJECTS

Name: Sowmya Padmanabhan Major: Computer Sciences Title of Project: Domain Testing

Date: April 29, 2003 Course: Master's Thesis

Directions: Please provide the information requested below (please type). Use a continuation sheet if necessary, and provide appropriate supporting documentation. Please sign and date this form and then return it to your major advisor. You should consult the university's document "Principles, Policy and Applicability for Research Involving Human Subjects" prior to completing this form. Copies may be obtained from the Office of the Vice President for Research.

1. List the objectives of the proposed project.
- Develop and evaluate materials for improving the teaching of domain testing, a software testing technique that is routinely taught in Florida Tech's software testing courses.
- Use the instructional materials and the evidence of their effectiveness / ineffectiveness as part of my thesis.
- Enhance the testing course and publish the enhancements at our testingeducation.org website and in journals associated with computer science education, in order to assist other faculty to improve their testing courses.

2. Briefly describe the research project design/methodology.
An experiment will last for approximately 15 hours. We are unsure how the time will be divided. We may divide the experiment into five 3-hour sub-sessions spread across five days, or two 8-hour sessions, or some other combination. We will determine these details in pilot studies that (typically) use volunteers, such as students who do research in our lab. The essence of the project design is that:
- We will provide the student with a mix of oral and written instruction on how to do some task within domain testing. An example of a task is to identify the range of values that a function can (in theory) accept as input, and then to construct a set of tests to check whether the function can in fact process those values correctly and can in fact handle out-of-range values in a sensible way.
- Then we will provide the student with sample exercises to work through, which we have designed to give the student practice in the skills needed to do the task well.
- Then we will test the student on her or his skills.
- After we have trained students in this way on several tasks, we will put this together into summary assignments and tests to see how they handle the full task.
To evaluate the results of the experiments, we will have some context-setting at the start of the experiment and some results analysis left to do at the end:
- Everyone will get a demographic questionnaire.

- Some subjects may get a learning styles questionnaire or another questionnaire related to subject characteristics (such as Myers-Briggs).
- Everyone will take a pre-test that will give us an indication of the skills they already have in the type of tasks we are teaching.
- After they go through the training, everyone will take a post-test that will check their mastery of the skills / tasks.
- At the end of a subject's training, the subject will fill out a post-test questionnaire that tells us their subjective impressions of the experiment.

3. Describe the characteristics of the subject population, including number, age, sex, etc.
An open invitation to students of Florida Tech who have CSE 1001/1002 or equivalent and MTH 2051. Outsiders who have a software testing background might also be recruited.
- Anticipated number: 20-60 (20 initially plus 10 for pilot studies)
- Age Range: 19 and above
- Sex: Male or female; gender doesn't matter
- Ethnic background: doesn't matter
- Health status: not obviously ill
- Use of special classes (pregnant women, prisoners, etc.): N/A. We will not reject women just because they are pregnant. We are not recruiting specifically for subject characteristics other than a minimum level of computing background and age of maturity.

4. Describe any potential risks to the subjects -- physical, psychological, social, legal, etc. -- and assess their likelihood and seriousness.
The usual minor risks associated with working and sitting in a classroom or conference room for several hours.

5. Describe the procedures you will use to maintain confidentiality for your research subjects and project data. What problems, if any, do you anticipate in this regard?
We will not be using any personal data collected, like names, in my thesis or elsewhere. Subjects will be assigned numbers, which they will use on all materials they hand in. We will keep a list matching subject name and number, primarily for auditing purposes. We will keep this list in an offsite file (Kaner's house) and will not share it with others unless they have a lawful need to know. We explain this in the consent form.

6. Describe your plan for obtaining informed consent (attach proposed form).
Consent will be sought using the attached consent form.

7. Discuss what benefits will accrue to your subjects and the importance of the knowledge that will result from your study.
- Training that is relevant to the participant's field of interest.
- Pay by the hour (typically $150 for the experiment or $10 per hour).

Florida Tech IRB: 2/96

8. Explain how your proposed study meets the criteria for exemption from Institutional Review Board review.
My research should be exempted because it meets the following criterion:
1. Research conducted in established or commonly accepted educational settings, involving normal educational practices, such as:
   i. research on regular and special education instructional strategies, or
   ii. research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods.
I understand Florida Institute of Technology's policy concerning research involving human subjects and I agree:
1. to accept responsibility for the scientific and ethical conduct of this research study.
2. to obtain prior approval from the Institutional Review Board before amending or altering the research protocol or implementing changes in the approved consent form.
3. to immediately report to the IRB any serious adverse reactions and/or unanticipated effects on subjects which may occur as a result of this study.
4. to complete, on request by the IRB, a Continuation Review Form if the study exceeds its estimated duration.

Signature

Sowmya Padmanabhan

Date April 29, 2003

This is to certify that I have reviewed this research protocol and that I attest to the scientific merit of the study, the necessity for the use of human subjects in the study to the student's academic program, and the competency of the student to conduct the project.

Major Advisor _____________________________ Date _______________ This is to certify that I have reviewed this research protocol and that I attest to the scientific merit of this study and the competency of the investigator(s) to conduct the study.

Academic Unit Head __________________________Date ______________

IRB Approval _______________________________ Date ______________ Name ______________________________________________ Title

Appendix U: Consent Form

CONSENT FORM: DOMAIN TESTING EXPERIMENT BY SOWMYA PADMANABHAN

We are seeking your participation in a research project in which we are developing instructional materials for teaching testing skills. We will use data from this experiment to improve Florida Tech's software testing courses and we will publish the results for use by other software testing teachers and researchers.

If you agree to participate, we may ask you to attend one or more lectures, read materials, complete practice exercises, and take written tests at various points in the study. When you do an exercise or take a test, you will fill out an answer sheet. To preserve your privacy, you will identify yourself on answer sheets with an experimenter-assigned number. You might review answers written by other students and they might review your answers. You may be assigned to work on an exercise with another student, and if so, each of you will fill out your own answer sheet.

The experiment will require several hours of your participation. If you cannot participate for all of the scheduled hours, please do not begin this experiment. We cannot use partial data and we cannot compensate you for partial participation. We will split the experiment into sessions, which together will last between 15 and 18 hours. We estimate that your participation will require __ sessions of __ hours each.

Participants in most of the phases of this work will be paid. Some of our colleagues will serve as volunteers during the exploratory phases of the experiment and will not be paid. You will be paid $__ for your participation in this study. You will be paid at the completion of your role in the experiment (all sessions assigned to you). We can afford to pay you only if you attend all the assigned sessions and complete the required assignments/exercises/quizzes. We cannot use your data if you skip any session. If we have to terminate your role in the experiment prematurely because of an error that we made in running the experiment, we will pay you for the time you have spent in the experiment to date of termination, at a rate of $10 per hour, rounded up to the nearest whole hour. Your participation will not subject you to any physical pain or risk. We do not anticipate that you will be subject to any stress or embarrassment. We will ask you to fill out one or more questionnaires that give us demographic information about you and/or that give us insight into how you learn. Your name will not be recorded on any answer sheet. You will be assigned an anonymous code number. You will use that code number on your answer sheet. Your responses will be tracked under that code number, not under your name. Any reports about this research will contain only data of an anonymous or statistical nature. Your name will not be used. For auditing purposes, the experimenter will keep a list of all people who participated in the experiment and the anonymous code assigned to them. 
That list might be reviewed by the student experimenter, Sowmya Padmanabhan, the project's Principal Investigator, Cem Kaner, or by anyone designated by the Florida Institute of Technology or an agency of the Government of the United States, including the National Science Foundation, as having such legitimate administrative interests in the project as analysis of the treatment of the subjects, the legitimacy of the data, or the financial management of the project. We will file this list in a place we consider
safe and secure and take what we consider to be reasonable measures to protect its confidentiality. We will treat it with the same (or greater) care as we would treat our own confidential materials.

Any questions you have regarding this research may be directed to the experimenter (Sowmya Padmanabhan) or to Cem Kaner at Florida Tech's Department of Computer Sciences, 321-674-7137. Information involving the conduct and review of research involving humans may be obtained from the Chairman of the Institutional Review Board of the Florida Institute of Technology, Dr. Ronald Hansrote, at 321-674-8120.

Your signature (below) indicates that you agree to participate in this research and that:
o You have read and understand the information provided above.
o You understand that participation is voluntary and that refusal to participate will involve no penalty or loss of benefits to which you are otherwise entitled; and,
o You understand that you are free to discontinue participation at any time without penalty or loss of benefits to which you are otherwise entitled, except that you won't be entitled to be paid for the experiment since you did not attend all the sessions.

____________________________________________ Participant

____________________ Date

I have explained the research procedures in which the subject has consented to participate.

____________________________________________ Experimenter

____________________ Date

Appendix V: Call for Participation: First Version

Research on Software Testing Education


Call for participants
($10/hr, 5-15 hours)

My thesis research involves developing instructional material for a widely used software testing technique. Participants in my experiment will be paid $10/hr to study the software testing technique, while I study how effective my materials are at helping them learn. I need up to 40 participants in total, including people:
o now (negotiable hours)
o in late June, 2003 (12-15 hours)
o in mid-July, 2003 (12-15 hours)

Students from Florida Tech, Brevard Community College or elsewhere are welcome so long as they meet the following pre-requisites:
o completion of two semesters of programming (equivalent to Florida Tech's CSE 1001 and 1002)
o completion of a course in Discrete Math (equivalent to Florida Tech's CSE 2051)
o have not taken any software testing course before

Please contact me immediately at spadmana@fit.edu or 321-674-7395 and let me know if you would be interested in participating in these training sessions. Once you contact me, we can talk further about specific details. Looking forward to hearing from you soon. Thanks, Sowmya Padmanabhan.

Appendix W: Call for Participation: Revised Version (after pilot studies)

Software Testing Education Research


Revised Call for participants ($10/hr, 18 hours)
My thesis research involves developing instructional material for a widely used software testing technique. Participants in my experiment will be paid $10/hr to study the software testing technique, while I study how effective my materials are at helping them learn. I have had many enthusiastic responses to my previous call for participation, and the sessions in the last week of June and the first two in July have been filled. I now need up to 4 participants for the following weeks:
o July 14-18, 2003 (18 hours)
o July 21-25, 2003 (18 hours)
o July 28-Aug 1, 2003 (18 hours)

Students from Florida Tech, Brevard Community College or elsewhere are welcome as long as they meet the following pre-requisites:
o completion of at least two programming courses (C, C++, or Java)
o completion of a course in Discrete Math (equivalent to Florida Tech's CSE 2051) and/or completion of Calc 1, 2 and 3 and/or completion of courses in probability, statistics and set theory
o have not taken any software testing course before

Please contact me at spadmana@fit.edu or 321-674-7395 and let me know if you would be interested in participating in these training sessions. Once you contact me, we can talk further about specific details. Thanks, Sowmya Padmanabhan.

Appendix X: Performance Tests Evaluation Report by Dr. Cem Kaner

Analysis of Performance Test Results for Sowmya Padmanabhan's Thesis Data
Cem Kaner, J.D., Ph.D.
January 30, 2004

This memo summarizes my notes on the results of two experiments conducted by Sowmya Padmanabhan in the summer of 2003. I am the research advisor for that work. Three of us, James Bach, Pat McGee and I, have done this analysis. I have discussed analysis criteria and results with both McGee and Bach. Their analyses have influenced my analysis to some degree, and I have somewhat influenced their analysis. My primary influence on them was in helping them develop their performance criterion. The evaluation criterion that I laid out for them is this: Compare the results from this Performance Test to the results that you would expect from a tester who claimed to have a year's experience and who claimed to be good at domain testing.

All three of us have substantial testing experience, and have interviewed, hired and supervised testers. All of us have our own sense of what we would expect from a 1-year tester, and we also differ in the details of how we would do a domain testing analysis. We also recognize that there are different approaches to domain testing and that our goal was to determine whether this person looked reasonably experienced and whether the work looked reasonably sound. I am confident that our discussions influenced how we articulated our conclusions, but did not influence our conclusions. Each of us was independently uncomfortable with the results.

I analyzed two experiments' results. Experiment 1's subjects had been mis-instructed on all-pairs testing. Additionally, the wording of some of the Experiment 1 questions was confusing. Accordingly, I am ignoring the Experiment 1 data on combination testing. The students took this as an open-book exam. They had lecture notes, checklists, and other materials available.
The lecture notes and handouts presented a series of concepts associated with domain testing, and then boiled the concepts down into step-by-step procedures.

Overview
Student performance was remarkably consistent. They approached the task in the same order, identified the same variables, analyzed and discussed them in essentially the same way, presented the results in extremely similar tables, and made the same mistakes. Overall, I think that the students learned the procedures well, but didn't learn domain testing well. The answers are both more and less than I would expect from a 1-year experienced tester. The tables are tidy and well-structured. Each variable is analyzed in terms of several dimensions. The all-pairs analyses of the Experiment 2 students look OK. The students' descriptions of risks are uninsightful and largely redundant with the tests.

If one student had provided a given answer, I might have been impressed with it. However, the stereotyping of the answers is very pronounced, suggesting that they are following a procedure (such as a checklist) or working directly from an example, rather than thinking through what they are doing. For example, the students provide the same dimensions and they miss the same dimensions. The students missed issues of interaction, they failed to consider these dimensions in terms of output effects, and they failed to consider the boundaries implicated by a critical boundary-related error message. In terms of presentation, these charts are better than I would expect from one-year testers. In terms of content and insight, I would expect better from an experienced tester.

Now for the details:

Analysis of the Results


This breaks down into several sections:
o List of variables; indicate input or output, their dimensions and the data type they map to.
o Discuss the relationships that exist between variables, if any, and give examples of how you would test them.
o Equivalence class table(s) that shows the complete equivalence class and boundary value analysis for the function under test.
o All-pairs combination.
   o Indicate what variables you would select for doing all-pairs combination. Justify your selection.
   o Indicate which test cases of the chosen variables you will use for doing all-pairs. Justify your selection.
   o Show all iterations. Give relevant comments when you backtrack and redo any ordering.

List of variables; indicate input or output, their dimensions and the data type they map to.
List of variables: I see 7 variables, highly interrelated:
o Slides Sized For
o Width
o Height
o Number slides from
o Orientation of slides
o Orientation of Notes, handouts & outline
o Help (? Key)

All of the subjects identified the first 6 variables. None of them considered Help. None identified any other variables.

Input or Output: The first six variables are both input and output variables. On the input side, you enter or select a value (that's the input). On the output side, the value of the variable controls how the program displays and prints slides. All of the subjects identified these six variables as input variables or they made no explicit identification and then treated the variables (in the equivalence class table) as input variables. Only 2 subjects, JLT11S4 and JL1418S2, explicitly identified any variables as output. Neither of these actually analyzed the output variables. Their equivalence class table dealt only with the input-related issues for these variables. One additional subject, JL28A1S4, mentioned output tests (for example, s/he said of # of slides that an error could be failure to print the slides correctly) but s/he listed all variables only as input variables and s/he didn't explore the output-related risks in any detail. If I put the "width" variable in front of 10 experienced testers, especially if I asked them to identify output variables, I would expect several, perhaps all, to deal with at least one or two output dimensions associated with this variable. I interpret the subjects' failure to consider the output issues involved with these variables as partially due to their inexperience and partially due to negative transfer from a course that focused primarily on analysis of input variables.

Dimensions: The question is, what should we vary to test this variable? Here's an example: Slides Sized For
o It is both an input and an output variable. You select a value (input) but that value controls how the program displays and prints slides. If you change this with an existing presentation, you may change the font size of the existing slides.
o Has only two obvious dimensions, as an input variable. One dimension is the enumerated list. The other is the side effect impact on the other variables (height, width, etc.). As an output variable, it changes a lot of things, such as default font size, the font sizes of existing text, the default scaling of the slide on the screen, the ability of the attached printer to print the slide.

To test the input variable, one obvious dimension involves the value that the program appears to be requesting. Examples:
o the Slides Sized For variable is enumerated. One "dimension" is the set of all values in the enumerated list.
o the Width variable is floating point. The dimension runs from smaller Widths to larger Widths. It appears to run from 1 to 56 inches.

There are other aspects of the input variable that are orthogonal to the identified dimension. For example, for the width:

o You could type more or less quickly
o You could make more or fewer edits of the field
o You could type letters, digits, or non-alphanumeric characters
o You could type between 1 and many characters, such as 1 or 1.2345678. There are some potentially interesting games to play with this, like how it handles many leading zeros or many leading spaces or many digits before instead of after the decimal or how many decimal points, etc.

If I put the "width" variable in front of 10 experienced testers, I doubt that all 10 would identify character set and number of characters as additional dimensions to test. All of the subjects identified these. Either they have all learned to be very thorough, or they are following the same script. In fact, these three were the dimensions identified in the course lecture notes as dimensions of the numeric fields. Rather than learning that these were examples, students seemed to have learned that they were the exhaustive list. For example, none of the subjects suggested time or edits as input dimensions. Only one subject, JL2125S4, suggested a "dimension" off the beaten path: s/he made two test cases with more than one decimal point, but treated these as isolated tests, not as a distinct dimension.

To test the output variable, I have several dimensions. Here are a few:
o Changing slide size changes the font size on existing slides. The font size can range from tiny to huge.
o Changing slide size changes the set of printers that I can print on. I can print a really big slide on a big plotter, but only a very small slide on a small photo printer.
o Depending on the printer selected, if I attempt to set a slide size that exceeds the printable area, I get an error message: "The current page size exceeds the printable area of the paper in the printer. Click Fix to automatically fit the page to the paper. Click Cancel to return to the Page Setup dialog box. Click OK to continue with the current page size." (NOTE: This is the wording on the Macintosh. Messages and parameter limits are slightly different on Windows machines, but the issues and my conclusions are all the same.) A paper size is either smaller than the printable area (no error message) or larger (error message and if you print or print preview, you don't get the full slide.)
o None of the subjects identified any of these dimensions or any others like them.
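The range dimension described above lends itself to a compact sketch. The following Python fragment is illustrative only: the 1-56 inch limits are empirical observations from testing, not documented bounds, and the case names are my own. It enumerates the classic boundary picks for a bounded numeric field such as Width:

```python
def boundary_tests(lo, hi, step=0.01):
    """Classic boundary-value picks for a numeric field whose valid
    range appears to be [lo, hi]. The range itself is an assumption
    drawn from exploratory testing, not from a specification."""
    return {
        "below lower boundary (invalid)": lo - step,
        "lower boundary (valid)": lo,
        "just above lower boundary": lo + step,
        "interior value": (lo + hi) / 2,
        "just below upper boundary": hi - step,
        "upper boundary (valid)": hi,
        "above upper boundary (invalid)": hi + step,
    }

# Width appears to run from 1 to 56 inches.
width_cases = boundary_tests(1.0, 56.0)
```

These picks cover only the range dimension; the orthogonal input dimensions noted earlier (character set, number of characters, speed and number of edits) would each need their own value lists, and the output-side dimensions need separate tests entirely.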

Discuss the relationships that exist between variables, if any, and give examples of how you would test them.
This question was asked only in Experiment 2. Experiment 1 had a question with similar intent, but it was too confusingly worded, so I have only considered the data from Experiment 2. None of the 5 subjects "discussed" anything. They started by identifying certain variables as related and then showed a table that lists specific values. This (subject a2529s5) is typical:

Orientation of Slides is dependent upon Width, Height and Width, Height are dependent upon Slides sized for. The tests are as follows:
Test Case # | Slides sized for | Width | Height | Orientation of slides
1           | On screen show   | 7.5   | 10     | Portrait
2           | On screen show   | 10    | 7.5    | Landscape
3           | Letter paper     | 7.5   | 10     | Portrait
4           | Letter paper     | 10    | 7.5    | Landscape
5           | Ledger paper     | 9.99  | 13.32  | Portrait
6           | Ledger paper     | 13.32 | 9.99   | Landscape
7           | A3 paper         | 10.5  | 14     | Portrait
8           | A3 paper         | 14    | 10.5   | Landscape
9           | A4 paper         | 7.5   | 10.83  | Portrait
10          | A4 paper         | 10.83 | 7.5    | Landscape
11          | B4 paper         | 8.88  | 11.84  | Portrait
12          | B4 paper         | 11.84 | 8.88   | Landscape
13          | B5 paper         | 5.88  | 7.84   | Portrait
14          | B5 paper         | 7.84  | 5.88   | Landscape
15          | 35mm slides      | 7.5   | 11.25  | Portrait
16          | 35mm slides      | 11.25 | 7.5    | Landscape
17          | Overhead         | 7.5   | 10     | Portrait
18          | Overhead         | 10    | 7.5    | Landscape
19          | Banner           | 1     | 8      | Portrait
20          | Banner           | 8     | 1      | Landscape
21          | Custom           | 8.17  | 10.67  | Portrait
22          | Custom           | 10.67 | 8.17   | Landscape

There's no rationale for these tests. It is particularly incongruous to see this type of table after seeing an equivalence class chart that emphasized risk. Every test case in that chart was tied to a purported risk. This chart ignores risk completely. From an experienced tester, I would either expect to see both sets of test cases justified or designed in terms of risks, or neither. I interpret this oddity as follows: The students were trained to include a risk column in their equivalence class charts. Their risk analyses in those charts are unimpressive, but the charts follow the form of a risk analysis. In contrast, I don't think that it was drilled into the students that a chart like this (I'll call it a relationship test chart) needs a risk analysis, and so the students didn't supply one. If the students had adopted the notion of risk as a way of thinking about test design, they would have used it and talked about it when creating the relationship test chart. Given that all of the students operated in the same way, I don't see this as an idiosyncrasy of one tester. Instead, I conclude that the students are following a set format rather than thinking about risk (or perhaps about much of anything).

Another oddity in all five answers was a fifth section. The performance test called for four parts:

Checklist: Make sure you include the following in your analysis:
a. List of variables; indicate input or output, their dimensions and the data type they map to.
b. Equivalence class table(s) that shows the complete equivalence class and boundary value analysis for the feature under test.
c. All-pairs combination table.
d. Discuss the relationships that exist between variables and give examples of how you would test them.

All five subjects provided a fifth table like the following. This particular one is from A2529S3. A2529S5 has some additional text, but no additional explanation of the test cases and no discussion of risk. Perhaps this is in response to the "relationships" question (though in 2 of the 5 cases, it was explicitly labeled as Part E (a2529s1) or part 5 (a2529s5)).

Combination of Error handling test cases:

Test Case # | S      | W | H  | O
1           | Custom | 0 | 0  | Portrait
2           | Custom | 1 | 57 | Landscape
3           | Custom | 0 | 57 | Portrait

Equivalence class table(s) that shows the complete equivalence class and boundary value analysis for the function under test.
The equivalence class tables were similar across all subjects. All of the subjects included the enumerated variables in the equivalence class tables. This makes little sense. The individual values are all "classes" of size 1. The risk descriptions associated with these individual items are information-free, for example "Failure of Slides sized for Letter" or "Failure to display the right slide" listed beside each possible value for "Slides Sized For." This reflects a weakness in the instructional materials. Students were encouraged to handle enumerated variables in exactly this way, up to and including risk statements like "1) Failure to process option 1 correctly" and "1) Mishandling of values outside the enumerated list of values" (Course materials, AppendixA.doc). The analyses for all of the other variables also marched through the table in lockstep, virtually the same analysis by every tester, using almost the same order of cases, the same verbose risk descriptions, etc. as each other and as the course notes.

It's hard to comment on this work. If one individual drew this table and filled it in this way, I'd probably be impressed. The tables are tidy and organized. The enumerated fields add noise, but if we ignore those, the analyses of the integer and floating point fields make it appear as though the person is thinking creatively. Whether or not these particular tests are likely to expose a bug is less interesting than the fact that the tester is thinking beyond the literal wording of the problem. The illusion evaporates when you see all of the answers together, and then review the course notes. The two additional dimensions (number of characters and treatment of nondigits) are prefabricated responses, laid out specifically in the course and applied mechanically. When I look for other evidence of creativity, I see none. Here are a few examples of things that I would expect from a sample of moderately experienced testers who claimed to have some experience with domain testing: If I was considering the number of characters we could enter into a field, I'd paste several variations of a huge number of characters, such as 65533 0's (or spaces) followed by a 1. (If that yielded any response, I'd go to 2^32 characters or more.) I don't think it's unreasonable for the tester to make a working assumption that the empirical limits on the variable (how many characters it accepted in testing) are the intended limits, but I would expect the tester to raise the issue, recognize the uncertainty involved, and try to check it or note that this should be checked. Even if the instructions hadn't instructed the tester to consider output variables, it is common (and highly desirable) to check the result of a test. For example, when you set the page size, you don't know whether it has been correctly set until you try to use the new
page size, perhaps via print or print preview or display with the rulers set. Doing this simple check with a large page size could have exposed the font size changes, the display scale changes, and other effects of page setup. The subjects were doing the performance test using their computers. To discover that 56" was the maximum page size, I assume that they had to actually try large numbers, including 56. But when you try 56, you get the page size error message. This should alert any tester that there is a new boundary and a new set of equivalences to explore.

In checking the maximum on Slide Number (9999), I would have used this as a starting number and then created a presentation with at least two slides (so that we cross 10000 as a slide number.)

All-pairs combination.
The subjects in the first experiment were mis-instructed on all-pairs and the question was not easily understood. Therefore I consider only the results of Experiment 2. Subjects could pick different variables, so I'll address the 5 subjects' answers individually.

The subjects justified their selections of variables by choosing variables they considered independent. The justification essentially was, "These are independent, therefore they are what we want." On a time-limited exam, I think that's good enough. None of the subjects justified their test case selections. As in so many other ways, we see an emphasis on mechanical selection (causing selection of way too many test cases), rather than thinking through what tests would satisfy the desired criteria. Asked to describe their thinking, they are silent.

The working tester is dealing with variables and combinations more complex than these. The performance test was more complex than the course examples but much less complex than most of the things the testers will actually test. A tester who cannot extend what she is doing beyond a set of pre-specified routines to simple situations, and who cannot explain and justify what she is doing, will not survive in a practitioners' environment.
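As background for the analyses that follow, the mechanics of pairwise selection can be sketched as a simple greedy pass over the cartesian product. This is an illustrative sketch, not a production tool: it guarantees that every pair of values across any two variables appears in at least one test, but it does not produce a minimal matrix. The variable values below are hypothetical stand-ins for the subjects' variables.

```python
from itertools import combinations, product

def all_pairs(factors):
    """Greedy pairwise test selection (illustrative, not minimal).
    factors: list of lists of values, one list per variable."""
    # Every pair of (factor index, value) choices that must be covered.
    uncovered = {
        frozenset([(i, a), (j, b)])
        for i, j in combinations(range(len(factors)), 2)
        for a in factors[i]
        for b in factors[j]
    }
    tests = []
    for candidate in product(*factors):
        pairs = {
            frozenset([(i, candidate[i]), (j, candidate[j])])
            for i, j in combinations(range(len(factors)), 2)
        }
        if pairs & uncovered:   # keep only candidates that add coverage
            tests.append(candidate)
            uncovered -= pairs
        if not uncovered:
            break
    return tests

# Hypothetical values standing in for the subjects' variables.
sizes = ["On screen show", "Letter paper", "Custom"]
orientations = ["Portrait", "Landscape"]
start_numbers = ["1", "9999"]
tests = all_pairs([sizes, orientations, start_numbers])
```

For these factors the sketch emits 10 tests against a 12-test full cross product; a hand-packed minimal pairwise matrix needs only 6. Purpose-built pairwise tools pack pairs much more tightly, and that economy (covering the required pairs with far fewer tests than the full cross product) is exactly what the subjects failed to pursue.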

1. Indicate what variables you would select for doing all-pairs combination. Justify your selection.
2. Indicate which test cases of the chosen variables you will use for doing all-pairs. Justify your selection.
3. Show all iterations. Give relevant comments when you backtrack and redo any ordering.
Subject A2529S1

Picked 3 variables: Slides Sized For, Slide Number and Notes Orientation. The subject mistakenly believed that Slide Orientation was determined by Slides Sized For; actually, for each size, you can specify portrait or landscape.

For the values of Slides Sized For, the subject used the 11 enumerated values. But one of these is Custom, which allows the full range of sizes. This is where the boundary cases (1" x 1" and 56" x 56") can be entered. If you're interested in testing slide size, these are certainly two key cases to include. If I was to test the full enumerated set, I'd include the 2 custom values, for 12 cases total.

For the values of Starting Slide Number, the subject specified 6 values.
o One test required an (unspecified) 1-digit number; another an (unspecified) 4-digit number
o One test required an (unspecified) number that included a 1; another an unspecified number that included 9
o One test required 1; another required 9999.
o Clearly, 1 and 9999 meet the other requirements, so there are 2 cases to test here, not 6.

For the values of Notes Orientation, there are 2 cases, as the subject specified.

Conclusions to this point:
o This subject is working mechanically without thinking about the variables being tested.
o No one with work experience in this type of testing would do this, because if they had ever done it, they would have realized how much time they were wasting when they tested from this matrix.
o Additionally, the failure to realize that 1x1 and 56x56 are interesting cases here suggests to me that boundary analysis doesn't come naturally to this subject.
o This final piece of work would lock down my conclusion that this subject doesn't have the understanding that I would expect of a tester with a year of practical experience with domain testing.

Using the variables that this subject used, I would have a combination involving {12 cases} x {2 cases} x {2 cases}, which would have resolved into a trivially easy to solve 24-case all-pairs matrix. The subject's analysis involves an {11} x {6} x {2} matrix, which reduces to a trivially easy 66-case table. The subject carried out this mechanical task correctly.

Subject A2529S2

This subject's 4 variables were:
Width [Test case # 1,2,6,7,8,9,13,14] == [w1, w2, ..., w8]
Height [Test case # 1,2,6,7,8,9,13,14] == [h1, h2, ..., h8]
Slides From [Test case # 1,2,6,7,11,12] == [s1, s2, ..., s6]
Notes, handouts and outline [Test case # 1,2] == [n1, n2]

For Width, I could meet the 8 criteria with three cases: 1, 56., and 9.0123. For Height, I could meet the 8 criteria with the same three values. For Start Slides From, the second case is strange because it says you can enter 32 characters into this field. You can only enter 4. If we correct 32 to 4, two cases (0 and 9999) cover the 6 criteria. For Notes orientation, I would use the same two values. As with A2529S1, I conclude to this point:
o This subject is working mechanically without thinking about the variables being tested.
o No one with work experience in this type of testing would spec this many test cases, because if they had ever done it, they would have realized how much time they were wasting when they tested from this matrix.

This subject didn't have the chance to blow the same boundaries (1x1 and 56x56) as A2529S1, but a domain tester is constantly looking for ways to cut the number of tests to run from an unreasonable number (the subject specifies 64 tests) down to a reasonable set of representatives. I can meet the tester's criteria with a matrix of 9 tests rather than 64. Given an all-pairs task involving {8} x {8} x {6} x {2} values, construction of the 64-case matrix is a routine, easy, mechanical task. The subject did it correctly.

Subject A2529S3

This subject's 4 variables were:
Width [TC# 1,2,9,10,11,18,19] => [W1,W2,W3,W4,W5,W6,W7]
Height [TC# 1,2,9,10,11,18,19] => [H1,H2,H3,H4,H5,H6,H7]
Orientation of Notes, handouts and outlines [TC# 1,2] => [O1,O2]
Number slides from [TC# 1,2,9,10,15,16] => [N1,N2,N3,N4,N5,N6]

For width and height, you can satisfy the 7 criteria (each) with 1, 56., and 9.0123 (3 tests rather than 7 for each variable). For Number slides, you can satisfy the 6 criteria with 0 and 9999 (two tests). The comments and conclusions that applied to A2529S2 apply here in the same way.

Subject A2529S4

This subject's 4 variables were:
Width [TC# 1,2,7,8,9,15,17] => [W1,W2,W3,W4,W5,W6,W7]
Height [TC# 1,2,7,8,9,15,17] => [H1,H2,H3,H4,H5,H6,H7]
Orientation of Notes, handouts and outlines [TC# 1,2] => [O1,O2]
Number slides from [TC# 1,2,7,8,11,12] => [s1,s2,s3,s4,s5,s6]

For width and height, you can satisfy the 7 criteria (each) with 1, 56., and 9.0123 (3 tests rather than 7 for each variable). For Number slides, you can satisfy the 6 criteria with 0, 1 and 9999 (three tests). There should be only two tests, but the subject misidentified the lower boundary as 1 instead of 0. The comments and conclusions that applied to A2529S2 and A2529S3 apply here in the same way.

Subject A2529S5

The subject's analysis, and my comments, are the same as for A2529S1.

Evaluation of the Results of the Thesis Experiment


In Padmanabhan's proposal defense, we (the advisory committee) agreed that it was important for Padmanabhan to obtain a positive result. That is, she was to demonstrate that she understood domain testing well enough, and instructional design well enough, that she could effectively teach novice testers how to do domain testing. The performance test results convince me that Padmanabhan failed in this ultimate objective. However, I think she failed in an interesting way and that the materials she developed and the process she followed to develop the materials are of barely sufficient quality to satisfy the requirements for a Master's thesis. I say "barely sufficient quality" because many of my objections to the subjects' performance apply to Padmanabhan's Answer Key:
o Enumerated variables are tabled the same way you would table variables that you can perform equivalence class analysis on.
o Width, height, and starting number are analyzed in the same stereotyped ways; the limits discovered through testing are treated as the known-to-be-correct limits.
o In the all-pairs analysis, 7 values are used for width (and height) instead of the 3 that would satisfy the 7 criteria, and 6 values are used for Starting Number instead of 2.

The good news is that the subjects were able to generate the same answers that Padmanabhan considered correct. The bad news is that the subjects' approach was so mechanical that they generated these answers.

In a recent review of the domain testing literature (Teaching domain testing: A status report. In press, Proceedings of the Conference on Software Engineering Education & Training 2004, Norfolk, VA: March, 2004), I noted that the descriptions of domain testing in books and articles written for practitioners generally provided worked examples that readers were expected to generalize from, simple explanations, and/or lists of heuristic rules for determining what values were equivalent. In my opinion, the instructional materials developed by Padmanabhan are consistent with the field's most common presentation style for domain testing. She took their approach, did a more thorough job of presenting it to students, and obtained reasonable results when she asked questions that involved straightforward application of what she had taught. The specific results could certainly be improved on, but I don't think that those problems are at the heart of the learning problems faced by the subjects when they tried to transfer their knowledge to the performance test.

The performance test that we designed is a test of the student's ability to generalize and transfer what they learned in the classroom to a more complex real-life example. Her students failed.

Up to this point, I had tried teaching domain testing many times (perhaps 100 teachings of variations of my Black Box Testing course), and was able to look at student performance on assignments or in-class exercises in perhaps 20 of those teachings. I have been generally disappointed with the results, especially with results from novice testers. In the university courses, most students eventually figured out how to use the technique (I think), but it keeps taking a surprisingly long time.
I had expected that we could solve this problem with more exercises, more examples, more drill in the use of the technique and more oral presentation of the reasoning behind the exercises, examples and drill. Padmanabhan's course takes the approach that I was using (and that is common in the literature and in other instructors' courses) to a higher level. Her teaching is more thorough, has more examples, and more formative assessment. The lesson that I take from her results is not that she didn't do a good enough job, but that failure despite the acceptable job that she did indicates that we need to rethink the teaching strategy in common use. I think that the modern literature on mathematics education, focusing on how to build higher-order understanding rather than procedural knowledge, can provide a good foundation for the next generation of course (and course-related experimentation), but I think that's the subject for another thesis.

Appendix Y: Performance Tests Evaluation Report by James Bach

Analysis of Student Performance in Domain Testing


My name is James Bach. I am a testing consultant and trainer. I've been in the computing field for 21 years. About 8 years of that time I spent as a working test manager. I was asked to analyze the performance of five people ("student testers") who had been instructed in a particular technique of domain testing. My goal was to comment on how the performance of these testers compares to the performance I would expect from a typical working software tester ("working testers") with 1-2 years experience in the field, working under competent supervision for an organization that expects them to test productively.

General Notes On My Critique


I did not witness the training the students received, and I don't know exactly what definitions and examples they were trained with. Hence, comparing their behavior to the things I would expect from testers who work for me may or may not be relevant to this exercise. It's not clear from the instructions how far the analysis should be taken. It may be reasonable for students to assume that page setup in PowerPoint is a relatively low-risk feature, such that a highly detailed analysis would be overkill. On the other hand, if the exercise is intended to showcase the skills of the student in analyzing even a very important feature, it may be appropriate for them to do a more thorough job than would normally occur in the field for a page setup dialog box. Therefore, I'm not sure whether to criticize the students for glossing over details or reward them for pragmatic brevity. I'll make comments both ways.

Definitions
Equivalence class: a set of tests or test data that are equivalent with respect to some theory of risk. In other words, with respect to a type of problem we have in mind, we expect that every test represented in an equivalence class would reveal that problem, or fail to reveal it. If we believe some would reveal it and others not, that wouldn't be an equivalence class. An equivalence class may also be called a partition.

Domain: a set of related tests or test data that might be partitionable into equivalence classes.

Variable: anything that can vary in the product's state or environment. Unless the product code is self-modifying, code is not normally considered a variable.
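The definition of an equivalence class as tests grouped by a theory of risk can be sketched in a few lines of code. This is my illustrative sketch, not part of the report: the risk theory and the candidate values below are hypothetical.

```python
# A minimal sketch: an equivalence class is induced by a theory of risk,
# expressed here as a predicate over test values. With respect to that
# one risk, every value in a class is expected to behave the same way.
def partition(tests, reveals_problem):
    """Split tests into two classes with respect to one risk theory:
    those expected to reveal the problem and those expected not to."""
    revealing = [t for t in tests if reveals_problem(t)]
    non_revealing = [t for t in tests if not reveals_problem(t)]
    return revealing, non_revealing

# Hypothetical risk theory: "width values above 56 are mishandled."
candidate_widths = [0, 1, 28, 56, 57, 100]
above_max, within = partition(candidate_widths, lambda w: w > 56)
# above_max -> [57, 100]; within -> [0, 1, 28, 56]
```

Under this framing, a set of values is an equivalence class only relative to a stated risk; change the predicate and the partition changes with it.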

Expectations
The analysis of domains and dimensions may be based on product specifications, programmer interviews, or empirical observation of product behavior (exploratory testing), preferably all three, since problems may be identified by analyzing inconsistencies among the three sources.

Working testers always use empirical observation of the product, regardless of whether they also used specifications or interviews, except in cases where observation is not feasible.
Working testers often avoid making strong claims about domains and dimensions unless they have multiple sources or other evidence that justifies a strong claim. In other words, if a tester observes that a field can take numbers up to 50, that does not necessarily mean that 50 is a correct boundary. Perhaps the boundary is supposed to be 64 and the tester is simply seeing a bug. On the other hand, if there is corroborating evidence to support an inference about the boundary (perhaps 50 corresponds to the number of states in the United States), that can provide a basis for inferring the boundaries of a particular domain.
Working testers routinely identify and test interactions among variables as part of this kind of analysis.
Working testers often mention sources and limitations of their analyses.
Working testers avoid needless redundancy in test documentation.
Working testers avoid documenting the obvious, unless in the belief that the information may be forgotten under pressure.
Working testers create documentation that is reasonably useful as an aid to testing, or as a product for a reasonably anticipated client.

Considering that domain analysis is a heuristic process that relies on the imagination and technical knowledge of the tester, I expect different working testers to produce analyses that are similar, but vary in level of detail and in occasional particulars.

Critique of Analysis
Part A of the instructions stated "List of variables; indicate input or output, their dimensions and the data type they map to." I see that each student has listed some variables and indicated input or output, and guessed a data type. See Appendix A for my answer to this question, which also answers part D and some of part B of the performance test. My analysis is more detailed than any of the students' answers. Though some of the ideas I provided were included by the students in their equivalence class tables, many of them weren't. This is not in itself a problem. I spent about four hours to arrive at my answer, and I believe it is more elaborate than anyone would routinely expect from a working tester with a couple of years' experience. But it is an example of the kinds of thinking I do expect from a working tester. I have not specifically produced the equivalence class tables called for in the exercise. However, my analysis of the variables provides most of the information required for those tables.

Observations of the students' answers: I don't see anything in the students' answers (except for student 4's) that goes to the question of dimension. I don't see any output variables in the student answers. I don't see any interactions among variables noted. Though this wasn't specifically called for in the instructions, I believe this is an important aspect of the skill of the working tester. Many aspects of products are not laid out for testers directly, but must be inferred from mental models of the product that form in the normal course of learning a product. I don't see any issues or questions noted, or commentary on the sources and limitations of their analyses. In almost every case, the risks provided by the students added no information over and above the description of the corresponding equivalence class. Notice that in my analysis, I identified one list of domain-defining risks for each variable, and no separate list of equivalence classes. This is because in most cases, I felt a separate list would add nothing to the analysis. The click help function was not mentioned in any of their analyses. Click help is functionally identical to an enumerated menu of options. Just because the interface is different does not seem to me a reason to exclude it from our testing. The custom setting applies the dimensions of the printable area of the current printer, but this relationship is not noted in any of the student analyses. This may indicate that the students did not review the online help, nor perform enough of an exploration of this function. Because the students apparently used only empirical observation to perform their analyses, their analyses are basically tautological.

They each determined that the input length limit for the numeric fields is 31, apparently because that is how the software behaves. But a length of 31 is inconsistent with the obvious limits of those functions: clearly PowerPoint is not designed to handle slides that are a billion trillion trillion inches wide or tall. So, it seems more reasonable to question the 31-character limit than to enshrine it permanently in the test cases. The consistency among the answer sheets for each student suggests that the students either collaborated with each other to produce their answers, or that they were instructed in a very specific formula of domain testing. I don't believe domain testing is usefully reduced to a formula. I believe successful working testers show more variation in their work, because of natural variations in the experiences, temperament, exploration, knowledge, and contexts of thinking people. The lack of variation suggests that these factors may not have been present in the thought processes of the students, and therefore that the method they used was not very powerful.

There is a great deal of redundancy in the students' answer tables. I think this may be due to the nature of the equivalence class table format, which I believe they were required to use as a template. It's a typical problem with templates that they may force the practitioner to say in elaborate or stilted ways what could be said more economically in a different format. One of the behaviors I expect to see from testers who work for me is to reformat documents for efficiency, even if that means customizing a template. After all, the job of testing is not to create documents just for their own sake. Some of the risks identified by the students indicate a lack of insight about the nature of the technology under test. For instance, student #1 suggested using characters that are on either side of the ASCII code range for numeric digits. That kind of test made sense in 1985, but it seems unlikely that the programmers of PowerPoint are still writing their own input filter routines based on ASCII code signposts in an IF statement. Modern languages provide more sophisticated libraries and constructs to perform filtering on input. I think a much better test would be to paste in every possible character code. This is fairly easy to do and would trigger many more kinds of faults in the filtering code. The instructions use words like "justify" and "discuss". I saw nothing in the students' answers (except for student #5, who provided a minimal justification for all-pairs variables) that I believe a reasonable person would consider justification or discussion. Regarding the all-pairs tables, I don't know what is meant by "iteration" in the sentence "Show all iterations." When I look at the work I don't see iterations, as such, but apparently a series of tables, each a strict subset of the next. If that's what "iteration" means, then I don't see the point of iterating. There seems to have been no evolution in the thinking behind the tables.

Conclusion
I expect more insight, product knowledge, and imagination from a serious tester who had more than a few months of experience working for a company that cared about doing good testing. So, I would not say that this work is strong evidence that the students have been brought to an equivalent of a tester with 1 or 2 years experience.

APPENDIX A
In this analysis, I used online help and empirical observation of the product. I had no access to a specification or any other authoritative source of information about the domains. I recommend that we get the programmer/designer/product manager to review this. I included domain risks in this outline because it seemed more natural to do that than to create a separate table. Notice that many of the domain-defining risks I have identified are vague in some way (e.g. no identification of the specific nature of a potential failure). They are vague because I don't have detailed technical information or experience with this product that would allow me to be any more specific. This document is crudely formatted because I find that more elaborate formatting lowers the probability that I will update it as new information emerges. Normally, I would maintain this in a text editor with no formatting at all beyond indents and line breaks. "???" denotes an issue or question to be investigated further.

Input Variables
Slides Sized For
Type: Enumerated Default: On-screen Show Values: [??? These are the values observed in the product; not corroborated by information in any spec or help file.] (presets) On-screen Show [??? Is there a functional difference between this and Letter?] Letter Paper (8.5x11 in) Ledger Paper (11x17 in)

A3 Paper (297x420 mm) [greatest area & greatest width & greatest height] A4 Paper (210x297 mm) B4 (ISO) Paper (250x353 mm) B5 (ISO) Paper (176x250 mm) [least width] 35mm Slides Overhead Banner [least area & least height] (custom set) Custom Domain-Defining Risks: Custom-set and preset dimensions may fail in different ways or under different conditions. Preset size with the least/greatest height, least/greatest width, or least/greatest area may fail in different ways or under different conditions. A different size may be used than the one set. Size used for print may not match size used for display. A particular size may cause crash or data loss. A particular size may cause a specific related function to fail (such as text display on a slide). A particular size may not be available on the current printer. A particular size may not properly set Width and Height fields.

Width
Type: Floating Point Numeric Data Entry w/Incrementer Buttons Units: inches [??? is this configurable?] Dimensions: [??? except for max value, these dimensions are observed but not corroborated by any spec or help file] allowable characters: 0-9 and decimal point min characters: 0 max characters: 2 [??? the product appears to accept 31, not 2. But 31 character length is inconsistent with a maximum value of 56] apparent max characters: 31 [although 31 seems like an incorrect limit, it is still empirically a limit] max value: 56 min value: 1

resolution: incrementer buttons: .1 direct input: .01 [??? Are hundredths actually processed?] Domain-Defining Risks: The incrementer may decrement or increment past min/max boundaries. The incrementer may decrement or increment improperly when user enters an incorrect or out of range value. The incrementer may fail after more than a particular number of increments. More than 2 characters may be accepted. Less than or equal to 2 characters and more than 0 characters may be rejected. More than 31 characters may trigger some kind of failure. A tenth of an inch resolution may not be properly rendered. A hundredth of an inch resolution may not be properly rendered. Illegal characters may be typable or pastable.
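The dimensions listed above for Width suggest a mechanical boundary-value selection. The following sketch is mine, not part of Bach's outline; it simply encodes the observed min value (1), max value (56), and finest direct-input resolution (.01) as constants, so the constants are assumptions taken from the dimensions above.

```python
# Hypothetical boundary-value selection for the Width field, based on the
# dimensions in the outline above: min value 1, max value 56, direct-input
# resolution .01. Function and constant names are mine.
WIDTH_MIN, WIDTH_MAX = 1.0, 56.0
STEP = 0.01  # finest direct-input resolution noted above

def width_boundary_values():
    """Classic two-sided boundary values: one value just outside, at,
    and just inside each limit of the Width domain."""
    return [
        round(WIDTH_MIN - STEP, 2),  # just below min: expect rejection
        WIDTH_MIN,                   # at min: expect acceptance
        round(WIDTH_MIN + STEP, 2),  # just above min: expect acceptance
        round(WIDTH_MAX - STEP, 2),  # just below max: expect acceptance
        WIDTH_MAX,                   # at max: expect acceptance
        round(WIDTH_MAX + STEP, 2),  # just above max: expect rejection
    ]
# width_boundary_values() -> [0.99, 1.0, 1.01, 55.99, 56.0, 56.01]
```

Note that this only mechanizes the observed limits; as discussed in the critique, observed limits that lack corroboration (like the 31-character length) deserve to be questioned rather than encoded this way.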

Height
[appears to have same exact dynamics as Width]

Slides Numbered From


Type: Integer Numeric Data Entry w/Incrementer Buttons Default: 1 Units: N/A Dimensions: allowable characters: 0-9 min characters: 0 max characters: 4 [??? the product accepts 31, not 4. But 31 character length is inconsistent with a maximum value of 9999] apparent max characters: 31 [although 31 seems like an incorrect limit, it is still empirically a limit] max value: 9999

min value: 0 [??? the product accepts a minimum of 1, but some page numbering schemes start with zero, so this seems like a bug] Domain-Defining Risks: The incrementer may decrement or increment past min/max boundaries. The incrementer may decrement or increment improperly when user enters an incorrect or out of range value. The incrementer may fail after more than a particular number of increments. More than 4 characters may be accepted. Less than or equal to 4 characters and more than 0 characters may be rejected. More than 31 characters may trigger some kind of failure. Illegal characters may be typable or pastable.

Slide Orientation
Type: Enumerated Default: Landscape Values: Landscape Portrait Domain-Defining Risks: A different orientation may be used than the one set. Orientation used for print may not match orientation used for display.

Notes, Handouts & Outline Orientation


Type: Enumerated Default: Portrait Values: Landscape Portrait Domain-Defining Risks:

A different orientation may be used than the one set. Orientation used for print may not match orientation used for display.

Click-Help/Widget [this displays help hints for each widget on the dialog box]
Type: Enumerated Default: N/A Domain-Defining Risks: The help tip displayed may not correspond to the widget selected.

Output Variables:
[The following output variables are so directly representative of inputs that there seems to be no need to analyze them further] Actual Printed/Displayed Slide Size Actual Printed/Displayed Page Orientation Actual Printed/Displayed Page Numbers Slide Orientation Icons Print Preview Display

Dimensions and Display of Text and Objects on a Slide


[This output variable is related in part to page setup and in part to many inputs that have nothing to do with page setup.] Type: Various elements of height, width, aspect ratio and font size. Values: Varies with object type. Domain-Defining Risks: Objects may shrink or be made unrecognizable when page dimensions are changed. Objects may not be rendered correctly in conjunction with extreme page dimensions.

Printable Area
Type: Numeric width and height Values: Varies with printer brand/model and settings. Domain-Defining Risks: Printing may fail when a large preset size is printed to a smaller printable area. Printing may fail when a small preset size is printed to a very large printable area. An extreme-sized print area may not be represented properly when "Custom" is selected. A printable area with a height or width greater than 56 may cause failure when "Custom" is selected. The printable area of a standard printer may not be properly represented when "Custom" is selected. The printable area of a photo printer may not be properly represented when "Custom" is selected. The printable area of a plotter (or other large printer) may not be represented when "Custom" is selected.

Interactions:
Height/width ratio (aspect ratio) vs. Dimensions and Display of Text and Objects on a Slide
Dimensions: min aspect ratio: 1/56 max aspect ratio: 56 sizes/positions of objects: varies with object type Domain-Defining Risks: An extreme aspect ratio may cause display/print problems.

Aspect ratio vs. portrait/landscape


Domain-Defining Risks: Switching between portrait and landscape may change the aspect ratio of displayed/printed objects. Switching between portrait and landscape may fail to switch width and height parameters. At extreme aspect ratios, switching between portrait and landscape may cause display/print problems.

Slides orientation vs. notes orientation


Values: orientations match or do not match Domain-Defining Risks: Non-matching orientations may lead to display/print problems on one or the other.

Slides numbered from vs. number of slides to print


Values: 1 to some value greater than 9999 [??? precise upper limit not known; probably irrelevant] Domain-Defining Risks: Printing/display may fail if (numbered from) + (slides to print) is LESS THAN OR EQUAL TO 9999. Printing/display may fail if (numbered from) + (slides to print) is GREATER THAN 9999.

State-based interactions:
A particular page setup setting may fail when preceded by a certain other setting. Example: 1. place text objects on a slide. 2. set height=56, width=1, press ok. 3. change to width=56, press ok. 4. change to height=5, width=5, press ok. 5. text size has been reset to super tiny. 6. Repeat steps 2-4. 7. text size has been reset to impossibly tiny (font size = 0). Domain-Defining Risks: Preset may fail when following custom. Custom may fail when following preset. A particular preset may fail when following a particular other preset. Extreme dimensions may fail when following other extremes.

Domains described above may also have problems related to interactions with any of these functions:
field editing functions in dialog box: home, end, text insertion, text deletion, copy/cut/paste, tab
dialog buttons: ok, cancel
print: slides, notes, handouts
save, save as..., save as web page
display: normal, slide show, slide sorter view, master view editing
current printer
document formats
animation
live links and buttons

localized versions of Windows
alternative alphabets
double-byte character sets
alternate measurement systems

Appendix Z: Performance Tests Evaluation Report by Pat McGee

Report on experiment on teaching specific testing techniques.


Pat McGee 2 January 2004
Subjects

There were 18 subjects in the first group. To date, I have only received files for 17 of them. The files for the 18th subject seem to have gone missing. There were five subjects in the second group.
Identification of variables

Of those 17 subjects in the first group, 12 attempted to answer the dimension question, 12 attempted to answer the type question, and 9 attempted to answer the direction question. Of the second group, all five attempted to answer all three of those questions. Three of them answered the dimensions question in part b, but not in part a. On the dimension and type questions, it looked to me like the subjects exhibited poor test-taking behavior. The subjects that answered these questions mostly answered them correctly. I don't see any reason to believe that these subjects were taught poorly. On the direction question, just over half the subjects in the first group attempted to answer it at all. Of those, all but two subjects identified all the fields as inputs. Two of the subjects identified some but not all of the input/output fields as outputs. Of those two, one was in each of the first two sessions. None identified any fields as input/output. In the second group, all five identified all the variables as input only. From this, I conclude that this was not taught well. It seems to have been taught somewhat better in the first two sessions than in the three after that.
Equivalence class and boundary value analysis

I believe that the problem presented was somewhat different from the simple cases presented in the training. The problem presented had four interrelated controls, which basically represented a 2-D space with special points.

McGeeReport v1.doc 03 January 2004 09:01

The material presented in training was mostly about 1-D spaces. So, in many respects, this problem was a good test in that it required performance at level 6 of Bloom's taxonomy. Unfortunately, none of the subjects made this conceptual leap. They tried to treat the problem as if it were two independent 1-D spaces. This led them to propose tests that I thought were not very powerful. They did well on the outside boundaries of each dimension, since these could be tested independently. They did well on the enumerated variables, correctly identifying each value as something they needed to test. Testing the numeric inputs showed some conceptual problems. Almost all of the students tested the inputs for field lengths of 5 and 32. Nowhere in the documentation that I saw was either one claimed as a limit. The program seems to enforce 5 as a limit, but whether this is according to any specification or not, I can't tell. I think this was an invented limit because they couldn't deal with the concept of an unknowable limit. When dealing with experienced testers, I think it is a good thing to write test cases as specifications of constraints that the test cases must fit. I expect experienced testers to be able to generate good test cases from specifications. However, when dealing with inexperienced testers, I don't have that same expectation. Until I see some test cases that they generate from a specification, I don't know whether a specific person understands enough about what makes a test case good to generate good ones. Many of the subjects wrote more generic specifications for test cases instead of generating specific values. So, without more data, I can't tell whether they did this knowing how to translate that into good test cases, or whether they did it because they didn't know how to write good test cases and could figure out some specifications. Only one (1) subject wrote test cases with almost all test cases having specific values.
Six (6) wrote them with most values specific, while fifteen (15) wrote them with few values specific. For the seven who wrote at least mostly test cases with specific values instead of specifications, the test cases were good. For the fifteen who didn't, I have no way of knowing whether they could actually generate good test cases for those specifications.


All-pairs analysis

The results of the all-pairs analysis of the first subjects showed that they did not understand this very well at all. Sowmya rewrote the training materials for this and presented it to several new subjects. The analysis of the all-pairs question for these five subjects follows. There are several parts to the problem of generating an all-pairs table. First, identify a reasonable set of variables to test. Second, correctly perform the mechanics of generating the all-pairs table. I believe that a good all-pairs analysis for this problem would have four variables and require two distinct all-pairs tables. This is because several of the variables are mutually interdependent and cannot be tested separately. Splitting these four mutually interdependent variables into two sets gives the two tables. One set could use the width and height; the other set could use the Sized and the Orientation: Slides. These would be combined with the two independent variables: Orientation: Notes, and Number of Slides. None of the subjects analyzed the problem this way. All analyzed it as if several interdependent variables were independent, even though this is false to fact. Two subjects made other errors in the analysis. One didn't test custom sizes. Another had only three independent variables, not four. Given the analysis that they did, all of the subjects generated all-pairs tables that were completely or mostly correct. One subject got the first three iterations correct, then completely messed up the fourth iteration. Another generated a table that was completely wrong, but the error was probably a bad copy-and-paste for the first variable. (For that variable, all values listed were the same.) Making a very reasonable correction to this answer would lead to it being completely correct.
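The mechanics of generating an all-pairs table, as distinct from choosing the right variable sets, can be sketched with a simple greedy pair-covering algorithm. This sketch is mine, not from the report, and the parameter names and values at the bottom are illustrative rather than the ones from the performance test. The greedy strategy is one common way to build such tables; it does not guarantee a minimal table, only complete pair coverage.

```python
from itertools import combinations, product

def allpairs(params):
    """Greedy all-pairs generation: repeatedly pick the full-product row
    covering the most not-yet-covered value pairs. Not guaranteed minimal,
    but every pair of values from any two parameters appears in some row."""
    names = list(params)
    idx_pairs = list(combinations(range(len(names)), 2))
    # Every (parameter i value, parameter j value) combination must appear.
    uncovered = {(i, va, j, vb)
                 for i, j in idx_pairs
                 for va in params[names[i]]
                 for vb in params[names[j]]}
    candidates = list(product(*(params[n] for n in names)))
    rows = []
    while uncovered:
        best = max(candidates,
                   key=lambda row: sum((i, row[i], j, row[j]) in uncovered
                                       for i, j in idx_pairs))
        rows.append(best)
        for i, j in idx_pairs:
            uncovered.discard((i, best[i], j, best[j]))
    return names, rows

# Illustrative parameters only; not the ones from the performance test.
names, rows = allpairs({
    "Slides Sized For": ["Letter", "A4", "Custom"],
    "Slide Orientation": ["Portrait", "Landscape"],
    "Notes Orientation": ["Portrait", "Landscape"],
})
# The full product would be 12 rows; all-pairs coverage needs fewer.
```

Note that this only automates the table-building step. The analytical step the subjects missed, recognizing which variables are mutually interdependent and therefore belong in the same table, has to happen before any such algorithm is applied.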
Dependencies

Of the five subjects in the second group, all answered this by detailing the dependencies, but not how to test them. This could be poor test-taking skills, but I think it is more likely that the subjects didn't know how to answer the question of testing the dependencies.


Question e

All the subjects in the second group answered this question, but I did not have the question, so I cannot tell whether the answers were correct or not.
General impression

Overall, I did not get the impression that any of these subjects understood the material well enough to apply it to a new situation. I believe that they mostly could apply these techniques to situations that were very similar to the examples they had been trained on. In terms of Bloom's Taxonomy, I believe they learned the material mostly at level 3 (Application).

