
Groble, D. (2002). Prepared for IAASE (Draft Copy).

ASSESSMENT: OBSERVATION
DESCRIPTION OF OBSERVATION: The reauthorization of the Individuals with Disabilities Education Act (IDEA, 1997), and recommended best practice, have shifted assessment and evaluation from traditional toward more ecological and functional approaches. In fact, IDEA advocates the use of systematic direct observational procedures to gather relevant functional and developmental information about student behavior and performance patterns. The problem-solving process itself depends on the collection and analysis of reliable, accurate, and socially meaningful data to influence not only decision-making but the effectiveness of the intervention itself. According to Hintze, Volpe, and Shapiro (2002), direct observation is one of the assessment procedures most widely used by school psychologists.

Different methods of observation can have different goals and, therefore, take on different characteristics. For instance, direct observation can be either naturalistic or systematic. Whereas the naturalistic approach refers to observing a student without pre-determined behaviors in mind, systematic direct observation: (a) has the goal of measuring specific behaviors that have been operationally defined; (b) is conducted under standardized procedures; (c) is highly objective in nature; and (d) yields data that do not vary from one observer to another (Salvia & Ysseldyke, 2001). While naturalistic observation is advantageous in terms of its social validity and its usefulness in establishing relationships among antecedents, behavior, and consequences, the primarily descriptive data it yields are of limited assistance to decision-making. Of more utility are observational procedures that yield quantifiable data.

The first step in systematically measuring and recording behavior is to define the behavior of interest explicitly. Hawkins and Dobes (1977) recommend that these definitions be objective, unambiguous, and complete. Once the behavior of interest has been defined, data may be collected using any one of a variety of procedures. Some common data collection methods follow (Hintze et al., 2002):

(a) Frequency or event recording: Involves counting and recording the number of occurrences of a behavior during a specific time period. This method is most useful when observing behaviors that have a discrete beginning and ending and occur at a relatively low rate.

(b) Duration recording: Involves recording how long the behavior persists. This method is most useful when the target of the intervention is to change the duration of a behavior.

(c) Latency recording: Involves measuring the elapsed time between the onset of a stimulus and the initiation of a behavior. This method is most useful when latency is the intended target of intervention.
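To make these three recording methods concrete, the following minimal Python sketch (purely illustrative; the behaviors, clock times, and helper function are hypothetical and are not drawn from Hintze et al.) shows how raw observation notes might be reduced to the kind of quantifiable summaries (a count, a total duration, a latency) that the problem-solving process depends on.

from datetime import datetime

# Hypothetical observation logs; timestamps and behavior names are illustrative only.

# (a) Frequency/event recording: count discrete occurrences in an observation period.
call_outs = ["09:02", "09:05", "09:05", "09:11"]           # each entry = one occurrence
frequency = len(call_outs)                                  # 4 call-outs this period

# (b) Duration recording: total time the behavior persists.
#     Each tuple is (start, stop) of one out-of-seat episode, in "HH:MM" clock time.
out_of_seat = [("09:03", "09:06"), ("09:12", "09:14")]

def minutes_between(start, stop):
    fmt = "%H:%M"
    return (datetime.strptime(stop, fmt) - datetime.strptime(start, fmt)).seconds / 60

total_duration = sum(minutes_between(s, e) for s, e in out_of_seat)   # 5.0 minutes

# (c) Latency recording: elapsed time from the stimulus (e.g., a teacher direction)
#     to the initiation of the behavior (the student begins the task).
direction_given = "09:20"
task_started = "09:23"
latency = minutes_between(direction_given, task_started)              # 3.0 minutes

print(f"Frequency: {frequency} events")
print(f"Total duration: {total_duration:.1f} min")
print(f"Latency: {latency:.1f} min")

In practice such summaries would typically be converted to a rate (for example, events per minute) so that observation sessions of different lengths can be compared.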

RESEARCH SUPPORT: Observational Instruments

Play-based Observation

Farmer-Dougan, V., & Kaszuba, T. (1999). Reliability and validity of play-based observations: Relationship between the PLAY behavior observation system and standardized measures of cognitive and social skills. Educational Psychology, 19, 429-440.
Examined the relationships between scores obtained on a classroom-based play observation system and standardized measures of cognitive and social competence with preschool children. Findings suggest that, when operationally defined play observation methods were used, observers were able to accurately record the level of play exhibited by each child, and these play behaviors did reflect the child's current cognitive and social development.

Attention Deficit Hyperactivity Disorder School Observation Code (ADHDSOC)

Gadow, K.D., Sprafkin, J., & Nolan, E.E. (1996). ADHD School Observation Code. Stony Brook, NY: Checkmate Plus.
This code was developed as a screening and intervention evaluation tool for students. It can be used in several different types of settings, allows for peer comparison, and includes targeted behaviors such as noncompliance, verbal aggression, and off-task behavior. This code has been found to be valid and reliable (see Gadow, Sprafkin, & Nolan, 1996).

Behavior Observation of Students in Schools (BOSS)

Shapiro, E.S. (1996). Academic skills problems workbook. New York: Guilford.
This code was developed to assess student on- and off-task behavior in the classroom. The BOSS classifies behaviors in terms of: (a) active engagement; (b) passive engagement; (c) off-task motor; (d) off-task verbal; and (e) off-task passive. In addition, this code includes a measure of teacher direct instruction. The BOSS is administered in 15-second intervals over approximately a 15-minute observation period. An advantage of this system is that it allows for the collection of peer behavioral norms. This code has been found to be both valid and reliable.

How to Perform Direct Observation

Alessi, G. (1988). Direct observation methods for emotional/behavioral problems. In E.S. Shapiro & T.R. Kratochwill (Eds.), Behavioral assessment in the schools: Conceptual foundations and practical applications (pp. 14-75). New York: Guilford.
Discusses the basics of direct observational methods with an emphasis on determining the functional antecedents, the behavior itself, and the functional consequences of behavior. Strategies for collecting observation data, as well as methods of data recording, are described along with practical implications of this type of assessment methodology.

Barnett, D., & Carey, K. (1992). Principles and techniques for observations. In Designing interventions for preschool learning and behavior problems. San Francisco, CA: Jossey-Bass.
This chapter reviews issues to consider when conducting observations, including: (a) what to observe, (b) where to observe, (c) who will observe, (d) when to observe, and (e) how to observe. Observation and recording methods are also discussed, with consideration of the advantages and disadvantages of each.

Hintze, J.M., & Shapiro, E.S. (1997). Best practices in the systematic observation of classroom behavior. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology-Third Edition. Washington, DC: National Association of School Psychologists.
Presents the use of systematic direct observation as a method of assessment that offers direct linkages to intervention development. Best practices in the use of systematic observation of behavior include: (a) selecting the target behavior, (b) measuring and recording the behavior, and (c) reporting the data collected.

Lentz, F.E., & Shapiro, E.S. (1986). Functional assessment of the academic environment. School Psychology Review, 15, 336-345.
Describes the importance of assessing the instructional environment to assist in intervention development. Specific examples are provided for conducting such assessments.

Sattler, J.M. (1992). Assessment of behavior by observational methods. In J.M. Sattler (Ed.), Assessment of Children (3rd ed.). San Diego, CA: J.M. Sattler Publisher, Inc.
This chapter offers several examples of how to perform an observational assessment. Recording methods and coding systems are detailed with explicit information on each method. Special attention is given to possible difficulties in carrying out behavioral observation, with suggestions for reducing error. Suggestions are also given for reporting the results of behavioral assessments.

Practical Implications of Direct Observation

Hay, L.R., Nelson, R.O., & Hay, W. (1977). The use of teachers as behavioral observers. Journal of Applied Behavior Analysis, 10, 345-348.
Examined the effect of using teachers as behavioral observers on student and teacher behavior. Results suggest that observation had effects on the behaviors of both teachers and students. Teachers who were observed used more praise than those who were not being observed. Likewise, students who were observed were reported to show more change in appropriate behavior than students who were not observed. Observations by participant observers, as well as independent observers, may effect changes in the behavior of the individuals being observed.

Milich, R., & Landau, S. (1988). Teacher ratings of inattention/overactivity and aggression: Cross-validation with classroom observations. Journal of Clinical Child Psychology, 17, 92-97.
Examined the use of teachers as a principal source of information regarding the behaviorally disturbed child. Results suggest that teachers' ratings on observation scales demonstrated their sensitivity and ability to distinguish between hyperactivity and aggression.

Symons, F.J., McDonald, L.M., & Wehby, J.H. (1998). Functional assessment and teacher collected data. Education and Treatment of Children, 21, 135-159.
Described two case studies in which teacher-collected observational data facilitated the functional assessment process. Teachers collected observational data and plotted results for examination during team meetings. Interventions were developed and evaluated using similar observational techniques. Results suggested that, as a result of collecting observational data, teachers were more aware of the situations in which behaviors of concern were occurring.

Meta-analysis/Reliability-Validity of Direct Observation

Platzman, K.A., Stoy, M.R., Brown, R.T., Coles, C.D., Smith, I.E., & Falek, A. (1992). Review of observational methods in attention deficit hyperactivity disorder (ADHD): Implications for diagnosis. School Psychology Quarterly, 7, 155-177.
Reviews 39 empirical studies in which direct observational methods were used to assess children diagnosed with ADHD. Findings support the validity of classroom observations and teacher reports in identifying children with ADHD.

Ecological Assessment

Evans, W.H., & Evans, S.S. (1990). Ecological assessment guidelines. Diagnostique, 16, 49-51.
Offers guidelines for effective ecological assessment in list form, targeting specific diagnostic questions, and provides a checklist of variables in the child's environment that may be contributing to behavior problems. Considers physical, psychosocial, and physiological factors in analyzing the targeted concern for the child.

Leslie, L., & Jett-Simpson, M. (1999). Authentic literacy assessment: An ecological approach.
This book describes the process of implementing ecological assessment in classrooms. Chapters include instruction and information on alternative assessment, with an emphasis on change within the classroom. This is a good resource for teachers on how to create a classroom environment that is conducive to implementing ecological assessment successfully.

Welch, M. (1994). Ecological assessment: A collaborative approach to planning instructional interventions. Intervention in School and Clinic, 29, 160-184.
The advantages of an ecological approach to developing and implementing assessment are reviewed. Guidelines are offered for implementing ecological assessment by specialists in regular classrooms and during prereferral intervention.

REFERENCES AND RESOURCES:

Hawkins, R.P., & Dobes, R.W. (1977). Behavioral definitions in applied behavior analysis: Explicit or implicit? In B.C. Etzel, J.M. LeBlanc, & D.M. Baer (Eds.), New developments in behavioral research: Theory, method, and application (pp. 167-188). Hillsdale, NJ: Erlbaum.

Hintze, J.M., Volpe, R.J., & Shapiro, E.S. (2002). Best practices in the systematic direct observation of student behavior. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology-Fourth Edition. Washington, DC: National Association of School Psychologists.

Individuals with Disabilities Education Act: Amendments of 1997 (PL 105-17). USC Chapter 33, Sections 1400 et seq.

Salvia, J., & Ysseldyke, J.E. (2001). Assessment (8th ed.). Princeton, NJ: Houghton Mifflin.


ASSESSMENT: INTERVIEW
DESCRIPTION OF THE INTERVIEW: Interviewing is an assessment method from which in-depth, reliable information can be gathered to facilitate consultation, counseling, problem-solving, systems change, and intervention or program evaluation. There are different approaches to the use of interviews as an assessment tool. For example, interviews can be structured or unstructured. Structured interviews consist of a specific, pre-determined list of questions that are asked in a highly standardized manner; the interviewer does not deviate from the interview format and remains objective throughout the session. Unstructured interviews, on the other hand, are far less standardized, with questions depending largely on the interviewee's responses to previous questions.

Lentz and Wehmann (1995) outline several preconditions for using the interview as an assessment procedure. First, the interview must be chosen because it is thought to be the best way to gather the information needed. Second, there must exist, within the system, a groundwork of understanding about the interview process. Finally, there must exist, between the interviewer and interviewee, a relationship based upon mutual trust, understanding, and collaboration.

Best practices for conducting interviews include the following (Lentz & Wehmann, 1995):
Interviews should be ecological in nature and focus on the beliefs and perceptions of the interviewee.
Explicit outcomes should be generated prior to conducting an interview.
Interviews must be focused and sensitive to time constraints.
Structured questions and responses should be used to achieve the objectives of the interview.
The interviewer must be an active listener, paying attention to both verbal and non-verbal communication.
There should be a clear plan linking the interview to the next course of action.
The interviewer should understand how interview data can threaten the validity of the problem-solving process.

As with all assessment procedures, validity and reliability are significant issues for consideration. Validity for interviews relates to how well interview content reflects accurate information (Shapiro, 1986). One way to support validity is to interview multiple sources and examine the correspondence between their responses. Likewise, the interviewer must make a decision regarding the accuracy of the information received and the need for further clarification. Some data are available on what content is appropriate to include in interviews for specific problems, and research has shown that interventions based on information from interviews can be successful. Although research in this area remains limited, guidelines are available to increase the likelihood of eliciting valid interview data and making valid decisions. These suggestions are as follows (Lentz & Wehmann, 1995):
Use multiple sources of interview information.
Collect confirmatory information to assess the accuracy of responses.
Examine the validity of treatment decisions.
Develop and practice interviewing skills.

RESEARCH SUPPORT AND REFERENCES:

Bergan, J., & Kratochwill, T. (1990). Behavioral consultation and therapy. New York: Plenum.
A guide to behavioral consultation that reviews the purpose of interviewing, research on consultation, and how to conduct consultative interviewing.

Hughes, J. (1990). The clinical child interview. New York: Guilford Press.
Reviews aspects of interviewing children and provides guidelines regarding developmental issues with this population.

Lentz, F.E., & Wehmann, B.A. (1995). Best practices in interviewing. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology-Third Edition. Washington, DC: National Association of School Psychologists.

Marks, E.S. (1995). Entry strategies for school consultation. New York: Guilford Press.
Provides suggestions and examples of interviews for use specifically in consultation with teachers.

Sattler, J.M. (1998). Clinical and forensic interviewing of children and families: Guidelines for the mental health, education, pediatric, and child maltreatment fields. San Diego, CA: Jerome M. Sattler Publisher, Inc.
A very comprehensive resource of structured interviews for use with children and families, organized by specific problems.

Shapiro, E.S., & Kratochwill, T. (Eds.). (1988). Behavioral assessment in schools. New York: Guilford Press.
Reviews and provides examples of interviews used for the assessment of school-related problems.


ASSESSMENT: PERFORMANCE-BASED
DESCRIPTION OF PERFORMANCE-BASED ASSESSMENT: Authentic assessment describes an approach to assessment in which students are realistically and actively involved in the evaluation process. These assessments are performance-based and provide a qualitative method of educational assessment that is particularly useful for outcome-based accountability systems. While this approach contrasts with traditional quantitative assessment methods, performance-based assessment can yield data that are just as valid and useful. The data offer information about how student achievement can be improved and pinpoint the areas in which the student struggles.

Performance-based assessment links learning experiences with instruction and assessment. This is its main advantage: improved student outcomes based on continual assessment and subsequently improved instruction. Additional advantages include engaging students in active learning, assessment that is driven by the curriculum, and assessment tasks that are inherently worthwhile (Sweet, 1989).

Performance-based assessment is not a single event but an ongoing, dynamic process. According to Wiggins (1993), its goals include: (a) gathering data on students that focus on growth over time, rather than comparing them to one another; and (b) focusing on what students know rather than on what they do not know. Performance-based assessment has, in fact, been applied in several curriculum areas (language arts, mathematics, science) and with many different types of learners (early childhood, second-language, special needs).

Performance-based assessment has three main features: (a) students construct, rather than select, responses; (b) assessment formats allow teachers to observe student behavior on tasks reflecting real-world requirements; and (c) scoring reveals patterns in students' learning and thinking (Fuchs, 1995). This authentic type of assessment can, however, take different forms. For example, product assessment might involve the collection of finished work in a portfolio, performance assessment might involve oral presentations or debates, and process assessment would focus more on students' learning and thinking strategies in the moment (McTighe & Ferrara, 1996).

In line with this flexibility of method, there are also several different approaches to evaluation. McTighe and Ferrara (1996) present the primary evaluation methods used with performance-based assessments:

Scoring Rubrics: generic scoring tools used to evaluate product or performance quality that consist of a fixed measurement scale and a list of criteria describing the characteristics for each point (a minimal illustrative sketch appears after the Brualdi (1998) entry below).

Task-Specific Scoring Guides: for use with specific assessment activities; contain a fixed scale and descriptive criteria.

Rating Scales and Checklists: easy-to-use scoring tools for open-ended response tasks that generally do not provide detailed, explicit criteria for evaluation.

Written and Oral Comments: methods of evaluation that provide for teacher-student communication and direct feedback.

RESEARCH SUPPORT AND RESOURCES:

The Case for Performance-Based Assessment

Fuchs, L.S. (1995). Connecting performance assessment to instruction: A comparison of behavioral assessment, mastery learning, curriculum-based measurement, and performance assessment. ERIC Digest E530. Available at: www.ed.gov/databases/ERIC-Digests.html.

Popham, J.W. (1999). Why standardized test scores don't measure educational quality. Educational Leadership, 56(6), 8-15.
Provides a rationale for performance-based assessment. Argues that standardized tests are prone to problems and errors such as testing-teaching mismatches, omitted items, and confounded causation. Contends that the factors that influence students' scores on standardized tests are what is taught in school, native intellectual ability, and out-of-school learning.

Wiggins, G. (1990). The case for authentic assessment. ERIC Clearinghouse on Tests, Measurement, and Evaluation, Washington, DC. Available at: http://proxy.lib.ilstu.edu:2054/ovidweb.cgi (Accession number ED328611).
Contrasts performance-based assessment with traditional standardized assessment. Standardized tests rely on indirect items, whereas authentic assessment directly examines student performance on intellectual tasks. Supports the use of authentic assessment and contends that a move toward more authentic tasks and outcomes improves teaching and learning: student clarity and engagement will increase, and teachers will be able to use assessment results to improve their instruction.

How to do Performance-Based Assessment

Brualdi, A. (1998). Implementing performance assessment in the classroom. ERIC Clearinghouse on Assessment and Evaluation, Washington, DC. Available at: http://proxy.lib.ilstu.edu:2054/ovidweb.cgi (Accession number ED423312).
Outlines the basic steps involved in planning and executing performance-based assessment in the classroom. Steps discussed include: (a) defining the purpose of assessment, (b) choosing the activity, (c) defining the performance criteria, (d) creating performance rubrics, and (e) assessing performance. Suggestions for implementing these steps are discussed and tailored toward the classroom teacher.
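As a concrete illustration of McTighe and Ferrara's description of a scoring rubric (a fixed measurement scale plus descriptive criteria for each point), and of Brualdi's step of creating performance rubrics, the minimal Python sketch below shows one way such a rubric could be represented and applied. It is a hypothetical example only; the criteria, descriptors, and scores are invented and are not taken from either source.

# A minimal analytic scoring rubric: a fixed 1-4 scale with descriptive criteria
# for each point, applied across several dimensions of a performance task.
# All criteria, descriptors, and scores below are hypothetical examples.

RUBRIC = {
    "organization": {
        1: "Ideas are presented without a discernible structure.",
        2: "Some structure is present but transitions are unclear.",
        3: "Clear structure with occasional lapses in transitions.",
        4: "Logical, well-signposted structure throughout.",
    },
    "use_of_evidence": {
        1: "Claims are unsupported.",
        2: "Some claims are supported with relevant evidence.",
        3: "Most claims are supported with relevant evidence.",
        4: "All claims are supported with well-chosen evidence.",
    },
}

def score_performance(ratings):
    """Check each rating against the rubric's fixed scale and return a profile,
    not just a single number, so patterns in the student's work stay visible."""
    profile = {}
    for criterion, score in ratings.items():
        scale = RUBRIC[criterion]
        if score not in scale:
            raise ValueError(f"{score} is not a point on the scale for {criterion}")
        profile[criterion] = (score, scale[score])
    return profile

# Example: a teacher's ratings for one student's oral presentation (hypothetical).
for criterion, (score, descriptor) in score_performance(
        {"organization": 3, "use_of_evidence": 2}).items():
    print(f"{criterion}: {score} - {descriptor}")

Keeping the descriptor alongside the numerical score preserves the qualitative information that makes performance-based scoring useful for improving instruction, rather than reducing the performance to a single total.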

Lam, T.C. (1995). Fairness in performance assessment. ERIC Clearinghouse on Counseling and Student Services, Greensboro, NC. Available at: http://proxy.lib.ilstu.edu:2054/ovidweb.cgi (Accession number ED391982).
Reviews the realities and difficulties of maintaining fairness in performance-based assessment. Argues that, while assuring equality through standardization enables comparisons of student performance and simplifies administration, it sacrifices task meaningfulness and makes bias difficult to avoid. On the other hand, assuring equity effectively reduces bias and enables rich, meaningful assessment, but it introduces difficulty in administration and in comparing student performance. Concludes that there is little research devoted to examining and promoting fairness in performance assessment; nevertheless, performance-based assessment can be useful and unbiased when these issues are addressed.

McTighe, J., & Ferrara, S. (1996). Assessing learning in the classroom. Washington, DC: National Education Association.
Offers a good introduction to the rationale behind alternative assessment, strategies for performing authentic assessment, and practical implications.

Stiggins, R.J. (2000). Student-involved classroom assessment (3rd ed.). Prentice Hall College Division.
A handy guide for teachers that focuses on the paradigm shift in assessment toward greater student accountability. Provides a detailed explanation and rationale for performance-based assessment as well as instruction on how to construct assessments. Included is information on the reliability and validity of this assessment method.

Roeber, E.D. (1996). Guidelines for the development and management of performance assessments. ERIC Clearinghouse on Assessment and Evaluation, Washington, DC. Available at: http://proxy.lib.ilstu.edu:2054/ovidweb.cgi (Accession number ED410299).
Offers guidelines to district and state policy makers regarding the management of the development, administration, and use of performance-based assessments. Suggests that preassessment activities must take place before assessment development can occur. Steps in preassessment include: (a) development of the assessment framework, (b) creation of the assessment plan, (c) determination of assessment resources, and (d) production of an assessment blueprint. Suggests that administrators must be well trained and the scoring process made clear in order for implementation to run smoothly.

Sweet, D. (1989). Performance assessment. Available at: www.ed.gov/pubs/OR/ConsumerGuides/perfasse.html.
Reviews performance assessment basics and provides an extensive list of successful strategies and programs along with contact information.

Performance-Based Assessment with Special Populations

Grace, C. (1992). The portfolio and its use: Developmentally appropriate assessment of young children. ERIC Clearinghouse on Elementary and Early Childhood Education, Urbana-Champaign, IL. Available at: ericece@uiuc.edu.
Addresses issues of using performance-based assessment with younger students and provides support for this type of assessment with this population.

McLaughlin, M.J., & Warren, S.H. (1995). Using performance assessment in outcomes-based accountability systems. ERIC Clearinghouse on Disabilities and Gifted Education. Available at: http://www.ed.gov/databases/ERIC-Digests/ed381987.html.
Addresses specifically the issues surrounding the use of performance-based assessment with students with disabilities. Provides examples and listings of states that have adopted performance assessments as part of their outcomes-based systems.

Tannenbaum, J.E. (1996). Practical ideas on alternative assessment for ESL students. ERIC Clearinghouse on Language and Linguistics, Washington, DC. Available at: http://www.ed.gov/databases/ERIC-Digests/ed395500.
Gives many suggestions and strategies for using performance-based assessment with non-English-speaking, non-verbal, and bilingual students, with in-depth descriptions of many different approaches to performance assessment.

Reliability and Validity

Elliot, S.N. (1995). Creating meaningful performance assessments. ERIC Clearinghouse on Disabilities and Gifted Education, Reston, VA. Available at: http://www.ed.gov/databases/ERIC-Digests/ed381985.html.
Offers guidelines and addresses the issues of reliability and validity in performance assessments. Specifically, the guidelines are geared toward best practices in developing and scoring assessments.

Steege, M.W., Davin, T., & Hathaway, M. (2001). Reliability and accuracy of a performance-based behavioral recording procedure. School Psychology Review, 30, 252-261.
Discusses an example of a well-designed, reliable, and valid approach to performance-based assessment.

Wiggins, G. (1993). Assessment, authenticity, context, and validity. Phi Delta Kappan, November, 200-214.
Provides suggestions for ensuring a well-designed assessment that answers specific questions and is linked to instruction.


SERVICE DELIVERY: INCLUSION


DESCRIPTION OF INCLUSION: Special education reform, along with the reauthorization of the Individuals with Disabilities Education Act (IDEA) of 1997, has led to the inclusion movement, which advocates for the education of students with exceptionalities in general education classes with the necessary supports and services. According to Sailor (1991), inclusion has six major components:
(1) students receive their education in the school they would attend if they had no disability;
(2) a natural proportion of students with disabilities occurs at each school site;
(3) zero reject, so that no student is excluded on the basis of the type or extent of disability;
(4) school and general education placements are age and grade appropriate, so that no self-contained special education classes exist;
(5) cooperative learning and peer instruction are the preferred instructional methods; and
(6) special education supports exist within the general education class and in other integrated environments.

Of particular debate on the issue of inclusion have been the concepts of least restrictive environment (LRE) and the Regular Education Initiative (REI). The LRE principle has been very difficult to interpret, although IDEA mandates that students be included to the maximum extent possible with students who do not have disabilities; in other words, the least restrictive environment is the most inclusive one. According to IDEA, schools must offer a continuum of services ranging from most to least inclusive, and this continuum must extend to nonacademic settings as well. The Regular Education Initiative was based on the IDEA principle of LRE in that it sought more inclusion and presumed that students with disabilities should be in general education with supports. The REI's major contribution was the suggestion that the general education program needed to undergo major restructuring and improvement in instructional design and delivery. In other words, the general education classroom needed to change significantly to accommodate the individualized needs of students with disabilities.

The collaboration movement refers to special education reform that is legally based on the procedural due process and parent and student participation principles. Collaboration is the action various stakeholders take together to further their mutual goals; it involves building on the expertise, interests, and strengths of all stakeholders in special and general education.

RESEARCH SUPPORT: Policy and Reform

Fuchs, D., & Fuchs, L. (1995). The inclusive school: Sometimes separate is better. Educational Leadership, 52(4).
Examines the call for special education reform and takes the position that eliminating special education placements in the name of full inclusion will deprive many students with disabilities of an appropriate education.

Goodlad, J., & Lovitt, T. (Eds.). (1993). Integrating general and special education. New York: Macmillan.

A book written by leaders in the field offering perspectives on issues regarding the integration of general and special education. Topics addressed include: (a) curriculum, (b) financial issues, (c) service delivery options, (d) program evaluation, (e) administrative perspectives, and (f) teacher roles.

National Association of State Boards of Education. (1992). Winners all: A call for inclusive schools. Alexandria, VA: NASBE.
The National Association of State Boards of Education reviews the history of special education and the inclusion movement and makes recommendations for best practice in the following areas: (a) state board policy, (b) teacher and administrative development, and (c) funding. Additionally, lists of specific recommendations are provided for parents, general and special education teachers, local school administrators, state legislators, the federal government, and higher education.

Shanker, A. (1995). Full inclusion is neither free nor appropriate. Educational Leadership, 52(4).
Cautions against a rush to include all children with disabilities in regular education classrooms and rejects full inclusion. Shanker argues that the comprehensive range of services and supports required is expensive and unrealistic in regular education; consequently, the student suffers and misses opportunities.

Program Recommendations/Strategies

Buysse, V., Skinner, D., & Grant, S. (2001). Toward a definition of quality inclusion: Perspectives of parents and practitioners. Journal of Early Intervention, 24, 146-161.
Compiles the results of interviews conducted with parents and practitioners to determine what they believe to be indicators of quality inclusion. Program features, resources, strategies, and outcomes associated with high-quality inclusion are highlighted.

Clasberry, G.A., & Lian, M-G. J. (1998). Strategies for an inclusive school: A handbook for teachers and program coordinators. Normal, IL: U.S. Department of Education Research Grant.
Reviews and presents current practice and innovative strategies for inclusive schools. Recommended strategies include: (a) employment of teachers' assistants, (b) instructional adaptation and modification, (c) the use of cooperative learning activities, (d) adaptation of materials, (e) team teaching, (f) itinerant teaching, (g) the availability of consultant services, (h) multi-level curriculum, (i) assistive technology, (j) peer tutoring, (k) curriculum overlapping, (l) peer physical assistance, (m) alternative curriculum, and (n) cross-age tutoring. Additional recommendations are offered for implementing these strategies.

Sandall, S., Schwartz, I., & Joseph, G. (2001). A building blocks model for effective instruction in inclusive early childhood settings. Young Exceptional Children, 4, 3-9.

Presents a model for the range of support and instruction services needed to ensure successful inclusion for young children with disabilities. The building blocks described include: (a) a high-quality early childhood program, (b) modifications and adaptations, (c) embedded learning opportunities, and (d) explicit child-directed instruction.

Inclusion Studies

McDonnell, J., Mathot-Buckner, C., Thorson, N., & Fister, S. (2001). Supporting the inclusion of students with moderate and severe disabilities in junior high school general education classes: The effects of classwide peer tutoring, multi-element curriculum, and accommodations. Education & Treatment of Children, 24, 141-160.
This study reports the effects of an intervention involving a classwide peer tutoring program, the use of a multi-element curriculum, and instructional accommodations on the academic performance of three junior high students with disabilities who were enrolled in general education classes. Results indicated improved academic responding and reduced rates of competing behaviors by the target students. Academic benefits for peers without disabilities were also indicated.

Peetsma, T., Vergeer, M., Karsten, S., & Roeleveld, J. (2001). Inclusion in education: Comparing pupils' development in special and regular education. Educational Review, 53(2), 125-135.
Researchers matched students in mainstream and special education and followed their academic progress over a four-year period. After two years, results indicated that students with disabilities achieved more academically in regular education; however, their levels of motivation were higher in special education. After four years, academic progress was higher for those students enrolled in regular education.

Siegal, B. (1997). Is the emperor wearing clothes? Social policy and the empirical support for full inclusion of children with disabilities in the preschool and early elementary grades.
Meta-analysis that reports findings in the areas of cognitive development and achievement, learning factor outcomes, and social outcomes. Studies tended to show better cognitive outcomes for the inclusion of students with milder disabilities, and a persistent interaction between the severity of the disability and the intervention was indicated. In terms of learning factor outcomes, it was found that gains made by special education students in specialized inclusion classrooms fail to transfer to other settings or over time. Regarding social outcomes, research suggested that adult-devised activities can facilitate interactions between non-disabled and full-inclusion students; however, the frequency of those interactions decreases when adult support is stopped. Likewise, nondisabled students prefer to play with other nondisabled students, to the exclusion of peers with disabilities, and full-inclusion students may become less popular as the school year progresses. The article also discusses methodological problems with the research on inclusion and concedes that there is simply not enough good research to show whether inclusion works or not.

Slavin, R.E. (1988). Ability grouping and student achievement in elementary schools: A best-evidence synthesis. Review of Educational Research, 57, 293-336.
Mini-review of 14 studies on the effects of between- and within-class ability grouping on the achievement of elementary school students. Concludes that there is no support for the assignment of students to self-contained classes according to ability.

Slavin, R.E. (1993). Ability grouping in the middle grades: Achievement effects and alternatives. Elementary School Journal, 93, 535-552.
Reviews the effects of ability grouping on the achievement of middle school students and supports alternatives to between-class ability grouping. Suggests the use of cooperative learning and within-class grouping as functional alternatives.


INSTRUCTION: DIRECT INSTRUCTION


DESCRIPTION OF DIRECT INSTRUCTION: Direct Instruction is a highly structured teaching model built on well-developed, carefully planned lessons designed around specific learning increments and clearly defined teaching tasks. Created by Siegfried Engelmann and Wesley Becker in the 1960s, Direct Instruction is based on the theory that clear instruction that eliminates misinterpretations can greatly accelerate learning and improve academic performance as well as certain affective behaviors. Its curricula in language, reading, math, and science focus on the early mastery of basic skills; however, the program also addresses general comprehension and analytic skills. Direct Instruction has been used successfully both as a schoolwide program and in separate implementations. While it has primarily been used in elementary school programs, Direct Instruction has also been used successfully with secondary and adult learners. Likewise, although Direct Instruction was developed out of a need to help disadvantaged children, it can be used with all students, regardless of their level of cognitive functioning.

The Direct Instruction Project at the University of Oregon has outlined the main features of Direct Instruction programs:

Scripted lesson plans: Scripts are field-tested and offer templates for how to teach particular skills and content.

Research-tested curriculum: Skills are taught in sequence until students have fully internalized them and are able to generalize their learning to new, untaught situations. The lesson sequences have been field-tested to determine the most effective and efficient way to lead students to mastery. Each lesson builds upon previously mastered skills. New material, once presented, is followed by guided practice and frequent checks for mastery. This sequence is based on the cognitive theory that skills move from short- to long-term memory once they have been learned to the point of mastery; students are then freed to apply their learning, attend to new content, and move on to progressively more difficult, higher-order skills.

Coaches/facilitators: In-class coaches support program implementation by monitoring the classroom, assisting the teacher, and/or taking over a part of the lesson when needed.

Rapid pace: Direct Instruction is characterized by fast-paced teacher-student interaction intended to move students to mastery as quickly as possible.

Achievement grouping: Flexible groups formed on the basis of achievement level are established with the idea that all students will progress at the fastest rate possible and no student will be left behind. Groups are constantly rearranged to ensure that students are assigned to a group that matches their rate of progress.

Frequent assessments: Assessments are built into the program to ensure students are reaching mastery, to detect students who need extra help, and to identify students who need to be reassigned to another group (a toy sketch of this assess-and-regroup cycle follows the research findings below).

RESEARCH SUPPORT: In the more than thirty years since its introduction, Direct Instruction has been one of the most empirically validated and effective curricula for all learners. Major findings of the research on Direct Instruction include the following:

Students taught with Direct Instruction curricula generally outperform children taught with other forms of instruction, both academically and in terms of self-esteem (Adams & Engelmann, 1996; Becker & Carnine, 1981; Tarver & Jung, 1995; Watkins, 1997).

Project Follow Through, a massive educational study completed in the 1970s, examined a variety of programs and educational philosophies and found that Direct Instruction gave the best results in terms of improved cognitive skills as well as self-esteem.

The U.S. Department of Education's 1987 booklet, What Works: Research About Teaching and Learning, concludes that Direct Instruction enables students to learn more.

Early gains of students taught with Direct Instruction are sustained in later grades (Gersten, Keating, & Becker, 1988; Meyer, 1984). Additionally, some studies have found higher graduation rates, lower dropout rates, and higher college acceptance rates among Direct Instruction students (Darch, Gersten, & Taylor, 1987; Meyer, Gersten, & Gutkin, 1983).
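To make the relationship between achievement grouping and frequent assessment concrete, the toy Python sketch below models one way an assess-and-regroup cycle could work. It is illustrative only; the mastery threshold, cut points, student names, and scores are invented and are not taken from published Direct Instruction materials.

# Illustrative only: a toy model of the regrouping logic described above.
# The mastery threshold, cut points, names, and scores are hypothetical.

MASTERY_THRESHOLD = 0.90   # e.g., 90% correct on an in-program mastery check

def regroup(students):
    """Sort students into 'advance', 'reteach', and 'extra_help' pools
    based on their most recent in-program assessment."""
    groups = {"advance": [], "reteach": [], "extra_help": []}
    for name, scores in students.items():
        latest = scores[-1]
        if latest >= MASTERY_THRESHOLD:
            groups["advance"].append(name)       # ready for the next lesson sequence
        elif latest >= 0.70:
            groups["reteach"].append(name)       # repeat guided practice on this skill
        else:
            groups["extra_help"].append(name)    # flag for additional support
    return groups

# Hypothetical mastery-check results after a lesson sequence.
results = {
    "student_a": [0.85, 0.95],
    "student_b": [0.90, 0.75],
    "student_c": [0.60, 0.65],
}
print(regroup(results))
# {'advance': ['student_a'], 'reteach': ['student_b'], 'extra_help': ['student_c']}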

Despite overwhelming evidence for its effectiveness, Direct Instruction has not been widely accepted or utilized in the American education system. Bessellieu, Kozloff, and Rice (2001) suggest that reasons for this may include decision-making based on agendas and philosophies rather than on experimental data, as well as widely held misperceptions of the Direct Instruction program. Some of these misperceptions include: (a) that it is only for use with special-needs children, (b) that it is simply "drill and kill," (c) that it thwarts teacher creativity, and (d) that it focuses only on rote skills. Thus, factors critical to the adoption and successful implementation of a Direct Instruction program include accurate orientation to and training in the program, as well as administrative support.

REFERENCES:

Adams, G.L., & Engelmann, S. (1996). Research on Direct Instruction: 25 years beyond DISTAR. Seattle, WA: Educational Achievement Systems.

Becker, W., & Carnine, D.W. (1981). Direct instruction: A behavior theory model for comprehensive educational intervention with the minority. In S.W. Bijou & R. Ruiz (Eds.), Behavior modification: Contributions to education (pp. 145-210). Hillsdale, NJ: Lawrence Erlbaum Associates.

Bessellieu, F.B., Kozloff, M.A., & Rice, J.S. (2001). Teachers' perceptions of Direct Instruction teaching. Available at: http://www.uncwil.edu/people/kozloffm/didevelapp.html.

Darch, C., Gersten, R., & Taylor, R. (1987). Evaluation of Williamsburg County Direct Instruction program: Factors leading to success in rural elementary programs. Research in Rural Education, 4, 111-118.

Gersten, R., & Keating, T. (1987). Long-term benefits from Direct Instruction. Educational Leadership, 44, 28-29.

Gersten, R., Keating, T., & Becker, W.C. (1988). Continued impact of the Direct Instruction model: Longitudinal studies of Follow Through students. Education and Treatment of Children, 11, 318-327.

Meyer, L. (1984). Long-term academic effects of the Direct Instruction Project Follow Through. Elementary School Journal, 84, 380-394.

Meyer, L., Gersten, R., & Gutkin, J. (1983). Direct instruction: A Project Follow Through success story in an inner-city school. Elementary School Journal, 84, 241-252.

Tarver, S.C. (1998). Myths and truths about Direct Instruction. Effective School Practices, 14, 49-57.

Tarver, S.C., & Jung, J.S. (1995). A comparison of mathematics achievement and mathematics attitudes of first and second graders instructed with either a discovery-learning mathematics curriculum or a Direct Instruction curriculum. Effective School Practices, 14, 49-57.

Watkins, C. (1997). Project Follow Through: A case study of contingencies influencing instructional practices of the educational establishment. Cambridge, MA: Cambridge Center for Behavioral Studies.

ADDITIONAL RESOURCES:

Bessellieu, F.B., Kozloff, M., & Nunnally, M. (no date). One-school pilot implementation as a catalyst for district-wide adoption of direct instruction: Process and outcomes. Available at: http://www.uncwil.edu/people/kozloffm/diimplement.html

This paper describes a pilot project implementing Direct Instruction at an elementary school. The authors describe how school-wide adoption of Direct Instruction expanded into district-wide adoption.

http://www.nichd.nih.gov/publications/nrp/intro.htm
This is the website of the National Institute of Child Health and Human Development. The site offers the report of the National Reading Panel: Teaching Children to Read.

http://www.nifdi.org/defaultcontents.html
This is the website of the National Institute for Direct Instruction (NIFDI). NIFDI is a not-for-profit organization dedicated to providing school districts with a solid training program and approach for implementing Direct Instruction in districts, schools, and classrooms.
