
RUNNING HEAD: An Action Research Study

Research Proposal: An Action Research Study Comparing Student Attitudes Toward Online and In-class Peer Assessment in Higher Education

Rebecca Jacobson
Selkirk College

ETEC 500: Research Methodology in Education
April 18, 2010


INTRODUCTION

I am interested in conducting an action research project to assess students' attitudes about the efficacy of online peer assessment (PA) in comparison to the more common method of conducting PA in the classroom. I teach both academic and technical writing to first- and second-year undergraduates at Selkirk College. Teachers of writing face unique challenges and often find themselves with a dilemma based on two conflicting factors:

1. The best way to learn to write is to write; therefore, the more writing that can be assigned, reviewed, and returned to students, the better they will learn.

2. While essential to success, careful evaluation of writing is inordinately time-consuming (indeed, some institutions recognize this fact and compensate English faculty accordingly).

This conflict has led me to use both in-class and online PA in a variety of ways in an effort to improve student learning while dealing with time constraints and heavy teaching loads. To date, I have not allowed students to assign each other grades for their writing; the sole purpose of these exercises has been to improve the quality of the writing I receive while helping students learn by exposure to, and analysis of, others' writing. I have received mostly positive, if anecdotal, feedback from students on both of these processes. By their nature, these two forms of PA have five differences that are bound to affect student attitudes and perceptions; these are outlined in the chart below:


Differences Between Online and In-Class PA

Online PA:
1. Anonymous
2. Students assess more than one classmate
3. No discussion about the assessed work
4. Assessment is done in the student's own time and can be returned to more than once
5. Students must learn to use the technology

In-Class PA:
1. Non-anonymous
2. Students assess one classmate
3. Students discuss the assessed work after assessment has taken place
4. Assessment is done under time constraint, in one sitting
5. No technology required

These differences give rise to the research questions under consideration: (1) What effect do these differences have on student attitudes and perceptions of PA? (2) Which method is more productive in terms of the quality of writing produced? There are other factors that may affect the outcome. For example, some students will naturally prefer to work alone while others will prefer the structure of the classroom, and some will be skeptical of, or resistant to, the idea of assessing their peers. These types of variables, however, represent the diverse make-up of any classroom and cannot be controlled for.

REVIEW OF RELATED LITERATURE

The significance of this proposed project is founded on the belief that PA in the classroom can be a valid, reliable, and valuable form of evaluation, and a review of the literature supports this belief.


Only four studies were found that address the efficacy of online PA, and none were found that compare the two forms. This literature review, then, will first look at two studies (of many) that illustrate the validity, reliability, and value of classroom-based PA. Then, four studies will be reviewed that offer insight into online PA. Taken together, these two reviews will show the value of this proposed project not only as a comparison, but as a much-needed investigation into the value of online PA itself.

Peer Assessment in the Classroom

Falchikov and Goldfinch (2000) conducted a meta-analysis of 48 quantitative PA studies that compare teacher marks to peer marks. Their primary goal was to investigate the validity of peer marking. The researchers set out clearly defined parameters for the studies to be included; for example, they accepted only high-quality studies that report enough information to enable replication. They also looked closely at issues such as design quality, number of peers, and number of instructors. This comprehensive analysis is well designed, offers clear details of the methods of analysis, and recognizes its own limitations. The authors conclude that peer and faculty marks are in close agreement as long as the PA activity is carefully designed; in addition, they note that PA has many formative benefits in terms of improving student learning (p. 318).

Cowan and Creme (2005) set out to investigate students' responses to peer- and self-assessment of writing assignments and argue that these types of undertakings should be approached as peer engagement as opposed to peer assessment.


The former, they conclude, can facilitate critical thinking and communication skills, while the latter is often neither well organized nor appropriately used. The study addresses the issue of overburdened teaching faculty while also attempting to encourage students to take greater responsibility for their own learning. The participants were 320 students from ten different disciplines taking a ten-week course at the University of Sussex. They received a one-hour lecture along with two-hour seminars led by 13 different tutors, who actually administered the peer review. The authors concluded that the undertaking was worthwhile but needed to be approached differently in order to facilitate collegial, cooperative learning (as opposed to assessment): hence the term peer engagement.

These two examples show that PA can be used productively, both in terms of student learning and in terms of the reliability of student marks, as long as the PA activities are carefully constructed and approached in an appropriate manner.

Online Peer Assessment

I found only four studies that seek to analyze online PA, and each has a different research question and purpose. Nonetheless, they do illustrate that online PA is worth further investigation, as each found the process to be useful in one way or another.

Claiming that PA in general is a valuable learning tool and that online PA can improve "the freedom of time and space for learners" (p. 27), Wen and Tsai (2006) set out to evaluate students' attitudes about both forms of PA. This study administered a 34-item Likert-scale questionnaire to 280 students from two major Taiwanese universities.


The study concluded that students in general held a positive attitude toward PA but that they viewed online PA "as a technical tool rather than a learning aid" (p. 41). This study consists mainly of a presentation of quantitative results and fails to offer the information necessary for the reader to analyze validity, reliability, or ethical issues.

Noting that the validity and reliability of peer-generated grades in the post-secondary classroom are a major concern, Cho et al. (2006) conclude that PA of writing is nonetheless a worthwhile task, given the proper approach. Using the internet-based PA application SWoRD (scaffolded writing and rewriting in the discipline), the researchers followed 708 students from 16 different courses in five universities over a three-year period to test the reliability and validity of peer-generated grades. While SWoRD facilitated the PA process itself, the students were also given clear instructions on how to use the application, worked with consistent and clear evaluation rubrics, and were rewarded with grades for their participation. The study concludes that, with these criteria in place, grades generated by students are both reliable and valid.

Davies (2000) used a computerized assessment with plagiarism (CAP) system to facilitate PA and to address the validity of several common assessment preconceptions. Using second-year undergraduates from the School of Computing at the University of Glamorgan, the study employed a questionnaire to evaluate student attitudes toward the system, which required the students to submit a report and assess the reports of their classmates. The researcher concluded that the use of a CAP system would be of great value in distance education.


McConnell (2002) conducted a study in which he evaluated student attitudes toward, and experiences with, peer- and self-assessment in an online environment and concluded that it is "a value-laden approach to learning and teaching which seeks to involve students in decision making about the assessment process and how to make judgments on their own and each other's learning" (p. 89). The participants were students enrolled in an MEd program delivered entirely through WebCT. Data were collected using interviews, examination of online transcripts, and a questionnaire. The researcher does not address the validity of using WebCT for the process; indeed, it is not one of the research questions. Therefore, for the purposes of this review, little was discovered about how the learning management system affected the PA process; however, the fact that the researcher considered the experiment a success points clearly to the need for studies that do focus on the process of online PA.

METHODOLOGY

The sample for this study will consist of students from the School of Renewable Resources taking my first-year technical writing and communications class (TWC 150). I chose this class because it spans a full academic year.

Semester One

This will be a mixed-mode class; that is, the students will have a two-hour classroom lecture early in the week (ideally Mondays) and one hour of asynchronous, online contact using the learning management system Moodle. The assignments submitted for PA in this class will be two letters, a memo, a proposal, an informal report, and a resume. These will be due at two-week intervals beginning in week three of a 14-week semester. Actual grades for written work will be assigned by me; however, 15% of the grade will be earned from peers.


At the end of each assessment period, the students will assess the quality of the feedback they received and assign a value out of fifteen using a rubric. The accumulated grades will be averaged, and the end result will be a grade out of 15 for each student, based solely on the quality of their feedback and assigned by peers. This two-week cycle is outlined below.

Two-Week Peer Assessment Cycle

Week 1, Monday: In-class lecture; receive writing assignment.
Week 1, Tuesday to Friday: Work on writing assignment; submit for peer review in Moodle by Friday, midnight.
Week 2, Monday: In-class lecture; begin peer reviews.
Week 2, Tuesday to Friday: Continue peer reviews and complete by Friday, midnight.
Week 2, Sunday: Submit final draft for instructor evaluation and assess your assessors (submit their grades).
Week 3, Monday: In-class lecture beginning with a 15-minute questionnaire concerning the peer assessment process; the next writing task is assigned and the cycle begins again.
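To make the grade calculation described above concrete, the following is a minimal sketch (in Python, which is not part of the proposal itself) of how each student's peer-assigned feedback grade out of 15 could be computed; the function name and data values are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: average the peer-assigned feedback scores (each out of 15)
# that one student received across the semester's assessment periods.

def feedback_grade(scores_out_of_15: list[float]) -> float:
    """Return the averaged peer-assigned grade out of 15 for one student."""
    if not scores_out_of_15:
        return 0.0  # no peer ratings received; how to handle this case is a design decision
    return sum(scores_out_of_15) / len(scores_out_of_15)

# Example: ratings received after each of the six assignments
print(round(feedback_grade([12, 14, 11, 13, 15, 12]), 1))  # -> 12.8
```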

Peer Assessment in Moodle


The actual process of PA is facilitated through the workshop function in Moodle, which randomly and anonymously assigns each student four papers to assess. The results are sent back to the writer. The assessors must use the evaluation rubric provided to offer their opinion on the work. The rubrics will vary depending on the assignment and the concepts being learned at the time.
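To illustrate the allocation logic described above (this is not Moodle's actual implementation, which is handled internally by the workshop module), the following Python sketch shows one way four anonymous reviewers could be assigned per submission so that no student reviews their own work; all names and parameters are hypothetical.

```python
import random

def assign_reviews(students: list[str], per_student: int = 4) -> dict[str, list[str]]:
    """Assign each student `per_student` classmates' papers to review.

    Uses a shuffled circular assignment so that no one reviews their own paper
    and every paper receives the same number of reviews.
    """
    order = students[:]
    random.shuffle(order)
    n = len(order)
    return {
        order[i]: [order[(i + k) % n] for k in range(1, per_student + 1)]
        for i in range(n)
    }

# Example with a small, invented class list
print(assign_reviews(["Ana", "Ben", "Cal", "Dee", "Eli", "Fay"], per_student=4))
```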


Semester Two

This term the students will have three on-campus contact hours with me; the first two will be lecture hours conducted the same way as in semester one. In semester two, the students will be given fewer assignments, each longer and more complex than those in semester one: an informal report, an oral presentation, and a final, formal report. PA will take place in class during the third contact hour. Students will be randomly given a paper written by a peer, given 30 minutes to review the paper using a rubric, and then given time to discuss their findings with the author. This process will take place for the longer assignments as well as some of the shorter weekly writing assignments. In effect, the students will be exposed to a classmate's writing (and assessment) every second week. At the end of each review, the students will be given a brief questionnaire concerning the PA process. These data will be compared to the data collected in semester one.

DATA COLLECTION AND ANALYSIS

The questionnaires for each semester are very similar, with slight variations concerning the time given for assessment and anonymity. These will yield some quantitative data, which will offer the opportunity for numerical comparisons of responses. The qualitative questions will be carefully considered and should yield an overall picture of the students' attitudes at the time of the PA activity.



At the end of the year, the students will be given another questionnaire to discover their opinions of the two processes in comparison to one another.
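As a rough illustration of the kind of numerical comparison the closed-ended questionnaire items could support, here is a brief Python sketch that compares mean Likert ratings for matching items across the two semesters; the item labels and response values are invented for illustration only and do not represent collected data.

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for the same questionnaire items administered in each semester.
online_pa = {"feedback was useful": [4, 5, 3, 4, 4], "process felt fair": [3, 4, 4, 3, 5]}
in_class_pa = {"feedback was useful": [3, 4, 4, 3, 3], "process felt fair": [4, 4, 5, 4, 4]}

for item in online_pa:
    diff = mean(online_pa[item]) - mean(in_class_pa[item])
    print(f"{item}: online {mean(online_pa[item]):.2f}, "
          f"in-class {mean(in_class_pa[item]):.2f}, difference {diff:+.2f}")
```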



References

Cho, Kwangsu, Schunn, Christian D., & Wilson, Roy W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891-901. doi:10.1037/0022-0663.98.4.891

Cowan, Jane K., & Creme, Phyllis. (2005). Peer assessment or peer engagement? Students as readers of their own work. Learning and Teaching in the Social Sciences, 2(2), 99-119. doi:10.1386/ltss.2.2.99/1

Davies, Phil. (2000). Computerized peer assessment. Innovations in Education and Training International, 37(4), 346-355.

Falchikov, Nancy, & Goldfinch, Judy. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322. doi:10.3102/00346543070003287

McConnell, David. (2002). The experience of collaborative assessment in e-learning. Studies in Continuing Education, 24(1), 73-92. doi:10.1080/01580370220130459

Wen, Meichun Lydia, & Tsai, Chin-Chung. (2006). University students' perceptions of and attitudes toward (online) peer assessment. Higher Education, 51, 27-44. doi:10.1007/s10734-004-6375-8
