
GATHERING, ANALYZING, AND IMPLEMENTING STUDENT FEEDBACK TO ONLINE COURSES: IS THE QUALITY MATTERS RUBRIC THE ANSWER?
Lucian Dinu1, Philip J. Auter2, Phillip Arceneaux3

Abstract

This paper proposes a new method of collecting and utilizing quality feedback from students regarding the learning experience of the electronic classroom. The study begins by reviewing how existing methods for data gathering, also known as student evaluations of instruction (SEI), have been well established and tested in the traditional class setting but have not been adequately adapted to the online class setting. The Quality Matters (QM) rubric is suggested as a supplementary tool in the information collection process of online classes. Data were collected by survey from both students and professors of the same institution. The results note both strengths and weaknesses of each approach and conclude that the most efficient system would be to use the QM rubric as a supplement to the SEI.
Keywords:

Online Courses, Distance Learning, Student Feedback, Evaluations of Instruction


1. Lucian Dinu, Ph.D. is an endowed Associate Professor of Communication at the University of Louisiana at Lafayette in Lafayette | DINU@LOUISIANA.EDU | +1.337.366.0266
2. Philip J. Auter, Ph.D. is an endowed Professor of Communication at the University of Louisiana at Lafayette in Lafayette | AUTER@LOUISIANA.EDU | +1.337.482.6112
3. Phillip Arceneaux is a graduate student in the Communication program at the University of Louisiana at Lafayette | PHILLIP@LOUISIANA.EDU | +1.337.482.9008


Student feedback is an essential part of teaching (Huxham et al., 2008). It allows students to select courses and instructors, it influences tenure, promotion, and merit raises for instructors, and, perhaps most importantly, it allows instructors to improve the content and delivery of courses.

And yet, obtaining, analyzing, and implementing feedback in online classes presents several challenges. First, current feedback forms don't focus on the specifics of online instruction. Second, the data gathered using current forms may not provide sufficient information to online instructors. And third, it is not clear how and whether online instructors and distance learning program administrators are using student feedback to improve online course delivery and program development.

This paper focuses on the three stages of improving online course instruction: the challenges of obtaining student feedback; utilizing feedback to determine better ways of course redesign; and closing the loop between assessment and course improvement. QM-certified instructors and course designers will be interviewed in order to find out how they typically obtain, analyze, and utilize student feedback in their course development.

Additionally, the QM rubric will be modified to be utilized as a direct assessment tool in a number of online classes at the University of Louisiana at Lafayette. Responses to the QM survey will be compared to traditional student evaluation of instruction (SEI) scores in order to see if and how they correlate. Additionally, faculty and administrators will be interviewed to see how they feel about the results and the best ways to take advantage of them.


Literature Review

The process of formal education was first established at the advent of complex writing systems, namely Egyptian hieroglyphics (Fischer, 2004). Modern technology has offered great advances to what is offered in the 21st century's educational system. Fewer classrooms have chalkboards, while more and more classrooms have lecterns equipped with computers, projectors, visual scanners, and audio systems. One of the greatest applications of modern technology is that some classes do not even have to have a classroom. Distance learning offered through online courses is becoming a popular option among college students. Whereas traditional courses have been well established through trial and evaluation over time, online courses are only a few decades old and evaluative tools for these classes have lacked intensive academic criticism. In this study an alternative option will be put forth for the effective evaluation of teaching in online courses.

Traditional courses consist of the face-to-face transference of knowledge and instruction from teacher to student. Over the years, the means of assessing the effectiveness of the teacher have varied, but one has remained constant over time and is typically weighted the most heavily: the data gathered from students' evaluations of the course (Erdoğan et al., 2008; Selçuk, 2011). These data have been the basis for administrative evaluations of teaching performance as a determinant in the awarding of tenure, title promotions, and pay increases (Loveland, 2007; Palmer, 2011). Previous research has shown that numerous variables affect students' overall satisfaction with a course. Type of class is very important, as it must reflect the intellectual ability of the students; types of courses include lecture, seminar, lab, and independent study (DeBerg and Wilson, 1990; Langbein, 1994). Class size also greatly affects student evaluation responses. Lectures can range up to multiple hundreds of people, while seminars can be as small as five people, and independent studies typically involve a one-on-one relationship between teacher and student. This wide range in class size leads to different social environments that greatly vary a student's interactive role (Holtfreter, 1991). The time of day a class is offered also significantly skews student perceptions. Average college classes can range anywhere from early morning courses, to afternoon courses, to late night courses (DeBerg and Wilson, 1990). Lastly, the day of the week a class is offered tends to vary student perceptions, as the day typically determines the frequency with which a student must attend that class per week (Husbands and Frosh, 1993).
The most widely agreed upon method of evaluating student satisfaction with a course is data collection by means of a questionnaire. Student satisfaction is defined as "the degree to which students feel satisfied with workload of course, level of course, teaching activities, and instructors' teaching effectiveness" (Richardson, 2005; Selçuk, 2011). Questionnaires designed specifically to gather satisfaction data about professors are known as Student Evaluations of Teaching (Loveland and Loveland, 2003; Selçuk, 2011; Palmer, 2011) or Student Evaluations of Instruction (SEI). Approximately 2,000 examinations of SEI methods were conducted over the 20th century, all supporting relatively high marks of both reliability and validity (Wilson, 1998). Marsh (1982) developed one of the most common SEIs in academia, the Students' Evaluations of Educational Quality (SEEQ), a thirty-five-item questionnaire with each item measured on a five-point Likert scale. Through extensive testing, the SEEQ has been shown to produce minimal variance across teaching demographics at all levels of formal education (Marsh and Hocevar, 1991). Questionnaires are administered in the class setting on the last day of the semester in which the group meets, or as near to the conclusion of the semester as the course schedule allows. The purpose of administering the questionnaire at that time is

so that the feedback can evaluate the entirety of the course rather than just a portion. The ability to capture data in class rather than on a student's own time boosts response rates, which in turn reduces sampling error as much as possible (Richardson, 2005). The questionnaires are kept relatively short, keeping the data collection process brief. Unfortunately, the sheer repetitive nature of filling out near-identical questionnaires for multiple classes each semester has become so commonplace that students tend not to seriously address the questions, potentially leading to faulty data (Abrami et al., 1996). Another aspect that affects the quality of data is the apparent application of the information. When students see institutional changes representative of their critiques, they become more involved in actively working to better their classes. When administrations fail to act on the evaluations provided through the questionnaires, the student body tends to become disinterested and the quality of the data gathered suffers (Spencer and Schmelkin, 2002). As with most concerns for gathering effective data, evaluative questionnaires are provided in a multitude of ways to account for students with both physical and mental disabilities, in compliance with such legislation as the Americans with Disabilities Act and the United Kingdom's Special Educational Needs and Disability Act (Richardson, 2005).
The advent of applying modern technology to the educational process has not only reinforced the traditional class setting, but has also given birth to distance learning. The first online course was offered in 1981 by the Western Behavioral Sciences Institute in La Jolla, California (Feenberg, 1999). By 1997, 44% of academic institutions were offering distance learning courses (NCES, 1999). As of 2006, over three million American students were taking at least one online course (Allen and Seaman, 2006). More recent results by the U.S. Department of Education's National Center for Education Statistics reported that in Fall 2012, over twenty-one million Americans were enrolled in at least one online course (NCES, 2014), with over 75% of American universities offering online programs (Parker et al., 2011; Berk, 2013). Whereas the role and responsibilities of instructors have been well established in the traditional class setting, the roles and responsibilities for teachers in the electronic class setting are considerably different and far less established. As teachers work more one-on-one with students through online classes, their role consists of being more of an academic partner than a traditional institutional leader (Beldarrain, 2006). Through this role as an academic partner, teachers need to interact more directly with students and assist them in the consumption and, most importantly, the assimilation of knowledge (Kearsley and Shneiderman, 1998; Rothman et al., 2011). Because of the extensive amount of technology used in online courses, when a student contacts the teacher seeking a resolution for technical issues, the teacher must respond more promptly than in a traditional setting in order to facilitate ease of access to the material and assignments (Bangert, 2004). Unfortunately, very few teachers receive official training when it comes to teaching online courses; institutionally, they are offered technological support and little more than that. One of the current methods for governing the online educational process is known as the Quality Matters Program (QM). QM provides a rubric designed to increase a teacher's awareness of and sensitivity to learning objectives, assessment and measurement, instructional materials, course activities and learner interaction, and course technology (Quality Matters, 2011). With an increasing number of online students, teachers must be trained to exhibit skills and behaviors appropriate to the online class setting rather than just the traditional class setting (Loveland, 2007).
Whereas by the dawn of the 21st century approximately 2,000 studies had been conducted on the effectiveness of the SEI, by 2007 Loveland and Loveland's 2003 article, titled



"Student Evaluations Of Online Classes Versus On-Campus Classes," was the only published academic article that directly analyzed and questioned the effectiveness of SETs for evaluating online courses (Loveland, 2007). In that article they were the first to hold that traditional evaluation tools, such as an SEI, were not effectively designed to collect pertinent data in online courses (Loveland and Loveland, 2003). The majority of evaluations used in online courses to date have either been written by the course instructors themselves or have not been administered by the academic institution at all (Compora, 2003; Rothman et al., 2011). Overall, there is simply a general lack of research and testing on functional evaluative tools for the online class setting. The QM Program offers teachers a structured rubric for designing effective syllabi and content for their online courses. This article investigates whether the QM rubric would also serve as an optimal post-course evaluative tool. The specific research questions that drive this study are:

RQ1: Are the current SEIs adequate for student feedback in online classes?

RQ2: How is student feedback to online instruction gathered?

RQ3: How is student feedback to online instruction analyzed?

RQ4: How do faculty close the student feedback loop in online classes?

RQ5: Can the QM rubric be adapted into a student feedback form for online classes?


Methods and Procedures


Two surveys were used in this study. Both surveys were conducted at a large public university in the southern United States.

Student survey. The first survey was longitudinal and collected data from students enrolled in online classes during the Spring, Summer, and Fall 2013 semesters. The purpose of this survey was to assess the utility of the Quality Matters (QM) rubric as an instrument for collecting instruction feedback data. The QM rubric is typically used to assess classes, not as a survey tool; utilization in this way is a new approach, and thus its reliability in these circumstances is untested.
All students enrolled in six communication classes were asked to complete both a
traditional Student Evaluation of Instruction
(SEI) form and a form based on the QM rubric. Three of the classes were freshman-level,
two were junior-level, and one was senior-level. Both the traditional SEI and the QM-based
evaluations were used as regular student feedback for improving the course. Completing
the forms was not mandatory and the students
were not offered any incentives to participate
in the study. All responses were anonymous.
More specifically, the QM-based form used each standard in the QM rubric as a dichotomous (yes/no) variable. The students were asked to respond to each statement based on their experience in the course. For example, the first statement, corresponding to QM standard 1.1, was "Instructions make clear how to get started and where to find various course components." Respondents had to indicate their agreement or disagreement with the statement by checking "yes" or "no," respectively. Eight open-ended questions, one for each major QM standard, asked respondents if they had any suggestions for improvements.
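
To make the structure of such dichotomous data concrete, the minimal sketch below (in Python, with invented item labels and responses, not the authors' actual analysis script) shows one way the percentage of "yes" answers per QM-based item could be tallied; only the first item's wording comes from the article, the others are hypothetical.

```python
# Hypothetical QM-based yes/no items; only the first wording appears in the article.
items = [
    "1.1 Instructions make clear how to get started",
    "Grading policy is stated clearly (hypothetical item)",
    "Course navigation is logical (hypothetical item)",
]

# Each row is one anonymous student; 1 = yes, 0 = no, column order matches `items`.
responses = [
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
]

# Percentage of "yes" answers per item, a simple way to flag standards
# that students feel the course does not meet.
for col, item in enumerate(items):
    yes = sum(row[col] for row in responses)
    print(f"{item}: {yes / len(responses):.0%} yes")
```
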
Faculty survey. The second survey was cross-sectional and collected data from QM-certified faculty at the same southern US public university. Its purpose was to assess whether faculty see a QM-based feedback form as a useful and viable tool for student evaluation of instruction. In order to teach online, this university requires faculty to be Quality Matters (QM) certified. About 150 professors and instructors were certified in the Fall 2014 semester, when this part of the research was conducted. A total of N = 48 faculty completed the questionnaire during the two weeks allotted for data collection, for a response rate of 32%.

Specifically, at the beginning of the Fall 2014 semester an online questionnaire was created and posted using the tools available in Google Forms. A mailing list of all professors and instructors certified to teach online was also created; the mailing list used contact information publicly available on the website of the University's Distance Learning office. All QM-certified faculty, about 150 people at the time of the survey, were emailed and asked to participate in the study. As an incentive, a $20 gift card was raffled after the completion of data collection. The survey took about 15 minutes to complete.
Once again, the QM standards were used as items in the questionnaire. This time, respondents had to rate the importance of having student feedback on each of the QM standards. A five-point Likert-type scale, from 1 (not at all important) to 5 (extremely important), was used for each item. In addition, faculty respondents were also asked several questions about traditional SEIs. For example, five-point Likert scales were used to assess faculty's level of agreement with the importance, utility, value, accuracy, pertinence, and sufficiency of student feedback obtained with traditional SEIs. Yet another set of Likert-type scales asked faculty how likely they were to use student feedback to improve their courses. Finally, a few open-ended questions asked faculty respondents whether, and how, they gather student feedback in their online courses other than through the traditional SEIs, and what they perceived as the best and the poorest features of traditional SEIs.
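
As an illustration of how such Likert-type ratings can be reduced to the per-item means and standard deviations reported later in Tables 1 and 2, here is a minimal Python sketch using invented ratings rather than the study's data; the item labels echo the QM standards but the numbers are assumptions.

```python
from statistics import mean, stdev

# Hypothetical 1-5 importance ratings keyed by QM-based item (not the study's data).
ratings = {
    "Clarity of instructions": [5, 4, 5, 3, 5, 4],
    "Clarity of grading policy": [4, 5, 4, 5, 4, 5],
    "Institutional policies": [3, 2, 4, 3, 5, 3],
}

# Per-item mean and sample standard deviation, the summary statistics
# a table like Table 1 or Table 2 would report.
for item, scores in ratings.items():
    print(f"{item}: m = {mean(scores):.2f}, st. dev. = {stdev(scores):.2f}")
```
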


Results

The first four research questions were answered using data from the faculty survey. One third (n = 16) of the faculty who completed the survey were at the instructor rank, while 18.8% (n = 9) were assistant professors, 29.2% (n = 14) were associate professors, and 14.6% (n = 7) were full professors. Two respondents (4.2%) did not reveal their rank. A majority of the respondents had at least some experience with online teaching at the time of the research: 31.3% of the respondents had been teaching online for one to two years and 37.5% had been teaching online for three to five years. Moreover, 25% of the respondents indicated that they normally teach two courses during an average academic year and 20.6% indicated they normally teach one course in an academic year (see Figure 1). Nevertheless, some respondents (4.2%) indicated they had never taught an online course before.

Figure 1. Respondent experience with online courses

RQ1 asked "Are the current SEIs adequate for student feedback in online classes?" In general, respondents indicated that the SEIs currently in use have both some advantages and some disadvantages. Large proportions of respondents agreed (38.3%) or strongly agreed (40.4%) that current SEIs are important; similarly, many agreed (45.7%) or strongly agreed (34.8%) that SEIs are valuable to them (see Figure 2).

Figure 2. Perceived importance and value of current SEIs.

On the other hand, respondents identified some specific disadvantages of current SEIs for evaluating online instruction. About one third (31.1%) of the respondents disagreed and 15.6% strongly disagreed that current SEIs provide sufficient student feedback in online classes (see Figure 3).

Figure 3. Perceived sufficiency of traditional SEIs

In addition, answers to open-ended questions revealed that respondents are worried about the adequacy of traditional SEIs in the evaluation of online courses. For example, respondents wrote that:


"Some of the questions are not relevant to online courses, such as 'How many classes did you miss?' and 'The physical environment was conducive to learning.'"

And: "The choices for feedback are not particularly useful. The questions are generic, one-size-fits-all-courses-and-all-faculty, so students' answers give little useful feedback about course improvement."

And further: "In some cases, the questions don't apply to the format of instruction."

RQ2 asked "How is student feedback to online instruction gathered?"

About half (47.92%) of the respondents indicated that they collect their own student feedback data, in addition to that collected through the institutional SEIs. Some of those who do so wrote that they use online class tools, such as forum discussions, or questionnaires they develop themselves. For example, one respondent wrote: "I ask for their feedback in Moodle discussion forums, via e-mail, and ask them to submit anonymous feedback manually under my office door if they're unwilling to provide feedback online." Another respondent answered that he/she uses a discussion forum or a reflection assignment to ask students to share their input on a few questions about the course design and delivery.

Several respondents indicated that they take the class "temperature" by distributing a survey once or more times during the semester. Finally, one respondent took this opportunity to express his/her discontent with the current SEIs:

"I offer students points for writing and submitting a feedback paper at the end of the semester so I can learn about their experiences, since the SEIs tend to get such low response rates and are not a representative sample (mostly students who are upset with the strictness of the course policies). Also, the question about teacher availability is worded incorrectly, because it should be something like 'The instructor was available for scheduled appointments during office hours,' because I have students that expect me to be available to them on Saturdays and they are marking that statement negatively for me rather than understanding that online teachers are available in person during office hours or online during work hours (as stated to students in my syllabus)."
RQ3 asked "How is student feedback to online instruction analyzed?" and RQ4 asked "How do faculty close the student feedback loop in online classes?"

Large percentages of respondents reported that they use the current SEIs in a variety of ways. For example, 68.9% of the respondents are very likely and 15.6% are likely to browse the data to see their scores; 42.2% are very likely and 22.2% are likely to use the numerical data to improve their courses. Even higher percentages reported they are likely (20%) or very likely (64.4%) to use students' comments to improve their courses (see Figure 4).

Figure 4. Uses of data from traditional SEIs.

To complete the picture, respondents also described how they use their own feedback mechanisms, other than the school SEI, to improve their courses. Thus, some respondents wrote that they make changes in assignment design and in the design and number of instructional resources. Some respondents reported that they use feedback they collect independently of the school SEI to make changes to their courses during the semester, as well as between semesters. For example, one respondent wrote: "During the semester I make adjustments to the delivery of the course, if enough students have issues expressed on the temperature checks. Then, between semesters I read the exit questionnaires and make changes to the course in areas where many students had trouble." And another one wrote: "I try to weigh student comments to see where I can improve. For instance, if students indicate that they find particular directions unclear, I try to rewrite them."
Last, but not least, RQ5 asked: "Can the QM rubric be adapted into a student feedback form for online classes?"

Data from both the student survey and the faculty survey were used to answer this research question. Out of the 179 students enrolled in the classes used for data collection, 42 students completed the traditional SEIs and 19 students completed both the traditional and the QM-based feedback forms. While the rates of completion are small, they are reflective of the rates typical for evaluations of instruction conducted online. Moreover, even though the resulting sample was too small for meaningful quantitative data, several qualitative observations can be made.

First, the QM rubric responses provided data on instruction and the course not available from traditional SEIs. For illustration, the current, traditional SEI has only one question pertaining to the university's online learning management system, Moodle. That question simply asks what instructional resources were used in the course, with Moodle being one of the answer options. One generic question such as this may not be sufficient for a course conducted exclusively online, or even for a hybrid course. In comparison, the entire QM-based feedback form is focused on the specifics of using the means available in Moodle for sound instruction.

A second observation is that a QM-based feedback form does not necessarily eliminate the necessity of traditional SEIs, but complements them well. Traditional SEIs have been developed, tested, and refined for years, and provide information that is not gathered through the QM-based feedback, such as the students' perceptions of the workload in each class, students' perceptions of the quality of the class compared with other classes, and so on.

Finally, a third observation is that a QM-based feedback form is excellent for traditional courses as well, especially those with a learning management system (like Moodle) on the side. That happens because QM standards are rooted in sound pedagogical principles, which are independent of the medium used for delivering the course content, online or traditional.


How important is feedback from students on the following points, in your opinion?

                                        Mean    Std. Deviation
Institutional policies                  3.28    1.55
Prerequisite knowledge in the field     3.47*   1.12
Etiquette expectations                  3.49*   1.17

Table 1. Lowest levels of agreement on questions about using a QM-based rubric as SEI
* mean statistically significantly different from 3.00 (neutral value) at p < .05

RQ5 was further explored with data from the faculty survey. To be more precise, the faculty survey asked respondents to rate the importance of having student feedback on each standard in the QM rubric. Five-point Likert-type scales, from 1 (unimportant) through 3 (neutral) to 5 (extremely important), were used for these questions. Overall, results indicate that faculty rated all items above the neutral point. In fact, only one of the QM standards was not statistically different from neutral: on average, faculty rated student feedback on the presence of institutional policies in online courses at m = 3.28 (st. dev. = 1.55), which is not significantly different from 3 (t = 1.66; df = 46; p = n.s.). The items rated next lowest in importance were prerequisite knowledge in the field (m = 3.47; st. dev. = 1.12) and student support services (m = 3.49; st. dev. = 1.17). For each of these two items a one-sample t-test indicated that the respective mean was statistically significantly higher than 3 (neutral). Specifically, for the item prerequisite knowledge in the field the one-sample t-test resulted in t = 2.88, df = 46, p < .05, and for student support services the one-sample t-test resulted in t = 2.87, df = 46, p < .05. These results are synthesized in Table 1.
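
For readers who want to reproduce this kind of check, the short sketch below shows, in Python with scipy.stats and illustrative numbers rather than the study's raw data, how a one-sample t-test against the neutral scale value of 3 can be run on a set of Likert ratings.

```python
import numpy as np
from scipy import stats

# Illustrative 1-5 Likert ratings for one QM-based item; not the study's raw data.
ratings = np.array([5, 4, 3, 4, 5, 2, 4, 3, 5, 4, 3, 4])

# One-sample t-test against the neutral midpoint of 3:
# a significant positive t indicates the item was rated above "neutral".
t_stat, p_value = stats.ttest_1samp(ratings, popmean=3)
print(f"mean = {ratings.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```
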
In addition, it is worth mentioning that some of the items received very high importance ratings from faculty respondents. Having student feedback on clarity of instructions (m = 4.49; st. dev. = 1.04), on clarity of grading policies (m = 4.47; st. dev. = .88), and on course navigation (m = 4.45; st. dev. = .78) were at the top of the importance ratings (see Table 2).

Overall, the observations inspired by the student survey and the statistics obtained from the faculty survey lead to the conclusion that the answer to RQ5, which asked "Can the QM rubric be adapted into a student feedback form for online classes?", is likely a positive one.

How important is feedback from students on the following points, in your opinion?

                            Mean    Std. Deviation
Clarity of instructions     4.49    1.04
Clarity of grading policy   4.47     .88
Course navigation           4.45     .78

Table 2. Highest levels of agreement on questions about using a QM-based rubric as SEI


Discussion

Student evaluation of instruction and instructors has always been a controversial issue. Response rates have declined as universities move to online SEI administration. But prior, in-class administration was rife with problems, including instructors who biased the results by staying in the room during evaluation or by providing pizza and other grade-based treats on or near evaluation day. Especially today, when response rates are low, responses tend to come from students with extreme (usually negative but occasionally positive) positions about the course. That said, faculty and administrators need some method of gauging the effectiveness of class instruction so that it can be improved upon for future courses. SEIs may be the best option in a difficult scenario.

Faculty generally are not thrilled with SEI questions or the way information is gathered. Many have developed unique and personal ways to gather supplemental data to help them improve their courses. It appears from both the student responses and the faculty feedback that utilizing the QM rubric as a supplemental evaluation tool could provide additional, non-duplicative data that can aid an instructor in improving course instruction and outcomes.


Conclusion, Further Discussion, and Suggestions

The purpose of this study was to explore the benefits and drawbacks of using the Quality Matters (QM) rubric as an evaluative tool for online courses. Previous research in this field has particularly focused on the evaluative tools of traditional class settings and how they might be transferred for use in distance learning evaluation. In an effort to extend this literature, this study proposed using the QM rubric, a successful tool with which teachers design online courses, as a means to evaluate the overall effectiveness of the class. The results supported the premise that traditional student evaluations of instruction (SEI) were not adequately designed to serve as the sole means of course evaluation. Feedback from both students and teachers favored applying the QM rubric as an evaluative tool. However, it was determined that neither SEIs nor the QM rubric on their own sufficiently evaluated the qualities of the course. Therefore, using both traditional SEIs and the QM rubric together would compensate for each other's weaknesses and function as the most efficient method of collecting student feedback on the online learning experience. Teachers, administrators, and other scholars of the educational process may use the findings of this study to identify optimal methods of collecting student satisfaction data so that online courses, and their instructors, may routinely be improved, offering the highest levels of education.


References
Abrami, P., d'Apollonia, S., & Rosenfield, S. (1996). The dimensionality of student ratings of instruction: What we know and what we do not. In J. C. Smart (Ed.), Higher education: Handbook of theory and research, volume 11. New York: Agathon Press.

Allen, I. E., & Seaman, J. (2006). Making the grade: Online education in the United States. Needham, MA: Babson Survey Research Group, The Sloan Consortium.

Bangert, A. W. (2004). The seven principles of good practice: A framework for evaluating online teaching. The Internet and Higher Education, 7(3), 217-232.

Beldarrain, Y. (2006). Distance education trends: Integrating new technologies to foster student interaction and collaboration. Distance Education, 27(2), 139-153.

Berk, R. (2013). Face-to-Face versus Online Course Evaluations: A Consumer's Guide to Seven Strategies. Journal of Online Learning and Teaching, 9(1).

Compora, D. P. (2003). Current trends in distance education: An administrative model. Online Journal of Distance Learning Administration, 6(2).

DeBerg, C. L., & Wilson, J. R. (1990). An empirical investigation of the potential confounding variables in student evaluations of teaching. Journal of Accounting Education, 8(1), 37-63.

Erdoğan, M., Uşak, M., & Aydın, H. (2008). Investigating prospective teachers' satisfaction with social services and facilities in Turkish universities. Journal of Baltic Science Education, 7(1), 17-26.

Feenberg, A. (1999). Distance learning: Promise or threat. Crosstalk, 7(1), 12-14.


Fischer, S. R. (2004). History of Writing. Reaktion Books.

Holtfreter, R. E. (1991). Student rating biases: Are faculty fears justified? The Woman CPA, Fall, 56-62.

Husbands, C. T., & Frosh, P. (1993). Students' evaluation of teaching in higher education: Experiences from four European countries and some implications of the practice. Assessment and Evaluation in Higher Education, 18(2), 25-34.

Kearsley, G., & Shneiderman, B. (1998). Engagement theory: A framework for technology-based teaching and learning. Educational Technology, 38(5), 20-23.

Langbein, L. I. (1994). The validity of student evaluations of teaching. Political Science and Politics, September, 545-553.

Loveland, K. A. (2007). Student Evaluation of Teaching (SET) in web-based classes: Preliminary findings and a call for further research. Journal of Educators Online, 4(2).

Loveland, K., & Loveland, J. (2003). Student evaluations of online classes versus on-campus classes. Journal of Business & Economics Research (JBER), 1(4).

Marsh, H. W. (1982). SEEQ: A reliable, valid, and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology, 52(1), 77-95.

Marsh, H. W., & Hocevar, D. (1991). Multidimensional perspective on students' evaluations of teaching effectiveness: The generality of factor structures across academic discipline, instructor level, and course level. Teaching and Teacher Education, 7, 9-18.

Quality Matters Program. (2011). Quality Matters rubric standards 2011-2013 edition.


Palmer, S. (2011, January). An institutional study of the influence of "onlineness" on student evaluation of teaching in a dual mode Australian university. In ASCILITE 2011: Changing demands, changing directions: Proceedings of the Australian Society for Computers in Learning in Tertiary Education Conference (pp. 963-973). University of Tasmania.

Parker, K., Lenhart, A., & Moore, K. (2011). The digital revolution and higher education: College presidents, public differ on value of online learning. Pew Internet & American Life Project.

Richardson, J. T. (2005). Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415.

Rothman, T., Romeo, L., Brennan, M., & Mitchell, D. (2011). Criteria for assessing student satisfaction with online courses. International Journal for e-Learning Security, 1(1-2), 27-32.

Selçuk, G. S., Karabey, B., & Çalışkan, S. (2011). Predicting student satisfaction in physics courses. Buca Eğitim Fakültesi Dergisi, (28), 96-102.

Spencer, K. J., & Schmelkin, L. P. (2002). Student perspectives on teaching and its evaluation. Assessment and Evaluation in Higher Education, 27, 397-409.

U.S. Department of Education, National Center for Education Statistics. (1999). Distance Education at Postsecondary Education Institutions: 1997-98 (NCES 2000-013), by Laurie Lewis, Kyle Snow, Elizabeth Farris, and Douglas Levin. Bernie Greene, project officer. Washington, DC.

U.S. Department of Education, National Center for Education Statistics. (2014). Enrollment in Distance Education Courses, by State: Fall 2012 (NCES 2014-023), by Scott Ginder and Christina Stearns. Richard Reeves, project officer. Washington, DC.

Wilson, R. (1998). New research casts doubt on value of student evaluations of professors. The Chronicle of Higher Education, 44(19), A12-A14.
