Abstract
This paper proposes a new method of collecting and utilizing quality feedback from
students regarding the learning experience of the electronic classroom. The study
begins by reviewing how existing methods for data gathering, also known as student evaluations of instruction (SEI), have been well established and tested in the
traditional class setting, but have not been adequately adapted to the online class
setting. The Quality Matter (QM) rubric is suggested as a supplementary tool in
the information collection process of online classes. Data was collected by survey
from both students and professors of the same institution. The results note both
strengths and weaknesses of each approach, and conclude that the most efficient
system would be to use the QM Rubric as a supplement to the SEI.
Keywords:
IJODE | 15
Volume 1, Issue 1
January 2015
Literature Review
The process of formal education was
first established at the advent of complex
writing systems, namely Egyptian hieroglyphics (Fischer, 2004). Modern technology has
offered great advances to the 21st century's educational system. Fewer
classrooms have chalkboards, while more and more have lecterns equipped with
computers, projectors, visual scanners, and audio systems. One of the greatest applications
of modern technology is that some classes no longer require a classroom at all. Distance
learning offered through online courses is becoming a popular option among college students. Whereas traditional courses have been
well established through trial and evaluation
over time, online courses are only a few decades old and evaluative tools for these classes
have lacked intensive academic criticism. In
this study, an alternative approach is put
forth for the effective evaluation of teaching
in online courses.
Traditional courses consist of the face-to-face transfer of knowledge and instruction from teacher to student. Over the years,
the means of assessing the effectiveness of the
teacher have varied, but one measure has remained
constant over time and is typically the most
heavily weighted: the data gathered from students' evaluations of the course (Erdoğan
et al., 2008; Selçuk, 2011). These data have
been the basis for administrative evaluations
of teaching performance as a determinant in
the awarding of tenure, title promotions, and
pay increases (Loveland, 2007; Palmer, 2011).
Previous research has shown that numerous
variables affect students' overall satisfaction
with a course. Type of class is very important,
as it must reflect the intellectual ability of the
students. Types of courses include lecture,
seminar, lab, and independent study (DeBerg
and Wilson, 1990; Langbein, 1994). Class
size also greatly affects student evaluation responses. Lectures can range up to multiple
hundreds of people while seminars can be as
Results
The first four research questions were
answered using data from the faculty survey.
One third (n = 16) of the faculty who completed the survey were at the instructor rank,
while 18.8% (n = 9) were assistant professors,
29.2% (n = 14) were associate professors, and
14.6% (n = 7) were full professors. Two respondents (4.2%) did not reveal their rank. A
majority of the respondents had at least some
experience with online teaching at the time
of the research. Specifically, 31.3% of the respondents had been teaching online for one to two
years and 37.5% had been teaching online for
three to five years. Moreover, 25% of the re-
Table 1. Lowest importance ratings for QM standards as student feedback items

Item                                  Mean   Std. Deviation
Institutional policies                3.28   1.55
Prerequisite knowledge in the field   3.47*  1.12
Etiquette expectations                3.49*  1.17

* mean statistically significantly different from 3.00 (neutral value) at p < .05
dards are rooted in sound pedagogical principles, which are independent of the medium
used for delivering the course content, online
or traditional.
RQ 5 was further explored with data
from the faculty survey. To be more precise, the
faculty survey asked respondents to report their
rating of the importance of using each standard
in the QM rubric for student feedback. Five-point
Likert-type scales, from 1 (unimportant)
through 3 (neutral) to 5 (extremely important), were
used for these questions. Overall, results indicate that faculty rated all items above the neutral point. In fact, only one of the QM standards
was not statistically different from neutral: On
the average, faculty rated student feedback on
the presence of institutional policies in online
courses at m = 3.28 (st. dev. = 1.55), which is
not significantly different from 3 (t = 1.66; df
= 46; p = n.s.). The items rated next lowest in
importance were prerequisite knowledge in the
field (m = 3.47; st. dev = 1.12) and student support services (m = 3.49; st. dev = 1.17). For each
of these two items a one-sample t-test indicated
that the respective mean was statistically significantly higher than 3 (neutral). Specifically,
for the item prerequisite knowledge in the field
the one-sample t-test resulted in t = 2.88; df =
46; p < .05, and for student support services the
one-sample t-test resulted in t = 2.87; df = 46; p
< .05. These results are synthesized in Table 1.
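The paper does not report the raw rating data, but the one-sample t-tests above (df = 46, implying n = 47 respondents) follow the standard formula t = (m − μ) / (s / √n) with μ = 3 as the neutral value. As an illustrative sketch only, using hypothetical Likert ratings rather than the study's actual data, the computation can be reproduced with Python's standard library:

```python
import math
import statistics

def one_sample_t(ratings, mu=3.0):
    """One-sample t-test of the mean of `ratings` against a
    hypothesized population mean `mu` (here, the neutral point 3)."""
    n = len(ratings)
    mean = statistics.fmean(ratings)
    sd = statistics.stdev(ratings)          # sample standard deviation
    t = (mean - mu) / (sd / math.sqrt(n))   # t statistic, df = n - 1
    return mean, sd, t, n - 1

# Hypothetical five-point Likert ratings from 47 respondents
# (synthetic values, NOT the study's data)
ratings = [5, 4, 3, 4, 2, 5, 3, 4, 1, 5] * 4 + [4, 3, 5, 4, 2, 3, 4]
mean, sd, t, df = one_sample_t(ratings)
print(f"m = {mean:.2f}, st. dev. = {sd:.2f}, t = {t:.2f}, df = {df}")
```

The resulting t value would then be compared against the critical value of the t distribution with 46 degrees of freedom to decide significance at p < .05.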
In addition, it is worth mentioning that
some of the items scored very high importance
ratings from faculty respondents. Having student feedback on clarity of instructions (m =
4.49; st. dev. = 1.04), on clarity of grading policies (m = 4.47; st. dev. = .88), and on course
navigation (m = 4.45; st. dev. = .78) were at
the top of the importance ratings (see Table 2).
Overall, the observations inspired by
the student survey and the statistics obtained
from the faculty survey lead toward the conclusion that the answer to RQ 5, which asked
whether the QM rubric can be adapted into a student
feedback form for online classes, is likely a
positive one.
Item                          Mean   Std. Deviation
Clarity of instructions       4.49   1.04
Clarity of grading policies   4.47   .88
Course navigation             4.45   .78

Table 2. Highest levels of agreement on questions about using a QM-based rubric as SEI
Discussion
Student evaluation of instruction and
instructors has always been a controversial
issue. Response rates have declined as universities move to online SEI administration. Prior
in-class administration, however, was rife with
its own problems, including instructors who biased the
results by staying in the room during the evaluation or by providing pizza and other grade-based incentives on or near evaluation day. Especially today, when response rates are low,
responses tend to come from students with extreme
(usually negative but occasionally positive)
positions about the course. That said, faculty and administrators need some methods of
References
Abrami, P., d'Apollonia, S., & Rosenfield, S. (1996). The dimensionality of student ratings of instruction: What we know and what we do not. In J. C. Smart (Ed.), Higher education: Handbook of theory and research, Volume 11. New York: Agathon Press.
Allen, I. E., & Seaman, J. (2006). Making the grade: Online education in the United States. Needham, MA: Babson Survey Research Group, The Sloan Consortium.
Bangert, A. W. (2004). The seven principles of good practice: A framework for evaluating online teaching. The Internet and Higher Education, 7(3), 217-232.
Beldarrain, Y. (2006). Distance education trends: Integrating new technologies to foster student interaction and collaboration. Distance Education, 27(2), 139-153.
Berk, R. (2013). Face-to-face versus online course evaluations: A consumer's guide to seven strategies. Journal of Online Learning and Teaching, 9(1).
Compora, D. P. (2003). Current trends in distance education: An administrative model. Online Journal of Distance Learning Administration, 6(2).
DeBerg, C. L., & Wilson, J. R. (1990). An empirical investigation of the potential confounding variables in student evaluations of teaching. Journal of Accounting Education, 8(1), 37-63.
Erdoğan, M., Uşak, M., & Aydin, H. (2008). Investigating prospective teachers' satisfaction with social services and facilities in Turkish universities. Journal of Baltic Science Education, 7(1), 17-26.
Feenberg, A. (1999). Distance learning: Promise or threat? Crosstalk, 7(1), 12-14.