Quantitative Assessment Methods

Quantitative methods use numbers for interpreting data (Maki, 2004) and "are
distinguished by emphasis on numbers, measurement, experimental design, and
statistical analysis" (Palomba & Banta, 1999). Large numbers of cases may be
analyzed using quantitative design, and this type of design is deductive in nature,
often stemming from a preconceived hypothesis (Patton, 2002). The potential to
generalize results to broader audiences and situations makes this type of
research/assessment design popular with many. Although assessment can be
carried out with the rigor of traditional research, including a hypothesis and
statistically significant results, this is not a necessary component of
programmatic outcomes-based assessment. It is not essential to have a certain
sample size unless the scope of your assessment is at the institutional level.

A traditionally favored type of research design that has influenced outcomes-based
assessment methodology is quantitative assessment. Quantitative
assessment offers a myriad of data collection tools, including structured
interviews, questionnaires, and tests. In the higher education setting, this type of
design is found in many nationally employed assessment tools (e.g., the National
Survey of Student Engagement, the Community College Survey of Student
Engagement, and the CORE Institute Alcohol and Drug Survey) but can also be
locally developed and used to assess more specific campus needs and student
learning outcomes. When engaging in quantitative methodological design,
sampling, analysis, and interpretation, it is important to ensure that the
individuals involved are knowledgeable about, as well as comfortable with,
engaging in quantitative design (Palomba & Banta, 1999).

At Colorado State University, two primary quantitative assessment methods are
used to examine apartment life on campus. "The Apartment Life Exit Survey is
given to residents as they begin the 'vacate' process from their apartment.
Results are tabulated twice each year, once at the end of fall semester and once
in the summer" (Bresciani et al., in press).

Administrators at Pennsylvania State University originally measured the success
of their newspaper readership program based on satisfaction and use. The
quantitative survey they were using was later revised "to include more detailed
information on students' readership behavior (e.g., how frequently they are
reading a paper, how long, and which sections), students' engagement on
campus and in the community, and their self-reported gains in various outcomes
(e.g., developing an understanding of current issues, expanding their vocabulary,
articulating their views on issues, increasing their reading comprehension)"
(Bresciani et al., 2009). This revision allowed them to use survey methodology
while still measuring the impact of the program on student learning.

CSUS underwent a similar revision process with a locally developed quantitative
survey looking at its new student orientation program. Originally, only student
and parent satisfaction were measured. The survey was later revised to include a
true/false component in the orientation evaluation that used a form of indirect
assessment. In the final revision, a pre- and post-test were administered to
students attending orientation to measure the knowledge gained in the
orientation session (Bresciani et al., 2009).

In addition, a great deal of data already contained in student transactional
systems can be used to assist in the evaluation of programs. Data such as facility
usage, service usage, adviser notations, participation in student organizations,
leadership roles held, and length of community service can all help explain
why outcomes may or may not have been met. For instance, staff at an institution's
counseling service want all students who are treated for sexually transmitted
diseases to be able to identify the steps and strategies to avoid contracting them
before leaving the 45-minute office appointment. When they evaluated this
outcome, however, they learned that only 70% of the students were able to do
so. They also examined their office appointment log and realized that, because
of the high volume of patients, they were able to spend only 27 minutes with
each student on average. This shortfall from the intended time to teach students
about their well-being may explain why the counseling staff's results were lower
than they would have desired.

Qualitative Assessment Methods

According to Denzin and Lincoln (2004), qualitative research is "multimethod in
focus, involving an interpretive, naturalistic approach to its subject matter" (p.
2). Upcraft and Schuh (1996) expand this definition by stating, "Qualitative
methodology is the detailed description of situations, events, people,
interactions, and observed behaviors, the use of direct quotations from people
about their experiences, attitudes, beliefs, and thoughts" (p. 21). Qualitative
assessment is focused on understanding how people make meaning of and
experience their environment or world (Patton, 2002). It is narrow in scope,
applicable to specific situations and experiences, and is not intended for
generalization to broad situations. Different from quantitative research,
qualitative research employs the researcher as the primary means of data
collection (e.g., interviews, focus groups, and observations). Also unlike
quantitative research, the qualitative approach is inductive in nature, leading to
the development or creation of a theory rather than the testing of a
preconceived theory or hypothesis (Patton). It is important to note, then, that
when applying qualitative methodology to outcomes-based assessment, you are
not fully using an inductive approach, because you are using the methodology to
determine whether an intended outcome has been met. However, the
application of the methods themselves can yield very rich findings for outcomes-
based assessment.

Data for qualitative analysis generally result from fieldwork. According to Patton
(2002), during fieldwork a researcher spends a significant amount of time in the
setting that is being investigated or examined. Generally multimethod in focus,
the qualitative fieldwork experience often yields three types of findings:
interviews, observations, and documents.

Each primary type of qualitative data contributes unique and valuable
perspectives about student learning to the outcomes-based assessment process.
When used in combination, they create a more complete or holistic picture of
student learning.


Interviews comprise a number of open-ended questions that yield information
"about people's experiences, perceptions, opinions, feelings, and knowledge"
(Patton, 2002, p. 4). It is common to engage in face-to-face verbal interviews
with one individual; however, interviews may also be conducted with a group
and administered via mail, telephone, or the Web (Upcraft & Schuh, 1996).
Though questions and format may differ, an essential component of any
interview is the "trust and rapport to be built with respondents" (Upcraft &
Schuh, p. 32). Open-ended questions can also be given to students at the
conclusion of a program or an event to receive quick and immediate feedback.
At Widener University, "questions presented before, during, and after the
[student health services] presentations allowed for an interactive experience
and a means to monitor learning progress" (Bresciani et al., in press).