
Interacting with Computers 20 (2008) 524–534

Contents lists available at ScienceDirect

Interacting with Computers


journal homepage: www.elsevier.com/locate/intcom

A web-based programming learning environment to support cognitive development


Wu-Yuin Hwang a, Chin-Yu Wang b,*, Gwo-Jen Hwang c, Yueh-Min Huang d, Susan Huang a

a Graduate School of Network Learning Technology, NCU, Taiwan
b Department of Tourism, Providence University, 200 Chungchi Road, Shalu, Taichung County 43301, Taiwan
c Department of Information and Learning Technology, National University of Tainan, Taiwan
d Department of Engineering Science, National Cheng Kung University, Taiwan

Article history: Received 6 March 2008; received in revised form 26 July 2008; accepted 30 July 2008; available online 12 August 2008.

Keywords: Web-based programming; Digital learning environment; Cognitive development; Teaching/learning strategies

Abstract

Web-based programming has become a popular and vital issue in recent years. The rapid growth of various applications not only demonstrates the importance of web-based programming, but also reveals the difficulty of training relevant skills. The difficulty is owing to the lack of facilities such as online coding, debugging and peer help to assist the students in promoting their cognitive development in web-based programming. To cope with these problems, in this paper, a web-based programming assisted system, "WPAS", is proposed, which is able to support five programming activities with various difficulty levels of cognition based on Bloom's cognitive taxonomy. WPAS provides online coding, debugging and annotation tools to conduct the training and peer assessment for web-based programming. Experimental results of 47 undergraduate students show that the innovative approach is helpful to the students in improving their cognitive development in web-based programming. In addition, according to the results of the questionnaire, most of the participants perceived the ease of use and usefulness of the proposed system. Therefore, this study suggests that teachers could design web-based programming activities supported by the WPAS system to improve students' cognitive development in web-based programming.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

In programming learning, continuous practice is required to ensure that the knowledge is retained (Truong et al., 2003). It has been found that actively and periodically scheduled learning is important for students to attain high levels of achievement (Hwang and Wang, 2004). In the Internet era, a Web-based environment provides a convenient way for language programming. In this circumstance, well-designed programming activities with assisting tools play an important role, and students' cognitive development in programming should be taken into account in the designed activities (Lister and Leaney, 2003). However, most of the existing teaching methods for programming learning are inclined to place emphasis on students' coding skills rather than on their cognitive development in language programming (Buck and Stucki, 2001; Lister and Leaney, 2003).

In programming courses, there are several critical issues to be considered, including the ways to stimulate students' interaction in or after class, methods to enrich students' learning experiences, and facilities to assist students in sharing knowledge with their classmates. Moreover, student learning is a process of interaction between a set of inner experiences of the learners and the environment (Slattery, 1995). In traditional programming courses, problem-solving-based programming is considered to be a promising approach; furthermore, students are often asked to write complete programs to solve problems as soon as possible (Lister, 2001). Nevertheless, researchers have indicated that problem solving is in fact a necessary but not a sufficient criterion for programming (Winslow, 1996; Rist, 1995). The main difficulty faced by novices is expressing problem solutions as programs. Thus the coverage of programming comprehension, and how to apply programming comprehension to generate programs, must remain an important focus. Recently, Robins et al. (2003) have proposed a programming framework for novices which highlights three programming procedures and the corresponding knowledge needed to complete each procedure; that is, the knowledge of planning methods to design a program, the knowledge of a programming language to generate a program, and the knowledge of debugging used to evaluate a program. Therefore, the cognitive development of programming to obtain the above knowledge needs to be considered in designing programming activities.

* Corresponding author. Tel.: +886 4 26328001x13519; fax: +886 4 26530035. E-mail address: wang@pu.edu.tw (C.-Y. Wang).
0953-5438/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.intcom.2008.07.002

In traditional programming learning environments, such as computer classrooms, it is not easy to promote students' cognitive development in programming if learning activities and learning-assisted tools are not well integrated. For example, in addition to generating the whole program, program gap filling and peer program assessment play vital roles in building the programming cognitive development of students. Program gap filling can help students in developing their skills of solving sub-problems such
as variable block definition or control block building; afterwards these skills can be combined to solve the whole problem (Lieberman, 1986). Moreover, peer program assessment is helpful for students in developing high-level cognition for evaluating the quality of programs (Fallows and Chandramohan, 2001). Both program gap filling and peer program assessment are important for cognitive development in programming, but are not easily conducted if no proper assisting tools are provided. The requirement for assistance is especially highlighted in web-based programming because complex considerations of server–client and database environments are usually needed. Thus, it has become an important and challenging issue to design computer-assisted tools that take the cognitive development of programming learning into consideration.

Based upon Bloom's cognitive taxonomy (1956), this study attempts to design a series of web-based programming learning activities to enhance students' cognitive development in web-based programming. Moreover, a Web-based Programming Assisted System, WPAS, has been developed to support the learning activities. From some practical applications of WPAS, it can be seen that the innovative approach is helpful to the students in both the cognitive and affective aspects of learning web-based programming. In detail, after conducting the experiment, the pedagogical findings show that the "program gap filling" activity is more difficult and needs much more consideration in programming learning than "program debugging". The activity of "peer assessment" is found to be the most strongly related to students' learning achievements, and it is the best predictor variable of learning achievements. As for evaluating the WPAS, most of the students are satisfied with the system.

2. Literature review

In this section, the literature on Bloom's taxonomy, as well as its importance in terms of the design of programming learning activities, is discussed; moreover, the TAM (Technology Acceptance Model) (Davis, 1989), which is adopted in this study to evaluate how users come to accept and use the innovative approach, is also introduced.

2.1. Bloom's taxonomy of educational objectives

Bloom (1956) proposed a classification framework for writing educational objectives, which has been adopted in many educational domains. The taxonomy consists of six educational objective levels (Bloom, 1956):

(1) Knowledge: Knowledge is defined as the recalling of previously learned data or information. Knowledge represents the lowest level of learning outcomes in the cognitive domain.
(2) Comprehension: Comprehension is defined as the real understanding of the meaning of a concept, which implies that one can interpret the concept with one's own words.
(3) Application: Application is defined as the ability to use a learned concept in a new situation.
(4) Analysis: Analysis is defined as the ability to separate material or concepts into component parts so that its organizational structure may be understood.
(5) Synthesis: Synthesis is defined as the ability to put parts together to form a new whole.
(6) Evaluation: Evaluation is defined as the ability to judge the value of material for a given purpose.

According to Bloom, each level of cognitive development depends upon the behaviors and the knowledge which are acquired at the previous levels. In programming learning courses, students should be facilitated to develop their cognitive structure of programming in an appropriate way. In other words, learners' cognitive development in programming should be taken into account during the design of programming learning activities. Lister and Leaney (2003) illustrated the following learning activities that students are expected to learn in programming courses:

(1) Knowledge activities, including "memorize", "state", "name" and "recognize". For example, students memorize the elements, syntax, structure and methods of one programming language.
(2) Comprehension activities, including "restate" and "translate". For example, students restate how a program executes.
(3) Application activities, including "calculate", "write" and "solve". For example, students accomplish a specific programming task such as giving partial code or gap filling according to some expressions.
(4) Analysis activities, including "categorize", "differentiate" and "discriminate". For example, students identify whether a complete program is correct or not.
(5) Synthesis activities, including "create", "design" and "plan". For example, students generate a complete program to solve a problem.
(6) Evaluation activities, including "assess", "evaluate" and "judge". For example, students review programs written by other students and give comments; the instructor then grades the comments.

Programming concepts cannot be directly transferred from instructors to students; they must be acquired actively by the students (Ben-Ari, 2001). This research applies several programming learning activities, based on Bloom's taxonomy of cognition and Lister's research, to facilitate students' active learning and continuous practice in an experimental course.

2.2. Programming learning activities

Kolb's Learning Styles Inventory (LSI) suggested a four-phase cycle of learning including concrete experience, reflective observation, abstract conceptualization, and active experimentation (Kolb, 1984). According to Kolb, concrete experience is a good starter during students' learning processes. That is, in programming learning courses, practice is important for improving students' learning. Students should be given enough practice opportunities in an environment where they can receive constructive and corrective feedback (Ben-Ari, 2001).

In programming learning courses, "coding to solve problems" is one of the most common learning activities for practicing programming. However, a single activity is not enough to help students build their cognitive development of programming step by step; that is, multiple activities in proper sequence, from simple to complicated, are needed for practicing programming. According to Affleck and Smith (1999), one of the main difficulties for novice programmers is accessing their prior knowledge and applying it to new problems. Using "fill in the gap" programming exercises is one of the ways to overcome the above problems (Lieberman, 1986). The given code in these "gap filling" exercises is generally well known by students; the gap filling is a challenge which students should overcome by integrating their prior knowledge and the latest learned knowledge. Hence, "fill in the gap" programming exercises help students close the gap between existing and new knowledge (Van Merriënboer and Paas, 1990).

Also, computer programming classes often focus on teaching language syntax, analyzing problems and writing programs to solve problems. Class time has seldom been allocated for debugging practice. However, debugging training is much more important for novice programmers (Lee and Wu, 1999).
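A "fill in the gap" exercise of the kind just described can be sketched concretely. The course examined later in this paper used ASP; the checker below is written in Python purely for illustration, and its template, test cases and scoring rule are invented rather than taken from WPAS:

```python
# Hypothetical sketch of a "fill in the gap" exercise checker. The WPAS
# course used ASP; Python is used here purely for illustration, and the
# template, test cases and scoring rule are invented for the example.
TEMPLATE = """
def total(prices):
    result = 0
    for p in prices:
{gap}
    return result
"""

def check_submission(gap_block, test_cases):
    """Insert the student's block into the template, execute it, and
    score by the fraction of test cases that pass."""
    source = TEMPLATE.format(gap=gap_block)
    namespace = {}
    try:
        exec(source, namespace)        # syntax errors surface here
    except SyntaxError:
        return 0                       # unrunnable submissions earn 0
    fn = namespace["total"]

    def passes(args, want):
        try:
            return fn(args) == want
        except Exception:              # runtime errors count as failures
            return False

    passed = sum(1 for args, want in test_cases if passes(args, want))
    return round(100 * passed / len(test_cases))

# A correct answer fills the gap with the accumulation step.
cases = [([1, 2, 3], 6), ([], 0), ([5], 5)]
print(check_submission("        result = result + p", cases))  # → 100
print(check_submission("        pass", cases))                 # → 33
```

The challenge for the student lies only in the gap; the surrounding code supplies the prior-knowledge context the text describes.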
Moreover, peer assessment has been applied in many different courses. Peer assessment is generally considered to be effective in promoting students' higher cognitive skills (Fallows and Chandramohan, 2001), since students use their knowledge and skills to interpret, analyze and evaluate others' work in order to clarify and correct it (Segers and Dochy, 2001; Ballantyne et al., 2002). The use of peer assessment is claimed to enhance students' evaluative capacities (Fallows and Chandramohan, 2001).

2.3. Criteria for evaluating students' programs

It is an important issue to evaluate and judge students' programs in all kinds of programming courses (Jackson, 1996). The correctness of a program is a basic criterion when a teacher is judging the quality of students' programs. To evaluate the correctness of a program, three kinds of programming errors are classified and employed: syntax errors, logical errors and runtime errors (Jackson, 1996). Syntax errors occur when a programmer misuses the syntax of a programming language. Logical errors (or semantic errors) occur when the logical thinking of a programmer is not correct. Runtime errors occur in many situations; for example, if a programmer does not notice the value domain of a variable, an "overflow/underflow" runtime error could be encountered. The next important criterion for evaluating the quality of students' programs is efficiency. The quality of programs can be finely distinguished by their efficiency when they run correctly. Many factors, like algorithms and data structures, will influence a program's efficiency. The other important programming criterion is programming style (readability). A program with a good programming style is preferable since it is more readable and understandable.

When judging the quality of students' programs, the above three criteria are taken into account in this study. Table 1 lists criteria for evaluating students' programs from the previous related research (Jackson, 1996; Cheang et al., 2003; Sitthiworachart and Joy, 2004).

Table 1
The criteria to evaluate students' programs

Former studies                   Criteria
Jackson (1996)                   Correctness, Style, Efficiency, Complexity, Test data coverage
Cheang et al. (2003)             Correctness, Efficiency, Maintainability
Sitthiworachart and Joy (2004)   Correctness, Quality (comments, variable/function names, structure of the program, indentation of the program)

2.4. Technology acceptance model: TAM

The TAM (Technology Acceptance Model) was developed by Davis (1989) to evaluate how users come to accept and use a technology. Based on user acceptance of the technology, TAM theory proposes perceived usefulness (PU) and perceived ease of use (PEOU) to explain a user's attitude toward a system. As a result of TAM-related research, PU was further employed to study a user's intention to use the system, and it was found that PEOU also had a significant influence on the PU of a user who is currently using or learning an information technology. This research employs TAM theory to explore the "perceived usefulness" and "perceived ease of use" of the WPAS system.

3. Method

Based upon the above literature, a series of programming learning activities is proposed and conducted on the Web with the aim of improving students' learning. The activities include programming concepts testing, program gap filling, program debugging, coding to solve problems and peer assessment, which correspond to the cognitive levels evaluation, synthesis, analysis, application, knowledge and comprehension in Bloom's taxonomy.

3.1. Participants and subject

Forty-seven undergraduate students majoring in Management Information Systems participated in the experiment. Most of the students had basic computer concepts, but had no experience in computer programming. The experimental course, which was carried out from October 2005 to January 2006, was entitled "active server page (ASP) programming" and took place three hours per week in a computer classroom. In addition to the in-class practice, several learning activities were conducted in the WPAS environment as homework. With the help of the WPAS, the researchers were able to observe the learning behaviors of the students and analyze the data at the end of each activity to find some interesting learning phenomena.

3.2. Instructional design and learning activities

Three topics were arranged in the following sequence:

Topic 1: The programming concepts and syntax of ASP.
Topic 2: Web forms, variables, data passing and objects of ASP.
Topic 3: Concepts of database programming and system implementation.

Due to the limitations of the experiment period, this research focused on topics 1 and 2 of the course. Fig. 1 illustrates the procedure of the five activities. The learning activities were conducted in the following sequence: programming concepts testing, program gap filling, and so on. As shown in Fig. 2, these activities correspond to one or more cognitive levels in Bloom's taxonomy, that is, evaluation, synthesis, analysis, application, knowledge and comprehension. Note that knowledge and comprehension were considered together and corresponded to the first activity, programming concepts testing, in this research.

Fig. 1. The procedure of the programming learning activities (for each of Topic 1 and Topic 2, in sequence: programming concepts testing, program block filling, program debugging, coding to solve problems, peer assessment).
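The distinction among the three error classes used for the correctness criterion (Section 2.3) can be made concrete. The snippet below is in Python rather than the course's ASP, purely as an illustration, and the buggy averaging function is invented for the example:

```python
# 1. Syntax error: the source cannot even be parsed.
try:
    compile("if x > 0 print(x)", "<student>", "exec")  # missing colon
except SyntaxError:
    print("syntax error caught")

# 2. Logical (semantic) error: the program runs but computes the wrong
#    answer; here the divisor is off by one.
def average_wrong(values):
    return sum(values) / (len(values) + 1)  # bug: should be len(values)

print(average_wrong([2, 4, 6]))  # prints 3.0; the correct average is 4.0

# 3. Runtime error: syntactically and logically plausible code that fails
#    on a particular input, analogous to the overflow/underflow example.
def average(values):
    return sum(values) / len(values)

try:
    average([])  # division by zero at run time
except ZeroDivisionError:
    print("runtime error caught")
```

Under the grading scheme described later, only the first case forfeits all correctness points outright; the other two cost points per detected bug.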
Fig. 2. The five programming learning activities corresponding to Bloom's taxonomy of educational objectives (peer assessment: evaluation; coding to solve problems: synthesis; program debugging: analysis; program block filling: application; programming concepts testing: knowledge and comprehension).

3.3. Design and development of the WPAS system

The WPAS system was designed and developed to support web-based programming as mentioned above. The research data, such as the programs and the homework uploaded by the students, were collected and stored in the WPAS database. Moreover, students were encouraged to make annotations on the learning materials (e.g., the demonstrated programs introduced by the teachers). The annotations and notes made by the students were helpful to them while they were reviewing the materials after class. Fig. 3 illustrates an overview of the WPAS system architecture. Three main tools were provided by the system:

Fig. 3. Overview of the WPAS system architecture.

3.3.1. Online coding tool
An online coding tool was utilized in the "program gap filling", "program debugging" and "coding to solve problems" activities. For example, during the "program gap filling" activity, the students were asked to finish incomplete programs by writing code in one assigned blank block. Once students finished the program gap filling and submitted their program within the online WPAS environment, the system would try to execute the program and instantly show the execution results with the source code. If errors occurred, students could modify their source code immediately online. Fig. 4 shows a snapshot of "program gap filling". Also, with the online coding tool, students were allowed to upload programs for practice, and were encouraged to do so. After the programs were uploaded, they were depicted as a list of hyperlinks in WPAS. If a hyperlink was clicked, the corresponding program would be executed, and the execution results as well as the source code were depicted simultaneously, as shown in Fig. 5. This mechanism enabled the students to modify and improve their web programs more conveniently and efficiently.

3.3.2. Annotation tool for assessing the programs
Students were encouraged to make annotations on the supplemental materials or programs uploaded by any other student. In this way, they could share their knowledge, comments and suggestions with each other. Fig. 6 illustrates annotations made by different students on a single program. The functionality of the annotation tool included highlighting, underlining and comment boxes (Hwang et al., 2007). Also, the annotation tool provided in WPAS was utilized in the "programming concepts testing" and "peer assessment" activities. During the "programming concepts testing" activity, students were asked to use the annotation tool to give their answers right beside the web-based assignments. As for "peer assessment", students were asked to review the programs
uploaded by others and give comments by using the annotation tool. Fig. 7 shows some annotations (including highlighting and textual annotation) added by students during the "peer assessment" activity.

Fig. 4. Using the online coding tool in the "program gap filling" activity.

Fig. 5. Illustration of program execution and the results.

3.3.3. Searching for key words in source code
A keyword-searching functionality was also provided in WPAS, so that students could easily find related programs as needed.

3.4. The grading criteria of the programming learning activities

The achievements of the programming learning activities were graded based on the following criteria or assessment approaches:

3.4.1. The grading criterion of "programming concepts testing"
A test that included open and closed questions was conducted, and was assessed manually by the teachers. The grading criterion of the test was simply the correctness of the answer to each question.

3.4.2. The grading criteria of "program gap filling", "program debugging" and "coding to solve problems"
This research adopted three criteria, that is, "correctness", "efficiency" and "programming style" (as described in Section 2.3), to evaluate the assignments for the "program gap filling", "program debugging" and "coding to solve problems" learning activities. These criteria were announced before conducting each learning activity, so that the students could understand how their homework would be graded. The students' performances in the three activities showed their cognitive levels of programming at Bloom's application, analysis and synthesis levels, respectively. The details of the three criteria are described as follows:

• Correctness: The teacher judged the correctness of each program by executing and testing it. If a program was completely correct, 100 points were given for the correctness criterion. If the result was partially correct, 5 points were deducted per "bug". On the other hand, if the program could not be executed due to syntax errors, zero points were given.

• Efficiency: In order to evaluate the efficiency of students' programs, the time complexities of the programs were evaluated (Cormen et al., 2001). The time complexity of a program is the number of steps that it takes to solve an instance of the problem, as a function of the size of the input. If the time complexity (in Big O notation) of a student's program was as good as or better than that of the program provided by the teacher, then the student would get 100 points for the efficiency criterion; otherwise, a lower score was given.
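The correctness rule above translates directly into code. The following is a minimal sketch; the function name is hypothetical, and flooring the score at zero after twenty deductions is an assumption the paper does not state:

```python
def correctness_score(num_bugs, has_syntax_error=False):
    """Correctness criterion from Section 3.4.2: a fully correct program
    earns 100, each bug costs 5 points, and a program that cannot be
    executed due to syntax errors earns 0."""
    if has_syntax_error:
        return 0
    return max(0, 100 - 5 * num_bugs)   # floor at 0 is an assumption

print(correctness_score(0))                          # → 100
print(correctness_score(3))                          # → 85
print(correctness_score(3, has_syntax_error=True))   # → 0
```

Note the asymmetry the rule encodes: a single syntax error is penalized more heavily than many logical or runtime bugs, which matches the paper's emphasis on producing an executable program first.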

Fig. 6. A snapshot of students’ annotations in a program taught by teachers.

Fig. 7. A snapshot of students’ ‘‘peer assessment” in a program.
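Peer-assessment comments such as those shown in Fig. 7 were graded on five quality levels by three teaching assistants (Section 3.4.3), and the reliability of that scoring was checked with the Kendall coefficient of concordance reported in Table 2. The standard formula, W = 12S / (m^2 (n^3 - n)), can be computed directly for complete, untied rankings; this is a textbook sketch, not the study's code:

```python
# Standard Kendall's W for complete rankings without ties; this is a
# textbook reimplementation for illustration, not the study's own code,
# and it omits the correction for tied ranks.
def kendalls_w(ratings):
    """ratings[r][i] is rater r's rank for item i; returns W in [0, 1]."""
    m = len(ratings)                     # number of raters
    n = len(ratings[0])                  # number of items ranked
    totals = [sum(r[i] for r in ratings) for i in range(n)]
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)   # spread of the rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three raters in perfect agreement over four items give W = 1.0.
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

W near 1 indicates that the raters rank the items almost identically; the W of .905 reported in Table 2 is therefore read as strong agreement among the three assistants.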

• Programming style: The students were taught and encouraged to write their programs using an appropriate programming style, for example, writing program statements with a clear structure, adding comments on each block of program code and using library functions. Programs with better programming style were graded with higher scores.

3.4.3. The grading criteria of "peer assessment"

Five quality levels were employed to evaluate peer-assessment quality: "no comment", "nonsense comments", "rough comments", "meaningful descriptions and explanations about the assigned program" and "meaningful suggestions about the assigned program". Based on these five quality levels, three teaching assistants reviewed all of the comments that were contributed by the students during the peer-assessment activity. Each student's comments on the assigned programs were classified into one of the five levels, and were then graded with the corresponding score. The reliability of the scoring was investigated with the Kendall coefficient of concordance; as shown in Table 2, the value of Kendall's W is sufficiently concordant. The above criteria were employed to measure the quality of each occasion on which a student acted within each programming activity, and the summation of a student's scores over all occasions in an activity represents that student's achievement in the activity.

3.5. Research structure and research variables

The research structure is illustrated in Fig. 8. During the experiment, all students learned within the same learning period, were taught by the same teacher, and utilized the same WPAS system
with the same content provided in the courses. That is, some variables were controlled. In Fig. 8, a bidirectional dotted-arrow line shows the correlation between the two connected variables. The proposed learning activities were investigated to show their influence on programming learning achievement. The directional solid-arrow line means regression analysis was used to get the prediction of learning achievement from the scores of the five learning activities. The independent variables of this research include the scores for "programming concepts testing", "program gap filling", "program debugging", "coding to solve problems" and "peer assessment". The dependent variable is learning achievements. The score of "programming concepts testing" stands for the score earned by the students in the activity of "programming concepts testing", the score of "program gap filling" stands for the score earned by the students in the activity of "program gap filling", and so on. Moreover, learning achievements stands for the post-test score earned by the students in the final exam on completion of the course.

Table 2
The result of the Kendall coefficient of concordance

Kendall's W    Chi-square    df    Significance
.905           119.420       44    .000

4. Results and discussion

4.1. Correlation between each programming learning activity

Pearson correlation was utilized to measure the relationships between students' scores in each programming learning activity. As shown in Table 3, there exist significant correlations between any two pairs of students' cognitive achievements in the programming learning activities. The highest coefficient (.608, between the coding to solve problems activity and the peer assessment activity) revealed that the ability of coding to solve problems was strongly associated with that of assessing programs. This finding was also explored deeply via the interviews. From the student interviews it was found that the "coding to solve problems" and "peer assessment" activities facilitated them in integrating their existing and new knowledge to solve problems, as well as to evaluate other programs, which both belong to the high level of cognition for learning programming.

4.2. The relationship among the scores of the five programming learning activities and learning achievements

Simple regression analysis was utilized to explore the correlation between the scores of the five programming learning activities and learning achievements. According to the results shown in Table 4, except for "concepts testing", the scores of the "program gap filling", "program debugging", "coding to solve problems" and "peer assessment" activities were significantly positively correlated with the students' learning achievements. Moreover, the obtained R value of "peer assessment" is .615, which is the highest. This reveals that "peer assessment" was the most critical activity related to learning achievements.

4.3. Multiple regressions between the scores of the five programming learning activities and learning achievements

Multiple regressions were used to further study the prediction of learning achievements from the scores of the five programming learning activities. The results are shown in Tables 5 and 6. In Table 5, the R-square value (R² = .466, F = 6.976, p = 0.000) indicates that the whole predictability is 46.6%; that is, the students' scores for the five programming learning activities can be used to predict 46.6% of their learning achievements. As the results of the multiple regressions in Table 6 show, the standardized coefficient β of "peer assessment" was .457 (t = 2.869, p = 0.007), which was the highest of all. The results reveal that the students' peer assessment activity score is the best predictor of their learning achievements. That is, the better the students performed in the peer assessment activity, the higher their learning achievements. Note that although the standardized coefficient β of "programming concepts testing" is .268 in Table 6 and its R value is .279 in Table 4, we cannot infer that "programming concepts testing" is of no help or has a negative impact on students' learning achievements, because the correlation between the score of "programming concepts testing" and learning achievements is positive.

Fig. 8. Research structure (control variables: learning period and content of curriculum; the scores of the five activities, from peer assessment down to programming concepts testing, are related to learning achievements).
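The simple regressions of Section 4.2, summarized in Table 4, each relate one activity score to learning achievement. A minimal ordinary-least-squares sketch with invented data (not the study's) shows how the slope and R-square values are obtained:

```python
# Ordinary least squares for one predictor, as in the simple regressions
# of Table 4. The data below are invented for illustration and are not
# the study's; only the method is the point.
def simple_regression(x, y):
    """Fit y = a + b*x by least squares and return (b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                                  # slope
    ss_tot = sum((yi - my) ** 2 for yi in y)       # total variance
    ss_res = sum((yi - (my + b * (xi - mx))) ** 2  # unexplained variance
                 for xi, yi in zip(x, y))
    return b, 1 - ss_res / ss_tot

# Perfectly linear toy data: the activity score explains all variance.
b, r2 = simple_regression([60, 70, 80, 90], [65, 75, 85, 95])
print(b, r2)  # → 1.0 1.0
```

The R-square of a fit, read as the proportion of variance explained, is exactly the quantity behind the statement that the five activity scores jointly predict 46.6% of learning achievement.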


Table 3
Pearson correlation between students' scores for the five programming activities

Pearson correlation            Programming concepts testing   Program gap filling   Program debugging   Coding to solve problems   Peer assessment
Programming concepts testing   1                              .386**                .662**              .577**                     .493**
Program gap filling            .386**                         1                     .351*               .532**                     .548**
Program debugging              .662**                         .351*                 1                   .496**                     .397**
Coding to solve problems       .577**                         .532**                .496**              1                          .608**
Peer assessment                .493**                         .548**                .397**              .608**                     1

* p < 0.05. ** p < 0.01.

Table 4
The results of simple regression between the scores for the five programming learning activities and learning achievements

Variable                       R        R²     Standardized coefficient β   t         p
Programming concepts testing   .279     .078   .279                         1.904     0.064
Program gap filling            .488**   .238   .488                         3.665**   0.001
Program debugging              .387*    .150   .387                         2.752*    0.009
Coding to solve problems       .501**   .251   .501                         3.794**   0.000
Peer assessment                .615**   .378   .615                         5.114**   0.000

* p < 0.05. ** p < 0.01. Dependent variable: learning achievements.
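The collinearity screen used in Section 4.3 relies on variance inflation factors; for two predictors the VIF reduces to 1 / (1 - r^2), where r is their Pearson correlation. A minimal sketch with invented data and hypothetical helper names:

```python
# Collinearity screen via the variance inflation factor. For two
# predictors, VIF = 1 / (1 - r**2), where r is the Pearson correlation.
# Data and helper names are invented for illustration; Neter et al.
# (1985) treat VIF < 10 as indicating no problematic collinearity.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def vif_two_predictors(x1, x2):
    r = pearson_r(x1, x2)
    return 1 / (1 - r ** 2)

print(vif_two_predictors([1, 2, 3, 4], [2, 1, 4, 3]))
```

With these weakly related toy predictors the VIF is about 1.56, far below the threshold of 10 applied to the study's predictors (whose VIF values lie between 1.581 and 2.151).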

investigate whether the above five activities were sufficiently independent, collinearity was considered in multiple regressions. According to statistics theory, the VIF (variance inflation factor) measures how much the variance of the standardized regression coefficient β is inflated by collinearity. According to Neter et al. (1985), there is no collinearity between the predictor variables if the VIF value is less than 10. In Table 6, all VIF values are between 1.581 and 2.151. This means that there is no collinearity between the scores of the five programming learning activities. That is, each programming learning activity had its own effectiveness on learning achievements.

4.4. The results of independent sample T-tests: performances of high and low achievement groups in the five programming learning activities

The experimental class was further divided into a high achievement group and a low achievement group according to their post-test scores (i.e., their learning achievements). The students whose post-test score was in the top 27% belonged to the high achievement group, while those whose post-test score was in the bottom 27% were in the low achievement group. As shown in Table 7, the scores of the five programming learning activities of the high achievement group were all significantly higher than those of the low achievement group in each activity. That is, the better the students performed in each learning activity, the higher the learning achievements. Note that ‘‘program gap filling” (p = .002) is more significant than ‘‘program debugging” (p = .023), even though ‘‘program debugging” is supposed to be a higher cognition activity in programming learning. This finding was also supported by the values of R² shown in Table 4 (the R² value of ‘‘program gap filling” is larger than that of ‘‘program debugging”). This was an interesting phenomenon worth studying further by interviewing some students in both the high and low achievement groups after the experiment. From the interviews with the students in the high achievement group, it was found that the ‘‘program gap filling” activities, which asked the students to provide a block of program codes to work with the given codes, were harder than the ‘‘program debugging” activities. Also for the students in the low achievement group, the ‘‘program gap filling” activities were quite difficult in comparison with the ‘‘program debugging” activities. Therefore, in arranging ‘‘program gap filling” activities, longer practice time and more support from the teachers are needed. On the other hand, the significance of the independent sample t-test in ‘‘program debugging” is .023. This significance is weaker than for the other activities, and the standard deviation of the low achievement group was obviously high (SD = 28.019). Two reasons were found to cause this result, namely, the number of ‘‘program debugging” assignments was insufficient and the ‘‘program debugging” assignments were comparatively easy for some students.

Table 5
The summary of the model in multiple regressions

Model  R     R²    Adjusted R²  F      p
1      .682  .466  .397         6.796  0.000

4.5. Analysis of the questionnaire

Table 6
The coefficients of multiple regressions between the scores for the five programming learning activities and learning achievements

Model                         Unstandardized coefficient b  Std. coefficient β  p      VIF
Programming concepts testing  .285                          .268                .127   2.151
Program gap filling           .242                          .165                .268   1.581
Program debugging             .146                          .240                .140   1.853
Coding to solve problems      .115                          .171                .316   2.064
Peer assessment               4.836                         .457                .007*  1.855

* p < 0.05. Dependent variable: learning achievements.
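The VIF values in Table 6 can be computed directly from the definition: regress each predictor on the remaining predictors and take 1/(1 − R²). A sketch under that definition (the data below are synthetic; the study’s raw scores are not available):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining predictors (with an intercept).
    Values below 10 are commonly taken to indicate no serious
    collinearity (the Neter et al. rule of thumb cited above).
    """
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add an intercept column
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()  # R^2 of the auxiliary regression
        vifs.append(1.0 / (1.0 - r2))
    return vifs
```

Independent predictors give VIFs near 1 (as in Table 6, where all values fall between 1.581 and 2.151), while near-duplicate predictors inflate the VIF far past the threshold of 10.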

Table 7
The results of independent sample T-tests: average scores of high and low achievement groups for the five programming activities

Activity                      Group  N   Average score  Std. deviation  t        p
Programming concepts testing  H      13  86.782         8.626           2.608    .017
                              L      13  74.625         14.423
Program gap filling           H      13  82.038         9.177           3.587**  .002
                              L      13  70.692         6.768
Program debugging             H      13  89.653         7.738           2.562*   .023
                              L      13  69.000         28.019
Coding to solve problems      H      13  79.538         11.091          4.188**  .000
                              L      13  52.807         20.165
Peer assessment               H      13  4.076          .734            8.654**  .000
                              L      13  1.641          .700

* p < 0.05.
** p < 0.01. H, high achievement group; L, low achievement group.
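The comparisons in Table 7 are independent-samples t-tests with 13 students per group. As a sketch, assuming the common equal-variance (pooled) form — the paper does not state whether a Welch correction was applied, which could matter for ‘‘program debugging”, where the two groups’ standard deviations differ sharply:

```python
from math import sqrt

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic.

    Returns (t, df) for samples a and b, assuming equal
    population variances (the classic two-sample t-test).
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)  # pooled variance estimate
    t = (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```

With 13 students per group, each test in Table 7 has 24 degrees of freedom.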

To investigate the students’ viewpoints about WPAS and the five programming learning activities, the students were asked to fill out a questionnaire, which included three dimensions: perceived ease of use (PEOU) of WPAS, perceived usefulness (PU) of WPAS, and the PU of the five programming learning activities. The PEOU and PU dimensions were based on the Technology Acceptance Model (TAM).

4.5.1. Reliability and validity of the questionnaire
The questionnaires were given to 47 students in the experimental class and 46 valid answer sheets were obtained. The questionnaire consisted of 18 test items using a five-point Likert scale. The researchers utilized Cronbach’s α analysis to evaluate the internal consistency of each dimension of the questionnaire. As shown in Table 8, the Cronbach’s α values in all dimensions were higher than .80, and hence the questionnaire was considered to be highly reliable (Carmines and Zeller, 1979). Also, expert validity was used to evaluate the validity of the questionnaire. During the questionnaire design phase, all of the items were verified and validated by domain experts, and some ambiguous or unsuitable questions were modified or removed accordingly.

Table 8
Questionnaire dimensions and the Cronbach’s α values

Dimension                                                         Cronbach’s α value
Perceived ease of use of WPAS                                     .8813
Perceived usefulness of WPAS                                      .9152
Perceived usefulness of the five programming learning activities  .9088
Total Cronbach’s α value                                          .9500

4.5.2. Results of questionnaire analysis
With respect to the perceived ease of use shown in Table 9, most of the students thought that the annotation tool and the online coding tool in WPAS were easy to use. They also indicated that the operations of the two tools could be learned quickly. With respect to the perceived usefulness listed in Table 10, the average ratings of items 8 and 10 were comparatively higher than those of the other items. Also, 15 students chose ‘‘strongly agree” for item 8. Thus, most of the students thought that the ‘‘online coding and execution” function provided in WPAS was useful for their programming learning. In item 9, 38 students (10 ‘‘strongly agree” and 28 ‘‘agree”) agreed that the online coding tool could increase their learning efficiency. Moreover, according to the results of item 11, the students thought it was helpful to learn Web-based programming with WPAS. Finally, most of the students were satisfied with the WPAS system overall (item 12). Summarizing the results of perceived ease of use and perceived usefulness, it can be seen that the students highly accepted the use of WPAS. As for the perceived usefulness of the five proposed activities shown in Table 11, most of the students thought that the four programming learning activities other than ‘‘peer assessment” could respectively improve their programming abilities (the average values of items 13, 14, 15 and 16 were all higher than 3.8). For ‘‘peer assessment”, one point is worth mentioning; that is, the average rating for item 17 was 3.50, the lowest in Table 11. It was found that 15 students chose ‘‘unable to answer” for that item, indicating that peer assessment was too difficult for some students to give meaningful suggestions concerning others’ programs. Thus, the teacher might need to give more support to those students who did not have high achievements in peer assessment. Finally, most of the students thought that, on the whole, the five programming learning activities could help promote their web-based programming concepts and abilities.

4.6. The results of the in-depth interviews

Eight students were interviewed by the researchers after conducting the experiment. Three students were from the high achievement group and the others from the low achievement group. These students gave some in-depth and interesting feedback and suggestions as follows:

(1) Regarding the five programming learning activities, the interviewees thought that the learning design was well-structured and ordered. Also, those few students who had previously taken programming courses thought that the learning activities in this course were more interesting than those in the other courses. For example, two students shared the same feeling:
‘‘Compared with the programming course I had taken before, I am interested in the five programming learning activities in this class. I do not feel bored during the learning process. Various learning activities can train my thinking. They are helpful for me...”
(2) Students would like to have more practice opportunities immediately after lecturing in class. As some interviewees responded:
‘‘I need more time to practice after lectures in class...”
‘‘I think if the teacher can give us more time to do practice in class, we can learn better...”
‘‘During practice, I may encounter some problems if I don’t understand enough; then I can ask and get help from the teacher and classmates soon in class. This will improve learning a lot...”
(3) The program gap filling activity created more challenges than the program debugging. As four students in the low achievement group responded:
‘‘The program-gap-filling assignments were difficult. Even though I found several methods to solve the problem, I did not know which should be filled in the block...”
‘‘I think the program-gap-filling activity is more difficult than the program-debugging activity...”
(4) Through the peer assessment activity, logical thinking will be stimulated, and programming skills will be enhanced. As one student in the low achievement group mentioned:

Table 9
Perceived ease of use

# Question SA A U D SD Average
1 I thought that I could easily finish the ‘‘programming concepts testing” assignments with the 11 31 2 2 0 4.09
annotation tool 23.9% 67.4% 4.3% 4.3% 0%
2 I thought that I could easily evaluate programs with the annotation tool 11 31 1 3 0 4.09
23.9% 67.4% 2.2% 6.5% 0%
3 I thought that I could easily finish the ‘‘program gap filling”, ‘‘program debugging” and ‘‘coding to 8 31 4 3 0 3.96
solve problems” assignments with the online coding tool 17.4% 67.4% 8.7% 6.5% 0%
4 I can proficiently use the annotation tool to finish assignments soon 6 33 6 0 1 3.91
13% 71.7% 13% 0% 2.2%
5 I can proficiently use the online coding tool to finish the programming assignments soon 5 33 7 0 1 3.87
10.9% 71.7% 15.2% 0% 2.2%

SA, strongly agree; A, agree; U, unable to answer; D, disagree; SD, strongly disagree.
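The ‘‘Average” column in Tables 9–11 is the mean of the five-point Likert codes (SA = 5 down to SD = 1) weighted by the response counts; the reported averages are consistent with ‘‘unable to answer” being coded as the midpoint 3. As a sketch, using the counts reported for item 2 of Table 9:

```python
def likert_average(counts):
    """Mean Likert rating from response counts.

    counts maps the five response categories to their
    frequencies; SA..SD are coded 5..1, with the middle
    category ("unable to answer") coded 3.
    """
    codes = {"SA": 5, "A": 4, "U": 3, "D": 2, "SD": 1}
    total = sum(counts.values())
    return sum(codes[c] * n for c, n in counts.items()) / total

# Item 2 of Table 9: 11 SA, 31 A, 1 U, 3 D, 0 SD over 46 respondents
avg = likert_average({"SA": 11, "A": 31, "U": 1, "D": 3, "SD": 0})
# 188/46 = 4.087, which rounds to the reported 4.09
```

The same computation reproduces the 3.50 average of item 17, where 15 ‘‘unable to answer” responses pull the mean toward the midpoint.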

Table 10
Perceived usefulness

# Question SA A U D SD Average
6 I thought that the annotation functionality was useful for me to learn programming 13 27 2 4 0 4.07
28.3% 58.7% 4.3% 8.7% 0%
7 I thought that the annotation functionality was useful to finish the ‘‘peer assessment” 6 30 8 2 0 3.87
13% 65.2% 17.4% 4.3% 0%
8 I thought it was useful that the system can automatically execute programs and show the results 15 29 0 2 0 4.22
32.7% 63% 0% 4.3% 0%
9 I thought that using online coding tool could improve my learning efficiency 10 28 6 2 0 4.00
21.7% 60.9% 13% 4.3% 0%
10 I thought that the functionality of modifying programs was helpful in the ‘‘program gap filling”, 10 33 2 1 0 4.13
‘‘program debugging” and ‘‘coding to solve problems” activities 21.7% 71.7% 4.3% 2.2% 0%
11 On the whole, I thought that it was helpful to learn ASP programming with WPAS 9 32 4 1 0 4.07
19.6% 69.6% 8.7% 2.2% 0%
12 On the whole, I was satisfied with WPAS 5 35 5 1 0 3.96
10.9% 76.1% 10.9% 2.2% 0%

SA, strongly agree; A, agree; U, unable to answer; D, disagree; SD, strongly disagree.

Table 11
Perceived usefulness of the five programming learning activities

# Question SA A U D SD Avg.
13 I thought that I could get help from the content of the materials during ‘‘program concepts testing” 3 39 4 0 0 3.98
activities 6.5% 84.8% 8.7% 0% 0%
14 I thought that it was helpful to apply my knowledge during ‘‘program gap filling” activities 5 34 3 3 1 3.85
10.9% 73.9% 6.5% 6.5% 2.2%
15 I thought that my ability in programming could be improved during ‘‘program debugging” 6 31 6 2 1 3.83
activities 13% 67.4% 13% 4.3% 2.2%
16 I thought that my skill in programming could be improved during ‘‘coding to solve problems” 7 31 5 2 1 3.89
activities 15.2% 67.4% 10.9% 4.3% 2.2%
17 I thought that I could improve my ability in programming evaluation and giving suggestions 4 22 15 3 2 3.50
during ‘‘peer assessment” 8.7% 47.8% 32.7% 6.5% 4.3%
18 On the whole, I thought that teaching with the five programming learning activities could help me 4 35 6 1 0 3.91
learn web-programming 8.7% 76.1% 13% 2.2% 0%

SA, strongly agree; A, agree; U, unable to answer; D, disagree; SD, strongly disagree.

‘‘Although I often did not fully understand others’ programs in the peer assessment activity, I think that the activities will do me good.”
Also, three students in the high achievement group gave the following feedback:
‘‘I like to do peer-assessment...”
‘‘I can review and check my programming concepts when I assess my classmates’ programs...”
‘‘I suggest that the teacher provide more peer assessment activities for us...”
‘‘Seeing and evaluating others’ programs can increase my programming experience...”
(5) The interviewees suggested that more ‘‘coding to solve problems” activities should be added. As two interviewees responded:
‘‘I think the arranged time in the ‘‘coding to solve problems” activity should be extended, because it was really the opportunity for me to integrate my knowledge...”
‘‘If the frequency of the ‘‘coding to solve problems” activity can be increased, I think my programming skills will be better...”

5. Conclusions and discussion

This study proposes an innovative approach for the cognitive development of programming learning. A Web-based Programming Assisted System (WPAS) that supports five programming learning activities, from simple to complicated, has been developed based on the approach. The motivation of this study was to provide appropriate activities and sufficient practice to students, and to help them develop their cognition in programming learning. From the experimental results of a practical programming course, several interesting and important findings were derived. First, the students’ scores in each learning activity were positively related with the score in the next activity, which implies that a well-structured

and ordered learning procedure which takes cognitive development into consideration is vital for programming learning. Therefore, teachers might need to pay more attention to the development of cognition when conducting programming learning activities. Through well-structured and well-ordered learning activities, students can easily build their knowledge step by step and will not feel frustrated during their programming learning process.

Second, the results of multiple regression analysis showed that the students’ peer assessment activity score was the best predictor variable for learning achievements. That is, peer assessment was the learning activity most strongly related to the students’ learning achievements. Therefore, in designing learning activities for programming training courses, peer assessment, a programming learning activity with high-level cognition, needs to be taken into account more. Also, it would be better for teachers to give more support to those students who have difficulty in doing such a high-level activity.

Third, the questionnaire results regarding attitudes towards the value of peer assessment are fascinating as they directly contradict the performance data. That is, although some of the students earned a low score in the peer assessment activity, they thought peer assessment was useful for their programming learning. Indeed, with the attractive annotation tools provided by WPAS, students’ motivation could be enhanced because they can exchange programming skills and concepts through the comments made with the annotation tools. Also, ‘‘peer assessment” encouraged students to enter into discussion and involved a range of social variables which could account for these findings. With these tools, students would be facilitated to learn and help one another more outside of class, and this indirect effect could be responsible for improving their learning.

Finally, although well-designed learning activities and corresponding facilities such as WPAS are helpful to the students in developing the cognition of programming, the effectiveness of the WPAS in terms of students’ programming learning needs to be further investigated. A ‘‘pretest–posttest two-group quasi-experiment” will be carried out in the near future to perform such an evaluation. In the future experiment, a control group without WPAS and an experimental group with WPAS will participate. The effectiveness of the WPAS will be revealed in this future research. Also, baseline measures of programming knowledge will be taken to see whether low achiever gains are greater or less than high achiever gains.

Acknowledgements

The authors acknowledge the anonymous reviewers whose suggestions and comments were helpful in the improvement of this paper.

References

Affleck, G., Smith, T., 1999. Identifying a need for web-based course support. In: Proceedings of the Conference of the Australasian Society for Computers in Learning in Tertiary Education, Brisbane, Australia, Online.
Ballantyne, R., Hughes, K., Mylonas, A., 2002. Developing procedures for implementing peer assessment in large classes using an action research process. Assessment & Evaluation in Higher Education 27 (5), 427–441.
Ben-Ari, M., 2001. Constructivism in computer science education. Journal of Computers in Mathematics and Science Teaching 20 (1), 45–73.
Bloom, B.S., 1956. Taxonomy of Educational Objectives: Handbook I: Cognitive Domain. Longman, New York.
Buck, D., Stucki, D., 2001. JKarelRobot: a case study in supporting levels of cognitive development in the computer science curriculum. In: Proceedings of the SIGCSE Technical Symposium on Computer Science Education, Charlotte, NC, USA. ACM Press, pp. 16–20.
Carmines, E.G., Zeller, R.A., 1979. Reliability and Validity Assessment. Sage University Paper 17. Sage Publications, Beverly Hills.
Cheang, B., Kurnia, A., Lim, A., Oon, W.-C., 2003. On automated grading of programming assignments in an academic institution. Computers & Education 41 (2), 121–131.
Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C., 2001. Introduction to Algorithms, second ed. MIT Press and McGraw-Hill, Boston.
Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13 (3), 319–340.
Fallows, S., Chandramohan, B., 2001. Multiple approaches to assessment: reflections on use of tutor, peer and self assessment. Teaching in Higher Education 6 (2), 229–246.
Hwang, W.Y., Wang, C.Y., 2004. A study on learning time pattern in asynchronous learning environments. Journal of Computer Assisted Learning 20 (4), 292–304.
Hwang, W.Y., Wang, C.Y., Sharples, M., 2007. A study of multimedia annotation of web-based material. Computers & Education 48 (4), 680–699.
Jackson, D., 1996. A software system for grading student computer programs. Computers & Education 27 (3), 171–180.
Kolb, D.A., 1984. Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Englewood Cliffs, NJ.
Lee, G.C., Wu, J.C., 1999. Debug It: a debugging practicing system. Computers & Education 32 (2), 165–179.
Lieberman, H., 1986. An example based environment for beginning programmers. Instructional Science 14 (3), 277–292.
Lister, R., 2001. Objectives and objective assessment in CS1. In: Proceedings of SIGCSE 2001. ACM Press.
Lister, R., Leaney, J., 2003. First year programming: let all the flowers bloom. In: Proceedings of the 5th Australasian Computer Education Conference.
Neter, J., Wasserman, W., Kutner, M.H., 1985. Applied Linear Statistical Models: Regression, Analysis of Variance, and Experimental Designs. Richard D. Irwin, Homewood, IL.
Rist, R.S., 1995. Program structure and design. Cognitive Science 19, 507–562.
Robins, A., Rountree, J., Rountree, N., 2003. Learning and teaching programming: a review and discussion. Computer Science Education 13 (2), 137–172.
Segers, M., Dochy, F., 2001. New assessment forms in problem-based learning: the value-added of the students’ perspective. Studies in Higher Education 26 (3), 327–343.
Sitthiworachart, J., Joy, M., 2004. Effective peer assessment for learning computer programming. In: Proceedings of the 9th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education, pp. 122–126.
Slattery, P., 1995. Curriculum Development in the Postmodern Era. Garland, New York.
Truong, N., Bancroft, P., Roe, P., 2003. A web based environment for learning to program. In: ACM International Conference Proceeding Series, Vol. 35, pp. 255–264.
Van Merriënboer, J.J.G., Paas, F.G.W.C., 1990. Automation and schema acquisition in learning elementary computer programming: implications for the design of practice. Computers in Human Behavior 6 (3), 273–289.
Winslow, L.E., 1996. Programming pedagogy – a psychological overview. SIGCSE Bulletin 28 (3), 17–22.
