International Journal
of
Learning, Teaching
And
Educational Research
p-ISSN: 1694-2493
e-ISSN: 1694-2116
Vol.11 No.1
PUBLISHER
London Consulting Ltd
District of Flacq
Republic of Mauritius
www.ijlter.org

Chief Editor
Dr. Antonio Silva Sprock, Universidad Central de Venezuela, Venezuela (Bolivarian Republic of)

Editorial Board
Prof. Cecilia Junio Sabio
Prof. Judith Serah K. Achoka
Prof. Mojeed Kolawole Akinsola
Dr Jonathan Glazzard
Dr Marius Costel Esi
Dr Katarzyna Peoples
Dr Christopher David Thompson
Dr Arif Sikander
Dr Jelena Zascerinska
Dr Gabor Kiss
Dr Trish Julie Rooney
Dr Esteban Vázquez-Cano
Dr Barry Chametzky
Dr Giorgio Poletti
Dr Chi Man Tsui
Dr Alexander Franco
Dr Habil Beata Stachowiak
Dr Afsaneh Sharif
Dr Ronel Callaghan
Dr Haim Shaked
Dr Edith Uzoma Umeh
Dr Amel Thafer Alshehry
Dr Gail Dianna Caruth
Dr Menelaos Emmanouel Sarris
Dr Anabelie Villa Valdez
Dr Özcan Özyurt
Assistant Professor Dr Selma Kara
Associate Professor Dr Habila Elisha Zuya

International Journal of Learning, Teaching and Educational Research

The International Journal of Learning, Teaching and Educational Research is an open-access journal established for the dissemination of state-of-the-art knowledge in the field of education, learning and teaching. IJLTER welcomes research articles from academics, educators, teachers, trainers and other practitioners on all aspects of education, and publishes high-quality peer-reviewed papers. Papers for publication in the International Journal of Learning, Teaching and Educational Research are selected through precise peer review to ensure quality, originality, appropriateness, significance and readability. Authors are solicited to contribute to this journal by submitting articles that illustrate research results, projects, original surveys and case studies that describe significant advances in the fields of education, training, e-learning, etc. Authors are invited to submit papers to this journal through the ONLINE submission system. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated by IJLTER.
VOLUME 11 NUMBER 1 April 2015
Table of Contents
Using Natural Language Processing Technology to Analyze Teachers’ Written Feedback on Chinese Students’
English Essays ........................................................................................................................................................................ 1
Ming Liu, Weiwei Xu, Qiuxia Ran and Yawen Li
A Study of the Development of Courseware and Students’ Learning Effectiveness in Primary Education: Using
Three Teaching Techniques as an Example ....................................................................................................................... 22
Fang-Chun Ou
A Comparative Examination of Teacher Candidates’ Professional Practicum Experiences in Two Program Models ............ 36
Nancy Maynes, Anna-Liisa Mottonen, Glynn Sharpe and Tracey Curwen
Impact Investigation of using a Digital Literacy Technology on a Module: Case Study of Tophat .......................... 99
Xue Zhou and Stella-Maris Orim
Implementation of the 2006 Education Amendment Act on Indigenous Languages in Zimbabwe: A Case of the
Shangaan Medium in Cluster 2 Primary Schools in the Chiredzi District ................................................................. 117
Webster Kododo and Sparky Zanga
Teaching Culture through Language: Exploring Metaphor and Metonymy in Chinese Characters ..................... 161
Hu, Ying-Hsueh
Coaches’ Perceptions of how Coaching Behavior affects Athletes: An Analysis of their Position on Basic
Assumptions in the Coaching Role ................................................................................................................................. 180
F. Moen, R. Giske and R. Høigaard
Regional Educational Development Research and School Improvement: A Systematic Literature Review of
Research ............................................................................................................................................................................... 200
Associate Professor Lena Boström
The Value-Added Assessment of Higher Education learning: The case of Nagoya University of Commerce and
Business in Japan ............................................................................................................................................................... 212
Hiroshi Ito and Nobuo Kawazoe
1. Introduction
With the coming of the 21st century and the globalization of English, essay writing in English, one of the four basic language skills, has become increasingly important. It requires not only basic writing skills, such as spelling and grammar, but also higher-level writing competencies, such as coherence, structure and reasoning, which makes it a difficult skill to master. This is particularly true in China. Statistics show that the number of college students in China soared to twenty-six million in 2013 (Bureau of Statistics of China, 2013), accounting for the largest proportion of ESL learners worldwide. Since 1987, the writing test has been an important component of College English testing in China, and college English is an obligatory course for Chinese college students. In a typical English course, students complete 2-3 essay writing assignments and take one essay writing test in order to pass national English tests, such as the College English Test (CET) 4 or the Test for English Majors (TEM) 4; essay writing is the last part of these tests.
Novice writers need feedback to develop their writing skills; however,
providing timely and meaningful feedback is time-consuming and expensive.
The aim of this study is to investigate the most frequent types of feedback used by human teachers and the relationship between that feedback and textual features extracted using natural language processing techniques.
The rest of this paper is organized as follows: Section 2 presents related work on feedback classification. Section 3 describes the study and discusses the results. Finally, Section 4 concludes the paper.
2. Related Work
Recent developments in natural language processing have made it possible for researchers to develop a wide range of sophisticated techniques that facilitate text analysis. Tools such as Coh-Metrix (Graesser, McNamara, Louwerse, & Cai, 2004), LIWC (Pennebaker & Francis, 1999) and the Gramulator (Rufenacht, McCarthy, & Lamkin, 2011) are useful in this respect and have certainly contributed to ESL knowledge (Crossley & McNamara, 2012).
Coh-Metrix is a powerful computational tool that provides over 100 indices of cohesion, syntactic complexity, connectives and other descriptive information about content (Graesser et al., 2004). Coh-Metrix has been used extensively to analyze the overall quality of writing (Crossley & McNamara, 2012) as well as individual aspects of writing quality, such as coherence (Crossley & McNamara, 2011a). For example, Crossley and McNamara found that computational indices related to text structure, semantic coherence, lexical sophistication, and grammatical complexity best explain human judgments of text coherence. This study focused on using Coh-Metrix to analyze additional aspects of writing quality, including Supporting Ideas, Conclusion and Sentence Diversity.
AES systems, such as Criterion (Burstein, Chodorow, & Leacock, 2004), can provide feedback on several aspects of writing, including grammar, usage, mechanics, style, organization, development, lexical complexity and prompt-specific vocabulary usage (see Table 1). The Criterion categories are the most relevant to our case, since we aim to generate corrective feedback on different aspects of ESL student writing.
3. Study
We conducted an empirical study analyzing Chinese ESL college students’ essays with teachers’ comments, and the relationship between the teacher feedback and textual features. Section 3.1 describes the annotation process, where each essay is scored on different aspects, such as Grammar, Spelling, Coherence, Organization and Supporting Ideas. Section 3.2 describes the textual feature extraction process. Section 3.3 illustrates the relationship between the textual features and each feedback category, while Section 3.4 examines the predictive strength of the features in explaining the variance in each feedback score.
Our dataset, containing 105 English majors’ essays with teachers’ feedback, was collected from a large university in China. Two experienced English teachers volunteered to rate the quality of the essays; both had at least five years of experience teaching composition courses for English majors. Their first task was to identify the most frequent feedback types, adapted from the standardized rubric used for grading college English. Nine frequent feedback categories were found: Grammar, Spelling, Word Count, Sentence Diversity, Conclusion, Supporting Ideas, Organization, Coherence and Chinglish (see Appendix I). Table 2 shows that the Supporting Ideas and Organization categories were more frequent than the others, while Spelling, Chinglish Expression and Word Count were less frequent. We observed that some feedback categories were similar to the Criterion categories, such as Grammar, Spelling and Supporting Ideas, but the Chinglish Expression and Conclusion categories appeared only in our dataset.
The teachers’ second task was to give a score to each feedback category according to the rubric (see Appendix I) on a scale of 1 to 3, where 1 indicates negative feedback on the category and 3 indicates positive feedback. The correlations between the raters are shown in Table 2. The raters had the highest correlations for judgments of Grammar, Word Count, Conclusion and Supporting Ideas, and the lowest correlations for Chinglish and Sentence Diversity.
For further analysis, the dataset was randomly divided into a training set (n=70) and a testing set (n=35). The training set was used to identify which of the textual features correlated most highly with each feedback score, and to train a multiple regression model to examine the amount of variance explained by each writing feature. The model was then applied to the test set to calculate the accuracy of the analysis.
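The train/test analysis above can be sketched as follows. The data here are synthetic stand-ins (the real study used 116 extracted features and IBM SPSS), so the feature values and coefficients are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 105-essay dataset: three illustrative
# textual features per essay and a feedback score linearly related
# to them (the real study used 116 features).
X = rng.normal(size=(105, 3))
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.2, size=105)

# Random 70/35 split, mirroring the paper's training/testing sets.
idx = rng.permutation(105)
train, test = idx[:70], idx[70:]

# Fit a multiple regression model (least squares with an intercept)
# on the training set.
A_train = np.column_stack([np.ones(len(train)), X[train]])
coef, *_ = np.linalg.lstsq(A_train, y[train], rcond=None)

# Apply the model to the test set and compute the explained variance (R^2).
A_test = np.column_stack([np.ones(len(test)), X[test]])
pred = A_test @ coef
ss_res = float(np.sum((y[test] - pred) ** 2))
ss_tot = float(np.sum((y[test] - y[test].mean()) ** 2))
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

Training on one split and reporting explained variance on the held-out split, as here, is what keeps the R² values in Table 4 from being inflated by overfitting.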
Moreover, we propose and extract 8 new features that are not available in Coh-Metrix. These features capture characteristics of ESL learners’ writing style and reflect the importance of the introduction section, the conclusion section, and mechanics, including spelling and grammatical errors. In the database, each essay is stored as plain text, where each line is a paragraph. We use a Java API to extract the first line and the last line of text as the introduction and conclusion sections, respectively. For checking spelling errors, an open-source spelling checker called LanguageTool (http://www.languagetool.org/) is employed to scan each word. For checking grammatical errors, the Link Grammar Parser (Lafferty, Sleator, & Temperley, 1992) is used to check the grammar of each sentence based on natural language processing technology. If the link grammar cannot generate links (relations between pairs of words) after parsing a sentence, the sentence is considered ungrammatical.
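The spelling-error counting step can be illustrated with a self-contained sketch. The tiny word list below is a stand-in for LanguageTool's dictionary, not the real checker, and the sample essay is invented:

```python
# Tiny illustrative lexicon standing in for a spell checker's dictionary.
LEXICON = {"the", "students", "write", "essays", "in", "english",
           "grammar", "is", "hard", "to", "learn"}

def spelling_errors(text: str) -> int:
    """Count tokens not found in the lexicon (a crude stand-in for
    a real spell checker such as LanguageTool)."""
    tokens = [w.strip(".,;:!?").lower() for w in text.split()]
    return sum(1 for w in tokens if w and w not in LEXICON)

essay = "The studnets write essays in Englsh. Grammar is hard to learn."
print(spelling_errors(essay))  # → 2
```

In the actual pipeline this count, like the Link Grammar ungrammatical-sentence count, becomes one numeric feature per essay.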
Number of words in Introduction: the total number of words in the first paragraph
considered as the introduction section.
Number of words in Conclusion: the total number of words in the last paragraph
considered as the conclusion section.
Spelling errors: the number of spelling errors. We employ an open source spelling
error checker called LanguageTool (http://www.languagetool.org/), which is
part of the OpenOffice suite.
Grammatical errors: the number of sentences with grammatical errors. We use the
Link Grammar Parser (Lafferty et al., 1992) to check the grammar of a sentence,
which is also widely used in ESL context.
Percentage of spelling errors: the ratio of the number of spelling errors to the total number of words in the document.
In total, 116 features are extracted from each essay.
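Given the one-paragraph-per-line storage format described above, the paragraph-based counts can be sketched as follows (the sample essay is invented for illustration):

```python
def paragraph_features(essay_text: str) -> dict:
    """Compute introduction/conclusion word-count features for an
    essay stored as plain text with one paragraph per line."""
    paragraphs = [p for p in essay_text.splitlines() if p.strip()]
    intro = paragraphs[0].split()        # first paragraph = introduction
    conclusion = paragraphs[-1].split()  # last paragraph = conclusion
    total = sum(len(p.split()) for p in paragraphs)
    return {
        "words_in_introduction": len(intro),
        "words_in_conclusion": len(conclusion),
        "total_words": total,
    }

essay = "First para has five words.\nBody paragraph here.\nShort conclusion."
feats = paragraph_features(essay)
print(feats["words_in_introduction"], feats["words_in_conclusion"])  # → 5 2
```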
Based on the feature scores produced by the system and the human annotators’ scores on each category, we used IBM SPSS to evaluate the Pearson correlation between textual features and each category. Over 30 textual features demonstrated significant correlations with the human ratings of each feedback category. Table 3 shows that Chinglish was most related to the number of gerunds used, the paragraph length and the incidence of first-person singular pronouns. Coherence was correlated with Text Easability PC Deep Cohesion, consistent with Crossley and McNamara’s results (Crossley & McNamara, 2010). As expected, Conclusion was most related to the Conclusion Portion and Lexical Diversity features. We did not define specific features to detect Supporting Ideas; however, some features, such as intentional verb and adjective incidence, showed moderate correlations with this category. As we had expected, Grammar and Spelling were negatively related to the grammatical-error and spelling-error features, and Word Count was correlated with the number of words in an essay. Organization was correlated with the number of paragraphs, since essays with only 1 or 2 paragraphs were given lower scores by the human annotators because they did not have a clear essay structure (introduction, body and conclusion). Crossley and McNamara (Crossley & McNamara, 2011b) obtained similar results: six features, including the total number of paragraphs, were significant predictors in the regression onto the raters’ organization evaluations.
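The study computed these correlations in IBM SPSS; the Pearson coefficient itself can be sketched in pure Python. The error counts and scores below are illustrative, not from the dataset:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data: grammatical-error counts vs. Grammar feedback scores;
# more errors should correlate negatively with the score, as in Table 3.
errors = [0, 1, 2, 4, 7, 9]
scores = [3, 3, 2, 2, 1, 1]
print(round(pearson(errors, scores), 2))
```

A strongly negative coefficient here mirrors the reported negative relationship between the Grammar category and the grammatical-error feature.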
Table 4: Linear Regression Analysis to Predict Essay Feedback Ratings in Testing Set

Feedback              R     R2    S.E.
Chinglish Expression  .764  .584  .349
Coherence             .790  .624  .472
Conclusion            .616  .380  .486
Supporting Ideas      .745  .555  .407
Grammar               .939  .881  .260
Sentence Variety      .735  .540  .423
Spelling              .941  .886  .242
Organization          .475  .223  .473
Word Count            .756  .572  .535

Notes: S.E. is standard error.
4. Conclusion
Human teachers’ written feedback is very useful for helping students revise their drafts and improve their writing. A great deal of research has been conducted to investigate the theoretical foundations of feedback in terms of feedback mode, feedback strategies and feedback classification. With the development of information technologies, automated essay scoring tools have been proposed that can extract textual features and generate corrective feedback on traits of writing including grammar, usage, style, mechanics and organization. However, these AES systems are mainly designed for international ESL students who take the TOEFL test. Those students represent only a small portion of ESL students, because they obviously possess higher English competency. Thus, we conducted an empirical study to investigate the frequent feedback types and examine the feasibility of using existing natural language processing tools to automatically measure the feedback.
In the study, we collected 105 essays written by English majors, together with teachers’ comments, at a large university in China. Two English teachers first identified 9 frequent feedback categories based on the teachers’ comments; some of these categories are consistent with the Criterion categories. Then, they gave a score to each feedback category according to the rubric.
Our future work will examine teachers’ comments in detail and collect non-English-major students’ essays for analysis. In addition, we will focus on building an automatic essay feedback generation system. Specifically, we will investigate the feedback generation mechanism using association rule mining algorithms, and we will look at how to incorporate effective feedback strategies, such as formative feedback theory, into feedback generation templates.
Acknowledgment
The authors would like to thank the teacher and student participants. This work is partially supported by the Chongqing Social Science Planning Fund Program under grant No. 2014BS123 and the Fundamental Research Funds for the Central Universities under grants No. SWU114005, No. XDJK2014A002 and No. XDJK2014C141 in China.
Appendix A
Table 5: Nine-Trait Rubric for Essay Writing

Category: Organization
1 Rudiments of organization apparent, but the sequencing of ideas may be illogical, ineffective or difficult to understand.
2 Satisfactory organization of sections, but the sequencing of paragraphs within sections may be problematic.
3 Effective method of organization both for sections and for paragraphs within sections.

Category: Supporting Ideas
1 Minimal use of examples and facts to support the writer’s ideas.
2 Uses some examples and facts to discuss the strengths/weaknesses of some opinions, but may have difficulties (1) choosing appropriate facts; (2) sufficiently explaining those facts; (3) connecting them to the point being presented.
3 Effectively supports the strengths and weaknesses of one’s opinion; generally effective choice of examples and facts, although some material may be extraneous or not adequately explained.

Category: Grammar
1 Uses simple sentence constructions, but there are still numerous errors (greater than 7).
References
Crossley, S. A., & McNamara, D. S. (2011a). Text coherence and judgments of essay quality: Models of quality and coherence. The 33rd Annual Conference of the Cognitive Science Society.
Crossley, S. A., & McNamara, D. S. (2011b). Understanding expert ratings of essay quality: Coh-Metrix analyses of first and second language writing. International Journal of Continuing Engineering Education and Life-Long Learning, 21, 170. doi:10.1504/IJCEELL.2011.040197
Crossley, S. A., & McNamara, D. S. (2012). Predicting second language writing
proficiency: The role of cohesion, readability, and lexical difficulty. Journal
of Research in Reading, 35, 115-135.
Graesser, A. C., McNamara, D. S., Louwerse, M. M., & Cai, Z. (2004). Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods, Instruments, & Computers, 36, 193-202.
Haswell, R. (2006). The complexities of responding to student writing; or, looking for
shortcuts via the road of excess. Across the Disciplines, 3.
Kintsch, W., & van Dijk, T. (1978). Towards a model of text comprehension and
production. Psychological Review, 85, 363-394.
Lafferty, J., Sleator, D., & Temperley, D. (1992). Grammatical Trigrams: A Probabilistic
Model of Link Grammar. Paper presented at the Proceedings of the AAAI
Conference on Probabilistic Approaches to Natural Language.
Lee, I. (2004). Error correction in L2 secondary writing classrooms: The case of Hong
Kong. Journal of Second Language Writing, 13, 285-312. doi:
10.1016/j.jslw.2004.08.001
Leki, I. (1991). The preferences of ESL students for error correction in college-level
writing classes. Foreign Language Annals, 24, 203-218.
Liu, M., Calvo, R., & Rus, V. (2014). Automatic Generation and Ranking of Questions for
Critical Review. Educational Technology & Society, 17, 333-346.
Liu, M., Calvo, R. A., & Rus, V. (2010). Automatic Question Generation for Literature
Review Writing Support. Carnegie Mellon University, USA: Springer's Lecture
Notes in Computer Science
Pennebaker, J. W., & Francis, M. E. (1999). Linguistic inquiry and word count (LIWC).
Rufenacht, R. M., McCarthy, P. M., & Lamkin, T. A. (2011). Fairy Tales and ESL Texts:
An Analysis of Linguistic Features Using the Gramulator. Proceedings of the
Twenty-Fourth International Florida Artificial Intelligence Research Society Conference.
Shermis, M. D., & Burstein, J. (2003). Automated essay scoring: A cross-disciplinary perspective.
The student, the text, and the classroom context: A case study of teacher response. (2000). 7, 23-55.
Thiesmeyer, E. C., & Thiesmeyer, J. E. (1990). Editor: A system for checking usage, mechanics, vocabulary, and structure.
Villalon, J., Kearney, P., Calvo, R. A., & Reimann, P. (2008). Glosser: Enhanced Feedback
for Student Writing Tasks.
Williams, R., & Dreher, H. (2004). Automatically Grading Essays with Markit©. Issues in
Informing Science and Information Technology, 1, 693-700.
Introduction
Student engagement plays an important role in any learning activity. Studies (Fredricks, Blumenfeld, & Paris, 2004) show that a student who is engaged and intrinsically motivated in a task is more likely to learn from an activity, and models of school engagement identify three core dimensions: behavioral, cognitive and emotional engagement. ‘Behavioral engagement’, the focus of the present study, refers to student participation in school-related activities and involvement in learning tasks, such as those done online (Fredricks et al., 2004). ‘Cognitive engagement’ refers to motivation, thoughtfulness and willingness to make an effort to comprehend ideas and master new skills. ‘Emotional engagement’ includes emotions and interest, such as affective reactions in the classroom towards teachers. These three aspects are interrelated and together help us understand engagement as a whole.
Behavioural Engagement
Studies of behavioural engagement in learning environments typically use
evidence collected by human observers, such as teachers or students (Lane, 2009;
Martin, 2007). For example, using scales such as the Student Engagement
Figure 1: Line-based visualization: green lines of varying thickness show that a user engaged in several periods of intensive writing during the drafting process. Graphs are copied from (Liu, Calvo, & Pardo, 2013).
Thus the total engagement score is calculated as the following weighted sum:

Engagement = Σ (i = 1 to n) s_i * w_i    (1)

where i is the index of a series, s_i is the duration of series i and w_i is the weight assigned to series i.
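Equation (1) is a plain weighted sum and can be sketched directly. The durations below are illustrative; the weights are the IbA-style values 0.33 / 0.66 / 1 that appear later in the evaluation table:

```python
def engagement_score(durations, weights):
    """Weighted sum of writing-series durations, as in Equation (1)."""
    return sum(s * w for s, w in zip(durations, weights))

# Three illustrative series durations (minutes) with IbA-style weights.
print(engagement_score([10, 5, 2], [0.33, 0.66, 1]))
```

Longer series receive larger weights, so sustained writing bursts contribute more to the engagement score than brief edits.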
V_id^(t+1) = V_id^t + c1 * r1 * (P_id − X_id^t) + c2 * r2 * (P_gd − X_id^t), d = 1, 2, 3, …, D    (2)

where c1 and c2 are the learning factors, which are commonly set to 2, r1 and r2 are random numbers distributed uniformly in the range [0, 1], and P_id and P_gd are the best positions found so far by particle i and by the whole swarm, respectively. Then, each particle updates to a new potential answer based on the velocity as:

X_id^(t+1) = X_id^t + V_id^(t+1)    (3)
In our study, PSO starts with 20 randomly chosen particles and searches for the best particle iteratively. Each particle is a 6-dimensional vector, comprising three time thresholds and three weights, and represents a candidate solution. The engagement measurement algorithm is run for each candidate solution to estimate its performance. The procedure describing the proposed PSO-EM approach is as follows.
Function PSO-EM() {
    Initialize PSO with 20 particles and an engagement measurement algorithm for each particle.
    Evaluate the fitness (MSE) of each particle.
    For each iteration in 1..200
        For each particle in 1..20
            Calculate the particle velocity and update the particle.
            Calculate the fitness of the particle by passing its parameters to engagementMeasurement().
            Compare the fitness values and update the local best and global best particles.
        End
    End
}
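The loop above can be sketched in runnable form. The fitness here is a toy objective (squared distance to a known 6-dimensional optimum), not the real engagement-measurement MSE, and the damped coefficients are the standard constriction values rather than the text's c1 = c2 = 2, chosen so this small sketch converges reliably:

```python
import random

random.seed(42)

DIM = 6                                       # three thresholds + three weights
TARGET = [3.3, 4.2, 5.12, 1.09, 2.34, 2.89]   # toy optimum (illustrative)

def fitness(position):
    """Toy stand-in for the engagement algorithm's MSE:
    squared distance to a known optimum."""
    return sum((x - t) ** 2 for x, t in zip(position, TARGET))

# Initialize 20 particles with random positions and zero velocities.
particles = [[random.uniform(0, 6) for _ in range(DIM)] for _ in range(20)]
velocities = [[0.0] * DIM for _ in range(20)]
pbest = [p[:] for p in particles]             # local best per particle
gbest = min(pbest, key=fitness)[:]            # global best particle

W, C1, C2 = 0.729, 1.494, 1.494               # constriction coefficients

for _ in range(200):
    for i, p in enumerate(particles):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity update as in Equation (2), with inertia W.
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (pbest[i][d] - p[d])
                                + C2 * r2 * (gbest[d] - p[d]))
            # Position update as in Equation (3).
            p[d] += velocities[i][d]
        if fitness(p) < fitness(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=fitness)[:]
```

In the real system, evaluating `fitness` means running the engagement measurement algorithm with the particle's thresholds and weights and scoring it against students' self-reported times.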
Study
In order to evaluate the feasibility of the proposed engagement measurement algorithm, we conducted a study in which 120 students each wrote an individual document in a web-based writing system. This system is built on etherpad (http://etherpad.org/), an online real-time text editor that lets authors write a text document and view its complete revision history. Each document revision is recorded in a textual database, and we extract the timestamp of each revision as input to the engagement algorithm.
Participants and Procedure
A total of 120 university students participated in this study. The participants’ ages ranged from 20 to 30 years (M: 25, SD: 5), and there were 61 males and 59 females. The participants came from different disciplines, including computer engineering and education. They had no prior knowledge of the system and had not participated in any previous related study. We arranged a separate one-hour writing activity (writing about a personal best travel experience) for the 60 education majors, and a one-month writing activity (writing a project proposal) for the 60 engineering students. We conducted this study in a controlled environment so that each participant could write only in our system (see Figure 2), thus avoiding ‘copy-and-paste’ issues. Once the writing activity was finished, each participant was asked to estimate their engagement time in the writing session. The dataset was divided into a training set (n=30) and a testing set (n=30) for each activity. We used the training set to train the parameters of the engagement algorithm and the testing set to evaluate its performance.
Results
The correlations between participants’ self-reports and the engagement measurement functions are presented in Table 1. The results show that the proposed engagement algorithm (PSO-EM) is highly correlated with human self-reports (r=.73 and r=.81) in the two writing activities, outperforming IbA, which has only moderate correlations (r=.49 and r=.59) with the student self-reports. We also observed that student engagement time in the one-hour writing activity is more predictable than in the one-month writing activity, because the one-hour activity produced fewer document revisions.
After 200 iterations, PSO-EM converges. Table 2 shows that the PSO-EM algorithm (MSE: 15.88 for one hour; MSE: 31.89 for one month) obtains lower MSE scores than the traditional IbA (MSE: 16.13 for one hour; MSE: 64.95 for one month) in both writing tasks.
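The MSE reported in Table 2 is the mean squared error between the algorithm's estimated engagement times and the participants' self-reports; a minimal sketch with invented values:

```python
def mse(predicted, reported):
    """Mean squared error between estimated and self-reported
    engagement times."""
    return sum((p - r) ** 2 for p, r in zip(predicted, reported)) / len(predicted)

# Illustrative engagement-time estimates (minutes) vs. self-reports.
estimated = [42.0, 55.0, 38.0]
reported = [40.0, 50.0, 41.0]
print(round(mse(estimated, reported), 2))
```

Lower values mean the algorithm's estimates track the self-reports more closely, which is the sense in which PSO-EM outperforms IbA in Table 2.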
Table 2: Evaluation of parameters and MSE for both writing tasks

Writing Task  Measure  T1      T2      T3      W1    W2    W3    MSE
One Hour      IbA      0.5 m   1.0 m   2.0 m   0.33  0.66  1     16.13
One Hour      PSO-EM   3.30 m  4.20 m  5.12 m  1.09  2.34  2.89  15.88
One Month     IbA      0.5 h   1.0 h   2.0 h   0.33  0.66  1     64.95
One Month     PSO-EM   3.3 h   4.20 h  5.12 h  1.09  2.34  2.89  31.89
In the one-hour writing task, the best parameters PSO-EM finds for this dataset are Threshold1 = 3.30, Threshold2 = 4.20 and Threshold3 = 5.12 minutes, with Weight1 = 1.09, Weight2 = 2.34 and Weight3 = 2.89. In the one-month writing task, the best threshold parameters differ from those in the one-hour task, and their unit is hours. This result indicates that the PSO-EM algorithm robustly adjusts its parameter values automatically based on the dataset and the nature of the task. It also confirms that PSO-EM outperforms the traditional method.
Acknowledgements
References
Blasco-Arcas, L., Buil, I., Hernández-Ortega, B., & Sese, F. J. (2013). Using clickers in class: The role of interactivity, active collaborative learning and engagement in learning performance. Computers & Education, 62, 102-110.
Bouta, H., Retalis, S., & Paraskeva, F. (2012). Utilising a collaborative macro-
script to enhance student engagement: A mixed method study in a 3D
virtual environment. Computers & Education, 58(1), 501-517.
Bulger, M. E., Mayer, R. E., Almeroth, K. C., & Blau, S. D. (2008). Measuring
Learner Engagement in Computer-Equipped College Classrooms. Journal
of Educational Multimedia and Hypermedia, 17(2), 129-143.
Chen, P.-S. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: The impact of Web-based learning technology on college student engagement. Computers & Education, 54(4), 1222-1232.
Cole, M. (2009). Using Wiki technology to support student engagement: Lessons from the trenches. Computers & Education, 52(1), 141-146.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School Engagement:
Potential of the Concept, State of the Evidence. Review of Educational
Research, 74(1), 59-109.
Huang, T. C., Huang, Y. M., & Cheng, S. C. (2008). Automatic and interactive e-
learning auxiliary material generation utilizing particle swarm
optimization. Expert Systems with Applications, 35, 2113-2122.
Jones, R. D. (2009). Student engagement: Teacher handbook. Rexford, NY: International Center for Leadership in Education.
Lane, E. (2009). Clickers: can a simple technology increase student engagement in the
classroom? Paper presented at the International Conference on
Information Communication Technologies in Education, Corfu, Greece.
Latif, M. M. A. (2008). A state-of-the-art review of the real-time computer-aided
study of the writing process. International Journal of English Studies, 8, 29-
50.
Leijten, M., & Van Waes, L. (2013). Keystroke Logging in Writing Research:
Using Inputlog to Analyze and Visualize Writing Processes. Written
Communication, 30, 358-392. doi: 10.1177/0741088313491692
Lin, S. W., Ying, K. C., Chen, S. C., & Lee, Z. J. (2008). Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Systems with Applications, 35, 1817-1824.
Lin, Y.-T., Huang, Y.-M., & Cheng, S.-C. (2010). An automatic group composition
system for composing collaborative learning groups using enhanced
particle swarm optimization. Computers & Education, 55, 1483-1493. doi:
10.1016/j.compedu.2010.06.014
Liu, M., Calvo, R. A., & Pardo, A. (2013). Tracer: A tool to measure student engagement in writing activities. Paper presented at the 13th IEEE International Conference on Advanced Learning Technologies, Beijing, China.
Martin, A. J. (2007). Examining a multidimensional model of student motivation
and engagement using a construct validation approach. British Journal of
Educational Psychology, 77, 413-440.
Sheldon, K. M., & Biddle, B. J. (1998). Standards, accountability, and school reform: Perils and pitfalls. Teachers College Record, 100, 164-180.
Tanes, Z., Arnold, K. E., Selzer King, A., & Remnet, M. A. (2011). Using Signals
for appropriate feedback: Perceptions and practices. Computers &
Education, 57(4), 2414-2422.
Tang, H.-W. V., & Yin, M.-S. (2012). Forecasting performance of grey prediction for education expenditure and school enrollment. Economics of Education Review, 31(4), 452-462.
Yin, P. Y., Chang, K. C., Hwang, G. J., Hwang, G. H., & Chan, Y. (2006). A
particle swarm optimization approach to composing serial test sheets for
multiple assessment criteria. Journal of Educational Technology & Society, 9,
3-15.
Fang-Chun Ou
Overseas Chinese University
Taichung, Taiwan
Introduction
English is one of the indispensable languages of modern society: it is used to conduct a great deal of trade between countries and spoken in interactions with foreigners, and it serves as an essential bridge connecting people across a variety of settings. In response to the requirements of international society, strengthening English ability has become an important educational issue. Moreover, through English learning, learners can blend into the social and cultural activities of English-speaking countries, and language learners should understand and respect multiculturalism in order to become cosmopolitan. Nowadays, being capable of speaking English fluently has become one of the basic requirements of the global village. The purposes of English teaching and learning are to build up learners’ communicative ability, increase their motivation and interest in English learning, and develop a global perspective. Additionally, language learners are expected to enhance their ability to handle international matters and conflicts.
In order to break the myth of grades, the aims of 12-year compulsory education are to lead students toward creative learning on their own initiative, gaining knowledge from the learning process, experiencing pleasure, cultivating their own characteristics, and communicating in English. However, a major problem of English education is that teaching covers too much material at too difficult a level; teachers usually make students memorize vocabulary and grammar by rote, and these mechanical drills kill students’ motivation to learn English. As a consequence, the policies of 12-year compulsory education were set up to teach efficiently, blend information technology into teaching, and encourage students to think and express themselves creatively. In light of this, English ability and practicality are more important than they used to be. Additionally, in 2010, Attar and Chopra pinpointed that teaching methodology and approach should keep changing in order to meet the needs of language learning. Namely, how to design effective teaching modes and cultivate students’ communicative competence has become a major concern in English teaching and research.
Looking back to the early period, English teaching mostly emphasized the
Grammar Translation Method and the Audio-Lingual Method. This traditional
teaching is drill-oriented, introducing and practicing language knowledge and
skills in detail; worse still, students gradually become bored and overly
rule-bound. It was not until 1994 that the Ministry of Education started
highlighting Communicative Language Teaching, which aims at meaningful
interaction, language skills, genuine materials, language ability development,
and appropriate English communication in different social situations. As a
consequence, designing diverse teaching techniques as well as appealing
activities and courseware should be taken into consideration so as to benefit
students by increasing achievement and learning outcomes.
Research Questions
1. Does the intervention in the use of teaching methods help improve
elementary school students’ English proficiency?
2. Which type of question (vocabulary, picture matching, and reading
comprehension) was influenced most after exposure to these three teaching
methods?
Literature Review
The advantages of Communicative Language Teaching have been demonstrated, and
it has been employed successfully in ESL/EFL (English as a Second Language/
English as a Foreign Language) classrooms around the world (Kelch, 2011).
Chang (2011) compares the feasibility of the Grammar Translation Method and
Communicative Language Teaching in English grammar teaching and tries to find
out which one is more appropriate in Taiwan. His results show that students
benefit more from grammar instruction when the Grammar Translation Method
(GTM) is adopted. In contrast to GTM, the Communicative Approach focuses on
fluency rather than accuracy: the teacher corrects errors immediately if the
scope of the classroom activity is accuracy, but if the scope of the activity
is fluency, errors are corrected later on. As a result, combining both methods
might be the best way to improve English grammar teaching. Wei (2010) reviews
the advantages of the Communicative Language Teaching method and analyzes the
obstacles to its implementation in the EFL classroom context. His study
provides guidelines for reconciling CLT with the conventional teaching
approach, and recommends techniques and principles for implementing English
teaching in the EFL environment.
Teacher Training
The main purpose of language education is to enhance the quality of teachers as
well as the quality of education. English teachers should possess professional
knowledge related to ELT (English Language Teaching) and be capable of
employing a variety of teaching methods. Regarding mid- and long-term teacher
training (MOE, 1999), the MOE encourages normal universities to establish
departments of English education. In addition, schools should provide English
subgroups, English minors, or second-specialty students with a twenty-credit
course of ELT.
Teaching Methods
TPR (Total Physical Response) was originally developed by James Asher in the
1960s. TPR makes good use of physical movements and draws on the theoretical
framework of mother-tongue acquisition. Most importantly, teachers can check
young learners' comprehension through their reactions, linked to body
movements, which reinforces their comprehension ability.
Methodology
Subjects
The target subjects were an unselected convenience sample. Thirty 5th- and
6th-grade elementary school students voluntarily participated in this study.
They were asked to take identical pre- and post-tests to evaluate the
appropriateness of three different teaching techniques (TPR, CLT, and
conventional teaching) in different classroom settings.
Course Material
The researchers created an original story that the students had never read
before. In addition, ten sentences and vocabulary cards were made to emphasize
grammar instruction and practice.
Results
Analyses
The test contains twenty questions. Among these twenty questions, ten are
vocabulary, five are picture matching, and the remaining five are reading
comprehension. The result of the test focuses on which teaching technique was
the most suitable for primary school students.
© 2015 The author and IJLTER.ORG. All rights reserved.
Table 1: Test Question Distribution
Question Categories     Numbers   Percentage
Vocabulary              10        50%
Picture matching        5         25%
Reading comprehension   5         25%
Results
The means and standard deviations of the pre-test and post-test scores for the
conventional teaching method are presented in Table 2.
Table 2 Descriptive Statistics of Pretest and Posttest (Conventional)
N=20
Conventional M SD
Pretest 25 11.055
Posttest 57 6.770
The means and standard deviations of the pre-test and post-test scores for the
Total Physical Response method are presented in Table 4.
Table 4 Descriptive Statistics of Pretest and Posttest (TPR)
N=20
TPR M SD
Pretest 25 7.45356
Posttest 76 10.28753
The means and standard deviations of the pre- and post-test scores for the
communicative language teaching method are presented in Table 6.
Table 6 Descriptive Statistics of Pretest and Posttest (CLT)
N=20
CLT M SD
Pretest 32 15.12907
Posttest 90 5.77350
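The descriptive statistics reported in Tables 2, 4, and 6 can be reproduced
from raw scores with a few lines of code. The sketch below is illustrative
only: the study's raw data are not published, so the score arrays are invented
stand-ins (chosen so their means match Table 2's reported values of 25 and 57,
though the individual scores and standard deviations are hypothetical), and
the paired t-test mirrors the kind of pre/post comparison such tables support.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-/post-test scores for one group. These are stand-ins,
# not the study's actual data; only the means are matched to Table 2.
pretest = np.array([20, 25, 30, 15, 35, 25, 20, 30, 25, 25])
posttest = np.array([55, 60, 50, 45, 70, 55, 60, 65, 50, 60])

# Means and sample standard deviations, as reported in the tables
# (ddof=1 gives the sample SD, the usual convention in such reports).
m_pre, sd_pre = pretest.mean(), pretest.std(ddof=1)
m_post, sd_post = posttest.mean(), posttest.std(ddof=1)

# A paired-samples t-test checks whether the pre-to-post gain is significant.
t_stat, p_value = stats.ttest_rel(posttest, pretest)

print(f"Pretest:  M = {m_pre:.2f}, SD = {sd_pre:.3f}")
print(f"Posttest: M = {m_post:.2f}, SD = {sd_post:.3f}")
print(f"Paired t-test: t = {t_stat:.3f}, p = {p_value:.5f}")
```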
The second question of the present study was the following: "Which type of
question (vocabulary, matching, and reading comprehension) was influenced
most after exposure to these three teaching techniques?" A multivariate
analysis of variance (MANOVA) was performed on the data, with the three scores
(scores on vocabulary questions, matching questions, and comprehension
questions) used as dependent variables and Group as the independent variable.
The three dependent variable scores were calculated by subtracting the test
scores of each question type obtained at the beginning of the instruction
(pre-test scores) from those obtained at the end of the instruction (post-test
scores).
The MANOVA for the Group main effect was found to be significant, F(6, 50) =
25.515 (Wilks' Λ = .061), p < .001. As a result, univariate ANOVAs on each
dependent variable were conducted as follow-up tests to the MANOVA. Using the
Bonferroni method, each ANOVA was tested at the .0167 level (.05/3). There
was a significant difference in the vocabulary question scores, F(2, 27) =
63.224, p < .001, eta squared = .824. The difference in the picture matching
question scores was significant as well, F(2, 27) = 8.113, p < .001, eta
squared = .375. The difference in the reading comprehension question scores
was nonsignificant, F(2, 27) = 25.317, p = .159, eta squared = .652 (Table 8).
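The follow-up analysis described above — a one-way ANOVA on the gain scores of
each question type, tested at a Bonferroni-adjusted alpha of .05/3 ≈ .0167 —
can be sketched as follows. The gain-score arrays are hypothetical stand-ins
(ten per group, matching the F(2, 27) degrees of freedom), not the study's
actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical gain scores (post-test minus pre-test) for one question
# type, one array per teaching-method group; stand-ins for the real data.
gains_conventional = np.array([10, 15, 5, 10, 20, 10, 15, 10, 5, 10])
gains_tpr = np.array([30, 25, 35, 30, 20, 30, 35, 25, 30, 30])
gains_clt = np.array([35, 40, 30, 35, 45, 35, 40, 30, 35, 40])

# One-way ANOVA with Group as the independent variable:
# three groups of ten give F(2, 27), as in the reported results.
f_stat, p_value = stats.f_oneway(gains_conventional, gains_tpr, gains_clt)

# Bonferroni correction: three follow-up ANOVAs are run (one per question
# type), so each is tested at .05 / 3 ≈ .0167 rather than .05.
alpha = 0.05 / 3
significant = p_value < alpha

print(f"F(2, 27) = {f_stat:.3f}, p = {p_value:.5f}, "
      f"significant at alpha = {alpha:.4f}: {significant}")
```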
nor do they know how to apply them to real life. The teaching methods TPR and
CLT can help students become more confident and more involved in class.
Overall, the findings of the study show that the participants improved in the
vocabulary part (requiring respondents to select the best word according to
the picture) and the picture matching part (requiring respondents to choose
the sentence that best describes the picture) of the posttest. The second
important finding of this study concerns the question of which type of test
question was influenced most by the instruction of these three teaching
methods. It was found that the participants in this study did in fact tend to
use physical movements to link to the meanings of the vocabulary. Besides, the
picture cards did assist them in better understanding the plot of the story.
Discussion
The first set of results shows that students achieved greater improvement in
the TPR and CLT classrooms. The reasons are as follows. Firstly, during the
TPR instruction, instructors gave the lesson in the target language, and
students responded with whole-body actions. Students were not forced to speak;
instructors waited until students had acquired enough language input through
listening comprehension, and then they would speak out without any fear.
Namely, language learning should not involve any stress, and the lively
interaction could impress the physical response upon students' minds.
Secondly, during instruction with the CLT teaching method, students were
taught the story along with picture cards and were asked to communicate with
the instructors. By these means, more interaction was expected. As a result,
students could keep the story in mind more easily and efficiently.
Lastly, during instruction with the conventional teaching method, instructors
taught by simply reading the story lines aloud and giving explicit
translations. Compared to TPR and CLT, the conventional teaching method was
not lively: the students only sat tediously and sometimes did not thoroughly
catch what was taught.
The second set of results indicates that students performed better on
vocabulary and picture matching than on reading comprehension. The reasons are
explained in detail below.
Vocabulary
Vocabulary picture cards were created to be employed during the instruction.
Students saw the picture first and were encouraged to guess the meaning of the
vocabulary. Then, the vocabulary card was revealed, and students were
requested to repeat after the instructors and sound it out. Moreover, an
exciting game was designed for students to play in class. As a consequence,
students learned through action and memorized novel words more easily and
efficiently.
Picture Matching
The story picture cards were used to associate and connect the pictures with
the content. By viewing the picture cards, students found the key words in the
story lines, which enhanced their visual-mental correspondence. During the
exam, students could more easily recall the story and match the right pictures.
Reading Comprehension
The instructors invented the story taught in class, and it had never been
heard before. Although the students learned with the visual aid of picture
cards and some exciting games were set up especially for them, most of the
students still had difficulty reading and comprehending long paragraphs. As a
consequence, during the test, students said they guessed instead of answering
conscientiously.
first, and then do the action spontaneously. Gradually, when students are
familiar with the commands, teachers can add more commands at one time.
However, no more than three commands should be added at one time, because
students might get confused while receiving the signals. Most importantly,
teachers can observe students' comprehension easily and directly. If students
can correctly do the action after the command, then they really do comprehend
what teachers teach in class, which makes them feel confident and accomplished.
With regard to CLT, some suggestions are provided as follows. Situational
Language Teaching (SLT) can stimulate students' interest in learning. When
teachers introduce new target language words or phrases, instead of
translating them into the students' native language, teachers can demonstrate
the lesson through the use of realia, pictures, or pantomime. Teachers may
also use intonation, rhythm, and concert pseudo-passiveness to get students'
attention and stimulate their interest in the lesson. Initially, students are
heavily dependent on their teachers. After the teacher's questions, students
first make sure they understand, and then they are encouraged to answer in
front of the whole class. Gradually, with more practice, they may become more
independent and feel greater security. Meanwhile, students can also listen to
others' opinions and learn from each other little by little. In fact, the
interaction goes both ways, from teachers to students and from students to
teachers. Although students might make mistakes, teachers usually employ
various techniques to get students to self-correct. Namely, the feeling of
security is enhanced by many opportunities for cooperative interaction with
their fellow students and teachers. By these means, teachers evaluate not only
students' accuracy but also their fluency. Teachers act as advisors or
co-communicators. Rardin (1988) mentioned that language learning is neither
student-centered nor teacher-centered, but rather teacher-student centered.
The CLT method makes students feel proud to use their knowledge to express
themselves in different languages.
Two reasons explain why these three teaching methods were chosen in the first
place. First, TPR and CLT are among the most popular teaching methods adopted
in educational institutions. Most instructors consider students' interest in
learning foreign languages the priority: when students feel interested in
English, they feel more comfortable and find it easier to communicate with
others in a foreign language. Second, the traditional teaching method is
still employed now and then. Under this circumstance, most language learners
deem memorizing vocabulary a rather tough task, to say nothing of speaking
English, which causes pressure and anxiety. Worse still, it surely lessens
learners' motivation toward English learning.
Teacher Training
During the past decade, the communicative language teaching approach has
been recommended especially for language teachers because of its essential
emphasis on language use in foreign/second language classrooms (Mangubhai
et al., 2005). In addition, Li and Yu (2001) have found that the communicative
language teaching method improves the communicative ability of language
learners where the conventional teaching approach has proven unsuccessful.
However, due to the lack of sufficient teacher training in CLT, teachers often
do not know how to implement CLT and do not have confidence in their
English-speaking capability to carry out the communicative approach (Butler,
2011). Specifically, most language teachers lack this kind of training, and
they are often afraid of "losing face" or feel embarrassed when making errors
or when they are not able to answer students' questions promptly (Park, 2012).
In light of this, Carrier (2003) points out that different teaching approaches
should be demonstrated and highlighted through direct explanation, explicit
teacher modeling, and extensive feedback in teacher training programs, in view
of their implementation in language classrooms.
Specifically, in the environment of English as a foreign language in Taiwan, the
supply of language input and practice opportunities are insufficient for the
learners to become immersed. Therefore, teachers should value process-oriented
instruction more highly than content-oriented or grammar-oriented instruction
because it is beneficial for students to become independent learners.
The language teacher should also bear in mind that elementary school children
are not mature enough to take full responsibility for their own language
learning. Therefore, children's proficiency levels and their cognitive maturity
would determine the types of activities (strictly-controlled ones, semi-guided
ones, or free communicative ones) the teacher puts into practice in a
communicative classroom.
Limitation
The sample size is small, so the effects of the experiments may not be
statistically robust, and the results cannot be fully generalized to young
EFL learners in other areas. In addition, the duration of each class was two
hours. Within this short period of instruction, it was at times difficult for
the instructors to circulate around the classroom while the activities were
conducted, since the instruction involved observing the class and providing
assistance. Consequently, language learners would benefit from instruction
with a sufficient guiding period.
Pedagogical Implication
The results of this study could serve as a good demonstration for teachers,
providing more options for English teaching. Through the curriculum, teachers
could develop the ability to devise a flexible variety of activities in order
to stimulate pupils' learning and make them more interested in English. This
study provides a valuable reference for similar research and should be
replicated with students at various English proficiency levels. For instance,
in addition to TPR and CLT, the Direct Method, Community Language Learning,
and Reciprocal Teaching are strongly recommended as integrated teaching
methods to enrich the teaching process.
This study examines Taiwan's education system in search of new approaches to
revision and innovation. According to Jarvis and Atsilarat (2004), new
teaching approaches have been proposed so as to diversify existing approaches
and achieve global innovation. As for future investigation, more breakthroughs
in curriculum and instruction need to be taken into consideration in order to
gain an overall picture of the optimal outcomes of education.
References
Introduction
This paper reports on a study regarding whether or not pre-service teacher
candidates feel knowledgeable and confident in the acquisition of skills they
need to teach in their own classrooms at the completion of their respective
teacher preparation programs. The study contrasted responses from teacher
candidates who completed their teacher preparation programs in different
models. One group graduated through an eight-month program involving 13
weeks of classroom practicum time; the second group graduated with a 5-year
concurrent education degree, including 19 weeks of classroom practicum. The
focus of this study is on teacher candidates’ perceptions of what is gained
through practicum experiences in the classroom. We investigated how effective
Background
Theories may provide the knowledge that teacher candidates require to work
effectively with students in the classroom. However, without opportunities to
apply these theories to practice during practicum time, candidates may lack the
necessary confidence to address new contexts with equal effectiveness, and they
may lack the pedagogical content knowledge to determine strategy efficacy as
they encounter new situations early in their career. Practicum time in a teacher
education program is typically designed as a professional internship of short
duration, strategically placed in the teacher candidates’ professional program.
The practicum allows the teacher candidate to try out ideas that they have
learned in courses in the context of a classroom where a certified teacher can act
as a mentor for them.
However, not all teacher preparation programs provide the same amount of
classroom practicum experience for teacher candidates. In the jurisdiction where
this study took place, teacher candidates are required by their accreditation body
to acquire a minimum of 12 weeks of successful practicum experience. Success in
the practicum is assessed by the professional judgment of the mentor teacher,
who is referred to as an associate teacher (AT) in this jurisdiction. In this study,
however, two paths to acquiring the professional teacher accreditation are
examined in relation to the perceived impact of the practicum on knowledge and
confidence of the new teacher. Students acquiring their accreditation through a
consecutive program route in this jurisdiction engage in 13 weeks of practicum
(i.e., one week more than required by the local accreditation body), while those
who acquire their accreditation through the concurrent program route acquire
19 weeks of practicum (i.e., 7 weeks more than required by the local
accreditation body). Additionally, the 19 practicum weeks in the concurrent
program are distributed across the 5 years of the program, while the 13 weeks of
the consecutive degree route are spread across 8 months.
While we acknowledge that the quality of the practicum experience each teacher
candidate may experience can be vastly different due to many circumstances,
our study focuses solely on examining perceptions related to how the length and
placement of the experience may have an instructional impact. As teacher
candidates, prospective teachers enter the professional arena through practicum
experiences; however, they are often unequally exposed to many learning
opportunities (Beck, Kosnik & Rowsell, 2007). It is logical to assume that more
time in a practicum context would allow more exposure to a greater variety of
learning opportunities. Many of the learning opportunities that a pre-service
teacher candidate may have during any practicum may be wholly dependent on
the skills and resources of the teachers to whom they are assigned for their
practicum. Additional practicum time may give new teachers otherwise
unavailable exposure to strategies utilized by experienced teachers. With
little or no time to see these strategies in operation and to adapt
theoretical ideas to pragmatic contexts, teacher candidates may lack the
contextualized opportunities to apply their course-based knowledge that would
allow them to develop confidence in their ability to use these strategies.
Therefore, the current study provides us with a benchmark of current reports of
knowledge and confidence acquired through practicum experiences on which to
base program design decisions for this aspect of teacher preparation.
Additionally, in the jurisdiction where this study is taking place, the government
has recently made significant changes to accreditation criteria, which will come
into effect in fall of 2015. In response to the demands for new program designs in
the accreditation program for teacher certification in this jurisdiction, many
accrediting institutions are considering the elimination of the concurrent
program route and retaining the single option of a 2-year consecutive program.
This study may shed some light on the efficacy of this decision as it relates to
decreased opportunities for longer program embedded practica.
As this university offers two routes to the completion of the same bachelor of
education (B.Ed.) degree, with two approaches to the placement and differences
in the total amount of time provided for the practicum, we identified the need to
compare teacher candidates’ perceptions of the relative value of these
differences in providing them with the skills and strategies needed to support
their developing professional skills to prepare to be successful with the role of
teacher. The skills that were identified for this aspect of the larger study were
selected because, while some theory for each skill can be provided in the context
of their courses, each skill could reasonably be expected to develop more fully if
teacher candidates had contextualized opportunities in schools to use these skills
and to consider the impact of their practices in relation to the outcomes they
achieved.
Literature Review
During the past 15 years there has been a considerable amount of intensive
investigation into the value and learning afforded to teacher candidates whose
professional preparation program provides opportunities for them to hone their
theoretical course knowledge by participating in classroom placements, usually
referred to as practicum experiences, or collectively as practica. While we were
able to find many studies related to the perceived value of teacher practicum
experiences, there seems to be an absence in the professional literature regarding
investigations of the relative perceived value of different approaches to
providing the practicum experience and the perceived value of different
amounts of practicum experience. It seems reasonable to assume that more time
in a classroom practicum placement is likely to provide more opportunities for
the teacher candidate to gain a wider variety of professional skills, but
there is a dearth of literature about existing programs to support this
contention.
It seems clear from this study that prospective teachers recognize and value the
theoretical aspects of the preparation program to help them understand what
they should do, but they value the practical experiences of the practicum to
show them how and when to do these things. The Brouwer and Korthagen
(2005) study also demonstrated that by gradually increasing student teaching
activity complexity, by increasing cooperation among students (triads of student
teachers), cooperating teachers, and university supervisors, and by alternating
between student teaching and college (in-class) sessions, teacher education
programs allowed student teachers to relate theory and practice. This need for
balance between the course theory and the practicum experiences is supported
by the research of Ng, Nicholas, and Williams (2010). This research revealed that
pre-service teacher beliefs may also be influenced by placement experiences,
suggesting that placements may be an important factor in shaping beliefs about
teaching and teaching efficacy. Also, this study argued that teacher education
programs should strive to improve pre-service teachers’ teaching efficacy, since
efficacy leads to improvements in teaching ability.
Schultz (2005) provides support for the concept of day-to-day problem solving
capacity development through practicum learning. The study highlighted the
need for teacher preparation to support new teacher inquiry to help teacher
candidates use problem solving approaches when they face the day-to-day
challenges in a classroom.
However, other research shows that the type of school where a practicum takes
place influences the learning that a teacher candidate acquires from the
practicum. Results of a study by Ronfeldt (2012) demonstrated that teachers
who had field placements at easier-to-staff schools were more capable of
improving students’ test scores and also more likely to remain teaching in
challenging city schools during their first five teaching years. Thus, according to
the results of this study, teacher education programs should consider assigning
pre-service teachers to field placements at easier-to-staff schools. This study also
emphasized the importance of identifying what and how pre-service teachers
learn at easier-to-staff schools. The authors argued that it may be that these
schools have many characteristics of overall effective schools where good
teaching and learning flourish, such as high quality administration and support,
professional staff relations and collegiality, and more experienced teachers
(Ronfeldt, 2012). It seems logical that exposure to such contexts would influence
a teacher candidate’s learning about how to teach well.
The research literature about practicum experiences is also very clear about
two other key findings. First, the structure of a practicum matters to what
can be learned from it. Second, and no less important, the nature of the
relationship between the teacher candidate and the classroom teacher who hosts
their practicum placement is critically important to how successful that
placement will ultimately be, as measured by the teacher candidate's
perceptions of their learning in a classroom context.
The nature of the relationship between the teacher candidate and the classroom
teacher who hosts their practicum placement has been found to be critically
important to the perception of the teacher candidate about how valuable their
placement has been to their professional preparation to teach. A study by
Korthagen, Loughran, & Russell (2006) attempted to identify central principles
that can be used to create teacher education programs and practices that address
teacher candidates’ and teacher educators’ expectations, needs, and practices
accruing from a teacher preparation program. By analyzing three pre-service
teacher education programs (one from each of The Netherlands, Canada, and
Australia), the researchers identified seven principles of practice for teacher
candidate learning and for guiding change and improvement in teacher
education programs. The seven principles identify requirements for enhancing
the process of learning about teaching. These seven principles include the
perceptions that: learning about teaching involves continuous conflict and
competition among various demands; knowledge about how to teach should be
perceived as a subject that is yet to be created, rather than as an already created
subject; learning about teaching means that the teacher’s focus must be on the
learner, not the curriculum; teacher candidate research enhances learning about
teaching and teacher candidates can guide their own professional development
by conducting research on their own teaching practice; learning about teaching
requires that those learning to teach work closely with peers, in horizontal, not
vertical relationships; meaningful relationships must exist among the schools,
universities, and teacher candidates to promote learning about teaching; and, to
enhance learning about teaching, teacher educators should model the teaching
and learning approaches used in the teacher education program in their own
practice. All of the principles are based on learning from experience.
teachers and their students. The principles reflect three major program (change)
components: perceptions of knowledge and learning that guide teacher educator
practices, program structure and specific practices, and staff and organization
quality (Korthagen, Loughran, & Russell, 2006). Each of these principles also
connects to the nature of the practicum and to its role in extending the
theoretical learning of a course into the situated learning of practice in the
classroom.
Other studies that highlight the critical nature of the relationship between the
hosting teacher in the classroom and the teacher candidate help to identify the
specific behaviours that support successful practicum experiences. Beck and
Kosnik (2002) identify seven components of the relationship between the
classroom teacher (often referred to as the associate teacher or AT) and the
teacher candidate including: the provision of emotional support by the AT;
having a peer relationship characterized by mutual respect (as opposed to a
supervisory one) with the AT; opportunities for ongoing collaboration with the
AT in planning but independence in teaching a lesson; room to be flexible with
the content and the methods they use to teach; feedback from the AT;
opportunities to observe good teaching by the AT; and a demanding but not
excessive workload while the teacher candidate is on a practicum placement.
This study highlights the complexities and the necessity of providing
opportunities for teacher candidates to develop their skills to interact
productively with other professionals and to deal with difficult situations in a
school context. It does not, however, address how such skills are developed in
teacher candidates.
The need for the development of such positive relationships is supported by the
work of Evelein, Korthagen, and Brekelmans (2008) in another study which
found that new teachers have much lower measures of need fulfillment in the
early stages of their careers than more experienced teachers. This difference
might be attributed to more skill, and therefore more success, in dealing with
day-to-day interactions that form the basis of classroom implementation. This
study is supported by the work of Ferrier-Kerr (2009) who found that the
professional relationships between the AT and the teacher candidates are based
on several factors, including: personal connections; interpretation or
understanding of respective roles (of the AT and the teacher candidate); the
AT's style of supervision; and engagement in reflective practice.
Grundoff (2011) studied first year teachers’ perceptions of how their practicum
experiences helped them prepare for early career teaching. Findings supported
the importance of the practicum in developing contextualized skills of the
profession but found that some practicum features supported skill development
while other features hindered development. Participants in this qualitative study
found that the practicum had many differences from the reality of actually
teaching in their own classrooms. Differences that facilitated the transition of a
teacher candidate into the role of teacher included feeling like they were part of
the school community once they were teachers; they felt respected by other
teachers and by students. They also valued having time to develop relationships
with the children. They enjoyed the increased sense of autonomy.
However, they also found some differences between the practicum and their
professional teaching roles that they felt had hindered their transition into their
professional teaching roles. Differences that hindered the transition included a
feeling of shock and anxiety upon beginning to teach due to a discrepancy
between expectations and reality, which mostly had an impact on teachers for
their first few weeks of teaching. Participants’ transition was disrupted in two
areas due to the mismatch. First, new teachers did not have a clear
understanding of what they had to do at the beginning of the year. Second, new
teachers did not recognize the size and scope of teaching. Practicum placements
were limited to practicing and developing skills within the context of the
classroom. As new teachers, they were overwhelmed by their responsibilities
outside of the classroom. As a result, student teachers underestimated the range
and amount of work, and were frequently tired as new teachers.
Each of these prior research studies informed the selection of questions we used
to structure the current study. While the current body of literature about the
importance, nature, location, relationships, and structure of the practicum as a
component of a teacher preparation program has been examined, the
comparative perceptions of these characteristics across program routes toward
a B.Ed. degree in the local jurisdiction do not appear to have been studied.
Method
Participants. Participants in this study were from both the consecutive and the
concurrent programs at three campuses from one Northern Ontario University,
during the 2011-2012 academic year. A total of 212 respondents (25 males, 186
females, 1 gender not reported) completed the survey and were included in the
study. Respondents’ ages ranged between 18 and 58 years old (M = 23.18, SD =
4.91). Respondents were completing or had completed either a consecutive or a
concurrent teacher education program.
Research Questions. The two key questions for this aspect of the larger study
are: 1) Do the additional six weeks of practicum experience make a perceived
difference in the level of knowledge and confidence of the teacher candidate?
2) Do student teachers perceive that the distribution of the practicum
experiences across time influences their knowledge and confidence as teachers?
Potential participants followed the link to the information sheet, which
provided all information necessary for informed consent.
Potential participants could agree to continue or could exit the program, after
reading the introductory letter and examining the informed consent form.
Completion of the questionnaire indicated each respondent’s agreement to
participate in the study. One reminder of the opportunity to participate in the
survey research was posted on the Facebook site approximately one month after
the study was first advertised. Data collection was completed over a two month
period. Completion of the entire questionnaire required approximately 15
minutes. Only those questions related to perceptions of the value of learning as
a direct result of the practicum experiences were analysed for this
subcomponent of the larger knowledge and confidence study.
Results
Independent samples t-tests were conducted to compare participants from the
consecutive and concurrent education programs on their average responses to
each of the six survey questions.
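As an illustration only, the type of analysis used here (an independent samples t-test comparing the two program groups on a survey item) can be sketched in Python with SciPy. The response vectors below are hypothetical Likert-style scores, not the study’s data:

```python
# Sketch of an independent samples t-test comparing two program groups on one
# survey item. The scores below are hypothetical 0-4 Likert-style responses,
# NOT the study's data.
from scipy.stats import ttest_ind

consecutive = [3, 3, 2, 4, 3, 3, 2, 3, 4, 3]  # hypothetical consecutive group
concurrent = [4, 3, 4, 3, 4, 4, 3, 4, 3, 4]   # hypothetical concurrent group

# With the consecutive group listed first, a higher concurrent mean yields a
# negative t statistic, matching the sign convention of the t values reported
# in this section.
t_stat, p_value = ttest_ind(consecutive, concurrent)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A p-value below .05 would be reported as a significant difference between the program groups.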
For the question, “How well do you think your practicum placements have
prepared you to manage a classroom?” results demonstrated significant
differences between consecutive and concurrent program participants, t(207) = -
2.018, p = 0.045, with concurrent program participants (M = 3.30, SD = 0.89)
scoring, on average, higher than consecutive program participants (M = 3.05, SD
= 0.82).
For the question, “How well do you think your practicum placements have
prepared you to interact with parents?” results did not demonstrate significant
differences between consecutive and concurrent program participants, t(206) = -
0.568, p = 0.571, with concurrent program participants (M = 1.45, SD = 1.30)
scoring comparably to the consecutive program participants (M = 1.35, SD =
1.24).
For the question, “How well do you think your practicum placements have
prepared you to interact with administrators?” results did not demonstrate
significant differences between consecutive and concurrent program
participants, t(207) = -1.429, p = 0.154, with concurrent program participants (M
= 2.20, SD = 1.24) scoring comparably to the consecutive program participants
(M = 1.95, SD = 1.25).
For the question, “How well do you think your practicum placements have
prepared you to manage difficult behaviours?” results demonstrated significant
differences between consecutive and concurrent program participants, t(207) = -
2.205, p = 0.029, with concurrent program participants (M = 2.98, SD = 1.06)
scoring, on average, higher than consecutive program participants (M = 2.65, SD
= 0.98).
For the question, “How well do you think your practicum placements have
prepared you to deal with difficult situations?” results demonstrated significant
differences between consecutive and concurrent program participants, t(207) = -
2.265, p = 0.025, with concurrent program participants (M = 2.80, SD = 1.08)
scoring, on average, higher than consecutive program participants (M = 2.46, SD
= 1.08).
For the question, “How well do you think your practicum placements have
prepared you to address the learning needs of all children?” results
demonstrated significant differences between consecutive and concurrent
program participants, t(207) = -2.489, p = 0.014, with concurrent program
participants (M = 3.27, SD = 0.81) scoring, on average, higher than consecutive
program participants (M = 2.96, SD = 0.93).
Table 1
Comparison of Concurrent and Consecutive Education Program Participants’ Responses
to the Individual Items Related to the Question: How well do you think your practicum
placements have prepared you to:

                                         Education Program
                                    Concurrent        Consecutive
                                    M       SD        M       SD        t
manage a classroom                  3.30    0.89      3.05    0.82     -2.018
interact with parents               1.45    1.30      1.35    1.24     -0.568
interact with administrators        2.20    1.24      1.95    1.25     -1.429
manage difficult behaviours         2.98    1.06      2.65    0.98     -2.205
deal with difficult situations      2.80    1.08      2.46    1.08     -2.265
address the learning needs of
all children                        3.27    0.81      2.96    0.93     -2.489
When the six questions were combined to create a total score, results
demonstrated a significant difference between the participants from the
concurrent and consecutive education programs on average responses to the
overall question: How well do you think your practicum placements have
prepared you to…? (t(207) = -2.186, p = 0.030). Specifically, participants from the
concurrent education program (M = 2.67, SD = 0.86) scored higher, on average,
than participants from the consecutive education program.
Discussion
This cross-sectional study was an attempt to understand the knowledge and
confidence of current and recently graduated faculty of education students
regarding their perceptions of how well prepared they felt to handle complex
interactions that are required in a classroom. Results indicate that teacher
candidates in this program are feeling well prepared to handle some of the
interaction tasks that will be required of them as teachers but feel severely
underprepared in other areas. Both groups of teacher candidates feel fairly
knowledgeable and confident in their ability to manage the day-to-day
operation of a classroom and to address the learning needs of all children. Both
groups in this study reported feeling they had achieved considerable knowledge
and confidence in both of these areas through their practicum experiences,
reporting 3.30 (concurrent) and 3.05 (consecutive) on measures related to
managing a classroom and 3.27 (concurrent) and 2.96 (consecutive) on their
ability to address the learning needs of all children. Since each of these abilities
is a crucial aspect of a teacher’s role in the classroom, these results are
encouraging, although significantly more so among concurrent students.
However, the remaining four measures of this aspect of the larger study are a
cause for concern about the efficacy of practicum experiences as they are
currently structured. While survey participants felt that they had developed
some knowledge and confidence to manage difficult behaviours in the
classrooms (2.98 concurrent; 2.65 consecutive), participants felt less
knowledgeable and confident in their ability to deal with difficult situations,
interact with administrators, and interact with parents (Table 1). In fact, our
results suggest that as teacher candidates move further and further away from
interaction with students and face the need to address situations with greater
complexity or which involve other adults in the school environment, they feel
less prepared to do so confidently.
Preparation to interact with parents is an area that is very weak for both
concurrent and consecutive teacher candidates, indicating that the practicum
experiences, as they are currently structured, provide them with insufficient
exposure to these situations to allow them to develop knowledge and confidence
in this area.
In this jurisdiction, the practicum placement for both concurrent and consecutive
teacher candidates is evaluated by associate teachers and by faculty advisors.
The evaluations are structured by criteria for assessment, which include foci that
reflect the standards of the profession in the province. Among these standards is
a category of professional behaviours titled “Leadership in Learning
Communities”. The Ontario College of Teachers, the accreditation body for the
teacher preparation institutions in the province, explains this aspect of the
standard as follows (http://www.oct.ca/public/professional-
standards/standards-of-practice; Accessed January 28, 2014):
In most jurisdictions, this standard forms the basis of assessment of the teacher
candidates’ professional interactions in the school context. The standards are
broken down into observable behaviours that are assessed by both the associate
teacher and the faculty advisor at set times during the teacher preparation
programs, which differ across the concurrent and consecutive program routes.
Each Faculty of Education has internal control over how they define details
within each of the standards of practice. In the jurisdiction where this study took
place, these standards are associated with two professional behaviours: 1)
collaborating with others to create a learning community; and 2) assuming
professional responsibility (the planning binder, duties, meetings, punctuality,
and initiative). It may be that these behaviour descriptions are too vague to draw
the attention of the teacher candidate, the associate teacher, or the faculty
advisor to the specific types of knowledge and skills addressed in this survey. It
is often said that what is assessed gets attention and that adage may apply in
this instance. If we were specific about the types of interactions that need
development through a practicum experience, we might expect that teacher
candidates could more fully develop knowledge and skills with classroom
management, interaction with parents, interaction with administrators,
managing difficult behaviours, dealing with difficult situations, and addressing
the learning needs of all children.
Conclusions
This study was part of a larger study that investigated the knowledge and
confidence teacher candidates perceive they gain from many aspects of their
teacher preparation programs, including the practicum components.
Due to the length of the larger study, investigation into the practicum aspects of
the study focused on only six measures. It may be of value to examine the
practicum experiences in either or both of these participant populations more
thoroughly through more detailed questions and to triangulate survey responses
through other forms of data.
We cannot overextend the interpretation of our data, since it reflects
responses from only 212 teacher candidates. While this is a solid basis for some
conclusions, this number of participants represents only about 20 percent of the
total teacher candidate population from this university in the study year. It may
be that a larger participant group would reveal different trends.
References
Beck, C., & Kosnik, C. (2002). Components of a good practicum placement: Student
teacher perceptions. Teacher Education Quarterly, Spring, 81-98.
Beck, C., Kosnik, C., & Rowsell, J. (2007). Preparation for the first year of teaching:
Beginning teachers’ views about their needs. The New Educator, 3, 51-73.
doi:10.1080/15476880601141581
Brouwer, N., & Korthagen, F. (2005). Can teacher education make a difference? American
Educational Research Journal, 42(1), 153-224.
Bryan, S.L., & Sprague, M.M. (1997). The effect of overseas internships on early teaching
experiences. Clearing House, 70(4), 199-201.
Bullough, R.V., Young, J., Birrell, J.R., Clark, D.C., Egan, M.W., Erickson, L., Frankovich,
M., Brunetti, J., & Welling, M. (2003). Teaching with a peer: A comparison of two
models of student teaching. Teaching and Teacher Education, 19, 57-73.
Evelein, F., Korthagen, F., & Brekelmans, M. (2008). Fulfilment of the basic psychological
needs of student teachers during their first teaching experiences. Teaching and
Teacher Education, 24, 1137-1148. doi:10.1016/j.tate.2007.09.001
Ferrier-Kerr, J.L. (2009). Establishing professional relationships in practicum settings.
Teaching and Teacher Education, 25, 790-797. doi:10.1016/j.tate.2009.01.001
Grierson, A., & Denton, R. (2013). Preparing Canadian teachers for diversity: The
impact of an international practicum in rural Kenya. In L. Thomas (Ed.), What is
Canadian about teacher education in Canada? Multiple perspectives on Canadian
teacher education in the twenty-first century (pp. 187-210). Sherbrooke, PQ:
Abstract. The aim of this study was to explore a team of teachers’ (n=4)
use of theoretically based formative assessment strategies within the
course of a learning study. The thematic analysis is based on video
observations of teachers’ discussions during planning meetings,
teaching in the classroom and evaluation meetings. The subject-specific
content focused on learning about fractions, specifically the concepts of
double and half, in three groups of six- to seven-year-old students (n=51
in total). An iterative process was used in which the teachers in the
study used video recordings as a tool for analyzing their work in the
classroom. The thematic analysis shows that the use of a general
learning theory – variation theory – strengthens the effect of the
teachers’ formative assessment. Without explicit use of the assumptions
from the theoretical framework, the formative assessment strategies had
only a minor impact on students’ learning outcomes.
1. Introduction
There have been several attempts to develop teachers’ formative assessment by
means of in-service training. A range of studies concerning programs for in-
service training aiming at developing teachers’ competence in performing
formative assessment have been carried out, with various outcomes (e.g.,
Bennett, 2011; Phelan, Choi, Vendlinski, Baker & Herman, 2011). It thus seems rather
difficult to transform formal training about formative assessment into classroom
practice. Wiliam (2006) claims that “tools for formative assessment will only
improve formative assessment practices if teachers can integrate them into their
regular classroom activities” (p. 287). School-based in-service training could
therefore be one way to develop teachers’ abilities to use formative assessment
to increase the students’ learning outcomes. This school-based research project
involved a team of teachers who had previously participated in an in-service
training course on formative assessment, and the focus was on their use of
theoretically based formative assessment strategies during a learning study.
Research has shown that formative assessment often lacks a theory of action,
which makes it difficult to evaluate and understand the mechanisms causing the
intended effects. Despite the clear message that more formative assessment in
school can lead to great improvements in student learning, the above-mentioned
study by Phelan et al. (2011) emphasizes that it is not simply the case that any
formative assessment tool leads to an improvement in student performance. The
teachers’ competence
in performing the formative assessment is most likely crucial for the outcome,
and therefore teacher variables need to be more closely analyzed. What matters
seems to be how the formative assessment is carried out, i.e., what is noted by the
teacher and the way in which this is addressed during the instruction. This means
that effective formative assessment is constituted by both knowledge of the
content in question and knowledge of what it takes to learn this specific content,
in line with Bennett (2011), as well as Black and Wiliam (2009). This is the reason
why we have added a theoretical framework to strengthen the teachers’
knowledge about learning during the in-service training.
In this article, we have taken into consideration the fact that to be formative, the
instruction needs to be specific at a micro-level in relation to what is elicited
through the assessment. This involves determining in what way the content is
offered in relation to how it is experienced by the students and doing this
according to theoretically based assumptions about what it takes to learn. The
teachers in our study were guided by the variation theory of learning (Lo &
Marton, 2012; Marton & Booth, 1997; Marton, 2014; Lo, 2013; Holmqvist,
Gustavsson & Wernberg, 2008; Holmqvist, 2011), to which they were introduced
by the researchers participating in the interventions during the introduction to
the in-service project. The concept of variation in variation theory does not refer
to varying methods, but rather to varying the features of the content that have
not been previously discerned by the students. The assumptions are that to
discern new aspects of the object of learning, these have to vary against an
invariant background consisting of the aspects already known. Variation theory
will be more thoroughly described below.
4.1 Method
The analysis of the teachers’ strategic use of variation theory is qualitative and
based on video recorded meetings and interventions as well as observations. A
thematic analysis was made (Boyatzis, 1998) based on several readings of the
material (which was transcribed verbatim), as well as watching and re-watching
of the video recorded meetings and lessons. The students’ summative
assessments were used as a triangulation to strengthen the observations and find
out whether the teachers’ use of theoretical assumptions was reflected in
students’ learning outcomes.
4.2 Context
The study took place during an in-service training project, which was conducted
in a school district in a rural area close to a small town in the south of Sweden. In
the Swedish school system, all classes are mixed with regard to both gender and
abilities. Students spend 9 years in compulsory school, from age 7 to 16. The first
time the children receive grades is at the age of 12, i.e., in grade 6. All classes
include children of the same age and during the first 6 years, the classes
normally consist of groups of 15-25 children.
The lessons and the planning meetings were video recorded. Before each new
lesson, the recording of the previous lesson was analyzed, and experiences from
that lesson were evaluated and discussed, including the results of the pretest
and posttest. Three lessons were conducted, meaning that there were four
meetings between the teachers and researchers (see Fig. 1). During the first
meeting, the pretest was constructed and the first lesson was jointly planned.
During the second and third meetings, the previous lessons were evaluated with
respect to learning outcomes, and the coming lessons were planned on the basis
of this evaluation. During the fourth meeting, the last lesson and the results from
the tests were discussed, as was the outcome of the learning study as a whole.
The length of intervention for each group of students was one lesson comprising
approximately 15 minutes of pretest, 30 minutes of instruction, and 15 minutes
of posttest. The lessons were planned and evaluated on the basis of variation
theory.
4.4 Data
The data collected during the school-based study consists of video recorded
meetings (3), video recorded lessons (3) and one final observed meeting.
Participating observations: Four planning meetings with the teachers were
conducted during the four weeks of the learning study. Each meeting took place
at the schools where the teachers worked. The meetings lasted 2 hours.
Videotaped lessons: Three lessons were videotaped. Each lesson lasted 1 hour.
Group 1 (Cycle 1) consisted of 24 students, group 2 (Cycle 2) consisted of 13
students and group 3 (Cycle 3) consisted of 14 students.
4.5 Analyses
The video recordings from the lessons (n=3) and planning meetings (n=4) were
transcribed verbatim. The data was analyzed as an exploratory single case study
(Stake, 2006; Yin, 2009) at a fine-grained level (Phelan et al., 2011). For the
qualitative analysis of the teachers’ use of theoretically based formative
assessment strategies and reasoning, a thematic analysis was used as a first step
(Boyatzis, 1998). The analysis was based upon how the core concepts of variation
theory – contrast, simultaneity and variation (Holmqvist, 2011; Kullberg, 2010;
Marton, 2014; Lo, 2012) – were used by the teachers, and in what respects they
were used in the planning, implementation and evaluation of each lesson.
Thereafter a more detailed and specific analysis was made across the three
lessons including the planning and evaluation meetings, during which the
different data sources were compared and analyzed in parallel.
5. Results
The analysis describes the different ways the teachers’ formative assessment
strategies were expressed during the study. The analysis resulted in three
themes regarding the teachers’ development of formative assessment strategies
guided by the theoretical assumptions. The first theme describes how the
teachers, through their formative assessment, gradually developed insight into
what critical aspects are and how they can be used to increase the students’
understanding of halving and doubling (Theme A). The second theme highlights
another example of how the teachers’ formative assessment led to increased
evidence-based insight into the students’ understanding during the course of the
learning study (Theme B), by taking into consideration the connection between
the object of learning and the learners’ pre-knowledge (non-dualism). The third
theme addresses the teachers’ feedback during their respective lessons, focusing
mainly on how the teachers used variation theory (Theme C) in their feedback to
the students.
During the screening, when one of the students was asked by the teacher to
double the number four, the child answered five (i.e., four plus one). This
student evidently interpreted “add one more time” as “add one (the number 1)”.
During the planning meeting before the first lesson, one of the teachers
confirmed that this was a critical aspect of the object of learning.
T1: Yes, we have many children who cling to that; they don’t get
doubling, for them it is … plus one. We tried previously when we taught
this particular topic not to start with one, because we thought that was
what made them stick to this, but there are those who still hold on to
“plus one” even if we start by asking them to double three.
The teachers and the researchers consequently set up the hypothesis that one
important part of the instruction might be to avoid the expression “one more
time”, which the students could interpret to mean “plus one”. Instead, the
expression “the same amount one more time” was to be used during the first
research lesson to make it possible for the students to discern that “one more
time” is not the same as “one more”. For the students who had not already
noticed the difference, this might be a critical aspect.
One of the concepts in the theoretical framework used here is about how to
make aspects of the content discernible for the students. In this case the concept
of simultaneity was used. The first lesson was therefore planned to allow the
students to apply doubling and halving at the same time, but without having the
original number of objects on their desks. The decision not to let the students see
the objects representing the original number was based on previous experience
of students making a mechanical visual doubling, without really understanding
the concept of doubling, e.g., by adding the same amount once again, which
results in an understanding of double as “the same as”; doubling four then gives
four instead of eight, as the student added four to the original amount.
The review during the meeting of the video recording of the first lesson clearly
showed that the teacher had followed the agreed plan based on the
micro-analysis, that is, using the expression “the same thing one more time”
rather than “the same thing twice”:
T1: Now, you are to add twice as many as six. I put six pieces here. I have
now added six one time. If we want to have double the amount, we must
have this six and then I have to add another six pieces.
One conclusion after this discussion was that it was very important to be even
more explicit and use the phrase “the same amount as the first and then the
same amount one more time” instead of “the same amount twice”.
Another discussion concerned how to design the task to help the children
distinguish between the original amount and the new amount. While watching
the video recording, the teacher who had taught the first lesson reflected on
whether one reason for the students’ confusion might be that the students did
not have the initial number of objects in front of them while working with their
tasks.
T1: So, the question is whether it is best to put the objects away or
whether it would have been better to let them have the original number
of objects in front of them and let them do it again, so to speak … do it
next to … so that both were there to make it possible for them to compare
…
Here, the teacher who had given the first lesson wondered whether it might
have been better for the students’ learning if they had had the original amount
that they were asked to double or halve in front of them so that they would be
able to contrast this with the new amount. This discussion concerned how the
children could be helped to separate the original amount from the solution,
avoiding just adding the same amount as the initial and adding this to the
‘doubled’ amount (double of four experienced as plus four). In the end, the
group agreed upon increasing the contrast between the amounts, and the plan
for the second lesson was therefore to work with the original number of objects
and its double or half simultaneously but separated from each other. To make
this relation clear, it was decided that the original amount should be placed on
one side of a border and the doubled or halved amount should be placed on the
other. In this way, the original number of objects and the halved or doubled
amounts could be contrasted and discernible simultaneously. This should be
done by both the teacher and the students while they solved tasks during the
lesson, and the initial number of objects should be explicitly contrasted with the
new amount, in accordance with the assumptions of variation theory.
The increased mean scores indicated that the design chosen for this lesson had
been successful. However, the tasks in the test that allowed the original number
of objects to be chosen by the students themselves were still problematic in this
second group of students. This third meeting therefore included discussions
about various ways to improve the students’ abilities to solve these tasks.
The teacher who taught the second lesson (T2), the teacher who would teach the
third lesson (T3), and one of the researchers (R1) discussed the issue of
simultaneously contrasting different amounts when working with the concepts
of doubling and halving in class.
R1: We talked last time about having the original amount, and then half
and twice that amount. This change, I think, would be interesting
because you would have the example of both half and twice the same
original amount. … You put the original amount there at the same time
as you tell them to put down half and double that amount.
T3: However, why have you put the ruler along there? (Points at R1’s
paper, on which a ruler has been laid down to divide the paper into two
areas.)
T2: To make them see the point with … the original amount.
During this third meeting, the group decided that simultaneity, not only
regarding the initial amount and half or double but all three, should be used in
the subsequent lesson for both the original numbers and their doubles and
halves. The students would work with double and half of the original amount at
the same time, meaning that even more simultaneity would be involved in the
teacher’s instruction. The original amount, half the amount, and double the
amount would be contrasted with each other simultaneously. Another
important conclusion was that the original amount should remain unchanged in
the middle of the children’s worksheet, and the teacher should say “the same
thing again and once more” to explain the doubled amount. This would be
emphasized by the use of one more borderline than in the second lesson—that
is, the paper used for the exercise would be divided into three fields, instead of
two, with the initial amount placed in the middle, and the doubled and halved
amounts on either side, separated by lines.
The pretest and posttest associated with the third and last research lesson did
not show improvements over the second lesson. The mean score in this group of
students (n=14) was 6.1 in the pretest and 7.4 in the posttest, a difference that
was not significant (p=0.162). This can be compared to the previous research
lesson (lesson 2), after which the mean posttest result had increased significantly
(from 6.5 to 8.2 points). The tasks in which the students themselves chose the
original number to double or halve were still problematic, as were other tasks
dealing with halving and doubling a predetermined amount.
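Pre-to-post gains like these (e.g., 6.1 to 7.4, n=14, p=0.162) are typically tested with a paired samples t-test on each student’s two scores. A minimal sketch with invented scores, not the study’s data:

```python
# Paired samples t-test sketch for pretest vs. posttest scores of the same
# students. The score lists are hypothetical, NOT the study's data.
from scipy.stats import ttest_rel

pretest = [6, 5, 7, 6, 8, 5, 6, 7, 6, 5, 7, 6, 6, 7]   # hypothetical, n=14
posttest = [7, 6, 8, 7, 8, 6, 7, 8, 7, 6, 8, 7, 7, 8]  # hypothetical, n=14

# ttest_rel pairs each pretest score with the same student's posttest score;
# a negative t here indicates higher posttest scores.
t_stat, p_value = ttest_rel(pretest, posttest)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Whether a mean gain reaches significance depends on the variability of the individual differences as well as the sample size, which is why a gain of 1.3 points in a group of 14 students can fail to reach p < .05.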
This result illustrates how small changes in instruction make differences in what
the students can learn. It also shows how, even if the teachers in their joint
planning of the lesson had found a way to adjust the instruction to the students’
knowledge, these plans were not always actually understood and implemented
by the teacher giving the lesson. To understand the result, the teachers and the
researchers reviewed the videotape of the lesson. This showed that the exercise
where the intention had been to use simultaneity to clarify the difference
between the original amount and the doubled and halved amounts, was
performed mostly by the teacher, but not by the students during their work in
the classroom. The change planned for this lesson, to use simultaneous contrast
between all the amounts to make the difference even more explicit, was thus
only partly carried out by the teacher in charge of the third lesson. The formative
assessment used during the lesson was therefore not informed by the theoretical
assumptions discussed during the planning meeting. One example was the
decreased number of examples of amounts used during the teacher-led
instruction, which only included one number (6). Another issue was when the
teacher, demonstrating the examples, twice placed the objects in the wrong area
on the overhead projector (placing the doubled amount in the area for half). She
corrected the error when it was pointed out by a student, but did not explain to
the class what had been wrong or why she changed the placement of the objects.
This oversight might give students the impression that this placement is
randomly chosen and not a deliberate content-based choice.
In conclusion, the third lesson could not verify the group’s hypothesis that
increased contrast between the different amounts would help students to discern
the concepts of double and half, since simultaneity was not used as agreed
upon. The theory was thus not used as planned and the students’ learning
outcomes did not improve. However, the teachers became even more aware of
how small changes might have a great impact on students’ performance and of
how theoretical assumptions can be handled in the classroom.
T1: The difficulty with “half” is not really the concept of “half”. As long
as you stick to saying “half”, then you can do it. Because these things,
they can divide them. What makes it complicated, as we have understood
it, is when you say half as many. For that, of course, makes it difficult
because there should be fewer and yet you say many.
Analysis of the pretest and posttest results associated with the first lesson
showed that before the lesson, more students gave a correct answer to the
questions asking for “twice as much” than to the questions asking for “half as
many”. After the lesson, students answered the questions about halving better,
but still not as well as the questions about doubling. Five items in the pretest
and posttest concerned doubling, and five concerned halving. The students’
mean scores on the pretest were 0.45 for the items on doubling and 0.33 for
those on halving, while the corresponding scores on the posttest were 0.51 and
0.44. However, despite these data, at the second planning meeting, the teacher
quoted above still believed that the concept of “half” was easier for students.
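The item-category means reported above (e.g. 0.45 for doubling versus 0.33 for halving) can be computed from item-level scores as sketched below; the item identifiers and the 0/1 scoring are our assumptions for illustration, not the paper's instrument.

```python
from statistics import mean

# Hypothetical item ids: five doubling items (d1..d5), five halving items (h1..h5).
ITEM_CATEGORY = {f"d{i}": "doubling" for i in range(1, 6)}
ITEM_CATEGORY.update({f"h{i}": "halving" for i in range(1, 6)})

def category_means(responses):
    """responses: one dict per student, mapping item id -> score (1 correct, 0 not).

    Returns the mean score per item category, pooled over students and items.
    """
    scores = {"doubling": [], "halving": []}
    for student in responses:
        for item, score in student.items():
            scores[ITEM_CATEGORY[item]].append(score)
    return {category: mean(values) for category, values in scores.items()}
```

Contrasting the two category means in this way is what made the doubling/halving difference visible to the group.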
R1: Yes, but we thought last time that half was harder … half as many.
T2: However, I think you learn to take “half” before you learn how to
“double”, that is … to share with a sibling or with a friend … they have
known this since before …
R1: Yes, exactly, and the b-alternative is the most difficult straight
through, except for the question with the squares.
T1: The question is whether it would have been easier if they had had
this stuff [in front of them], then I think certainly they can take half of it,
but only if they can actually see it …
R1: Yes, but they cannot either. They have the stuff there, but still …
T1: However, don’t they get it more right when they have to split it in
half?
This example shows that eliciting the students’ understanding through fine-
grained analysis of the scores on diagnostic test questions gradually changed the
teachers’ view of what the students found hard to understand. Before the first
test was conducted (during the first research lesson), the teachers assumed that
halving was easier for the students to understand than doubling, but when the
tests revealed that this was not the case, their opinion slowly altered. The
excerpts indicate, however, that their initial opinion was quite persistent and
difficult to change; they were not easily convinced of the contrary even when the
test results showed it. The design of instruction thus risks focusing on aspects
that are not problematic for the students and neglecting the aspects that are
critical. The test was designed on the basis of variation theory, and by
contrasting halving with doubling, it was possible to compare the students’
knowledge of these concepts. The results of the tests informed the teachers’
formative assessment, as long as the teachers accepted what the tests really said
about the students’ knowledge.
In spite of the teacher’s obvious engagement with teaching in the first lesson and
in the evaluation of this lesson, the subsequent analysis of the video recording
showed that the lesson rarely included direct feedback to the students. The
teacher listened to the students’ answers, but seldom commented on whether the
answers were correct.
S1: We had six, then we thought that half of them was all of them.
T1: You thought half was all and so you took them all away. So, we have
some different answers. Let us see what the others thought. What did
you come up with, [student name]? (Turns to another group.)
After listening to the first student, the teacher turned directly to the next group
without revealing the correct answer, which was not revealed until the
summing-up at the end of the lesson. This lack of feedback was
not highlighted or discussed when the group evaluated the first lesson during
the second planning meeting, so we do not know whether it was common for
this teacher or not; it is possible that the teacher conducted the lesson differently
from how she usually would due to her awareness of being filmed.
Nevertheless, it is evident that the students were left in doubt several times.
According to the videotapes, the teacher who conducted the second lesson gave
the students more detailed feedback regarding critical aspects than did the first
teacher. It is probable that the discussion during the evaluation of the first lesson
had an impact on this teacher’s awareness of which aspects of the concepts of
doubling and halving the children might not have discerned, which may have
led her to challenge the students’ answers with more developed formative
assessment than the first teacher had. In the example below, the teacher
worked formatively with a student who was asked to show half of eight. The
teacher clearly contrasted the doubled and halved amounts, comparing them
with each other to help the students discern the difference between double and
half, taking advantage of the experiences from the previous lesson.
S: Half!
T2: Half. Is there the half of the amount on that side here? (Points to the
tray and one of the piles with four chestnuts.)
T2: How many are here? (Points to the tray and one of the piles with four
chestnuts.)
S: Four!
T2: And how many are here? (Points to the tray and the other pile with
four chestnuts.)
S: Four!
Here, the teacher made a serious effort to understand which aspects the student
had experienced and which the student had not yet discerned. She evidently
used knowledge gained from the analysis of the first lesson about how the
aspects might be discerned and how to use patterns of variation to make aspects
discernible. She noticed that the student had taken four chestnuts from the
original amount, resulting in two piles with the same amount of nuts, i.e. four
chestnuts in each pile. As she wanted the student to discern both eight and half of
eight (four), she posed a question to make it possible for the student to discern the
difference. Because the student had taken four items from the original pile of
eight items when halving, the distinction between four and eight was not visible
anymore; the child ended up with two piles with equal amounts instead. Based
on such mistakes, to draw attention to the difference between the original
amount and half this amount, the teacher repeatedly used simultaneous contrast
in her instruction. There were, however, also examples during the lesson in
which the teacher did not give a clear indication of whether a student’s answer
was correct.
During the third research lesson, the students were given some direct feedback
during their performance of the tasks, but not after, and often this feedback
neither confirmed nor rejected the students’ answers. The lack of
correspondence between the intended design of the third lesson and the
teacher’s actual performance of it could be one reason for this lack of feedback.
T3: You also think that 4+4 is 8. Have a look here … I have 8 pieces from
the beginning. That is my quantity, my amount that I am not allowed to
touch. If I am going to put down the double amount, I have to put down
as many as I have here (points at the 8 pieces in the middle area) + the
same amount. This means I have to put down 8+8, and this means?
The feedback in the third lesson thus ended with the teacher telling the students
the correct procedure for solving the problem, instead of taking into
consideration how the students might have understood the problem and which
aspects had been discernible to them.
Overall, the learning study process seems to have helped the teachers to
understand the difficulties students may have in comprehending the object of
learning, and to give relevant feedback to the students.
The teachers’ formative assessment, which increased over the course of the
learning study, was informed by the results of the students’ tests and the video
recordings of the lessons, both of which proved to be powerful tools for the
teachers’ assessment of the students’ learning and understanding. Thus, there is
evidence within the study that the learning study design encourages teachers to
work formatively and that the formative assessment is strengthened by the
theoretical framework used. This is in line with the argument put forward by
Black and Wiliam (2009, 2012) and Wiliam (2009) that in order to be able to give
relevant feedback to a student, the teacher needs a theory of how students learn
and the ability to apply this theoretical understanding in a specific context. This
is also in line with the work of James and Pedder (2006) and Pedder and James
(2012), who suggest that the concept of ‘research lesson’ could be a useful
strategy to promote assessment for learning in classrooms.
The evaluation of the last lesson showed that the simultaneity of double, half,
and the original amount had not been presented in the third lesson as intended.
During the planning meeting before this lesson, the teachers developed a new
design, which was then not implemented during instruction. It was also clear
through subsequent analysis that direct feedback to the students differed
between the teachers and the groups; the teacher conducting the second lesson
gave more frequent and more specific feedback than the other teachers by
explicitly using the concept of simultaneous contrast. It is therefore possible that
the improvement in scores after the second lesson was due to the combination of
increased feedback with a lesson design that followed the assumptions of
variation theory with regard to determining which aspects of the content should
be in focus and which should be kept in the background. On the basis of this, we
can conclude that the outcome of the learning study as a whole might have been
even better had the joint evaluation of the lessons focused more strongly on the
teachers’ feedback (or lack thereof) to the students, both individually and to the
class as a whole, along with discussing the effectiveness of the lesson design, as
revealed by the students’ test results and comments and behavior during the
lessons.
8. Acknowledgments
We would like to thank the participating teachers, who generously shared their
time and teaching; the Learning Design research team (LeaD) at Kristianstad
University, for their encouragement and support; and the Department of
Pedagogical, Curricular and Professional Studies, University of Gothenburg,
Sweden, for partly financing this study. We are also grateful for valuable
comments provided by the reviewers and for the thorough language review
carried out by Dr. Catherine MacHale, which increased the clarity of the paper.
9. References
Adamson, B., & Walker, E. (2011). Messy collaboration. Learning from a learning study.
Teaching and Teacher Education, 27, 29-36.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education:
Principles, Policy & Practice, 18(1), 5–25.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in
Education: Principles, Policy and Practice, 5(1), 7–73.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment.
Educational Assessment, Evaluation and Accountability, 21(1), 5–31.
Black, P., & Wiliam, D. (2012). Developing a theory of formative assessment. In J.
Gardner (Ed.), Assessment and learning (pp. 206–229). (2nd ed.) London: Sage
Publications.
Boyatzis, R.E. (1998). Transforming qualitative information: Thematic analysis and code
development. London: SAGE.
Brewer, M. (2000). Research Design and Issues of Validity. In H.T. Reis & C.M. Judd
(Eds.), Handbook of research methods in social and personality psychology (pp. 3-16).
Cambridge, U.K.: University Press.
Elliott, J., & Yu, C. (2008). “Learning Studies as an Educational Change Strategy in Hong
Kong: An independent evaluation of the ‘Variation for the Improvement of
Teaching and Learning’ (VITAL) Project”, funded by the Department for
Education and Manpower, Hong Kong. Centre for Learning Studies, Hong
Kong Institute of Education. http://www.ied.edu.hk/cls/resources.htm
Elliott, J., & Yu, C. (2013). Learning Studies in Hong Kong Schools: A Summary
Evaluation Report on ‘The Variation for the Improvement of Teaching and
Learning’ (VITAL) Project. Education and Didactique, 7(2), 147-163.
Gustavsson, L. (2008). Att bli bättre lärare. Hur undervisningsinnehållets behandling blir till
samtalsämne lärare emellan. [To become better teachers. How the handling of the
lesson content becomes a topic of conversations between teachers.] Umeå:
University of Umeå.
Hermerén, G. (2011). Good research practice. Stockholm: The Swedish Research Council.
Holmqvist, M. (2002). Lärandets pedagogik. Forskningsansökan till Vetenskapsrådet.
[The Pedagogy of Learning. Research application to the Swedish Research
Council.] Dnr 721-2002-3386
Holmqvist, M., Gustavsson, L., & Wernberg, A. (2007). Generative learning: Learning
beyond the learning situation. Educational Action Research, 15(2), 181-208.
Holmqvist, M., Gustavsson, L., & Wernberg, A. (2008). Variation theory – An organizing
principle to guide design research in education. In A. E. Kelly, R. Lesh, & J.
Baek (Eds.), Handbook of design research methods in education (pp. 111-130). New
York: Routledge.
Stigler, J.W., & Hiebert, J. (1999). The teaching gap: best ideas from the world's teachers for
improving education in the classroom. New York: Free Press.
Stigler, J. W., & Hiebert, J. (2009). The teaching gap: Best ideas from the world’s teachers for
improving education in the classroom. New York: Free Press.
Swedish National Agency for Education (2011). Curriculum for the compulsory school
system, the pre-school class and the leisure-time centre 2011. Stockholm: Swedish
National Agency for Education [Skolverket].
Wernberg, A. (2009). Lärandets objekt: Vad elever förväntas lära sig, vad görs möjligt för dem
att lära och vad de faktiskt lär sig under lektionerna [The Object of Learning: What
students are expected to learn, what is possible for them to learn and what they
actually learn in class]. Umeå: University of Umeå.
Wiliam, D. (2006). Formative assessment: Getting the focus right. Educational
Assessment, 11(3-4), 283-289.
Wiliam, D. (2009). An integrative summary of the research literature and implications for
a new theory of formative assessment. In H. Andrade & G. J. Cizek (Eds.),
Handbook of formative assessment (pp. 18-40). New York: Routledge.
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation 37,
3–14.
Abstract. E-books have become an essential need. Thanks to their strong
points, e-books greatly support the process of learning. E-books allow us
to teach an unlimited number of students regardless of location and
study time. In addition, e-books foster independent, creative and
informatics-oriented thinking in students.
Keywords: e-books, e-learning, technology.
1. Introduction
In recent years, research into the design and production of e-books has
developed considerably. An e-book called Class-book, produced by the
Vietnamese Education Publishing House, illustrates this trend. Students learning
with this e-book can avoid carrying piles of printed books, and a single purchase
can be used throughout all 12 years of schooling. Because e-books support
two-way interaction with customizable, multimedia content, they help students
learn enthusiastically. In addition, e-books have functions that ordinary books
do not, such as feedback, links to related documents, and adaptation to a
fast-changing world. E-books have therefore been, and will remain, a first choice
when innovating teaching methods, equipment and learning content.
Although e-books are important and are being strongly researched and
developed, some of us still do not know what e-books are, how many kinds of
e-books there are, what characteristics e-books have, and how learning with
e-books differs from learning with e-learning. We therefore address these
questions in this article. Moreover, we have built an interactive geometry e-book
that has many technological strong points and improves on current e-books. We
have also designed tools that allow us to group learners, to communicate
electronically, and so on, in order to support the learning process.
2. Research content
2.1. The conception on e-books
According to Wikipedia, an electronic book (digital book, e-edition) is a book-
length publication in digital form, consisting of text, images, or both, readable on
computers or other electronic devices. Although sometimes defined as “an
electronic version of a printed book”, many e-books exist without any printed
equivalent.
Of all the above e-books, interactive e-books currently offer the best support for
learning in general, and for learning mathematics at secondary school in
particular, in line with the orientation towards innovating learning methods.
Strong points of printed books:
- They are ready to use.
- They do not need sub-equipment.
Weak points of printed books:
- The contents are restricted due to the fees of their print.
- The contents are not easy to reuse among related books.
- They do not contain multimedia functions (video, sound, or interaction).
- Each document is single and does not …
Strong points of e-books:
- They are small and light; the reading equipment does not depend on the quantity of e-books.
- They interact in two dimensions, with customization ability for students.
- They allow users to give feedback.
- They allow users to approach other related documents directly.
Weak points of e-books:
- The cost is expensive (due to the need …)
Now, let’s study the most basic parts and functions of the e-forum:
a. Images: this item allows us to switch the webcam or camera signal on or off
and to control it.
In order to switch on the images, we click on the black square Start My Webcam
in the middle of the screen, in the Video frame. After clicking on this square, the
screen displays the prompt Camera and Microphone Access. We click on
Allow, and the screen displays the Preview window with the camera or webcam
images. In order to broadcast the image signal, we click the Start Sharing button.
After clicking on this button, we will see the image signal, and the official image
signal is transmitted to all of the remote sites.
In order to switch off the image signal, or to pause the images, we click on the
Stop My Webcam button at the top right corner of the image frame.
b. Sound: this item allows us to switch the microphone on or off and to adjust its
sensitivity.
We choose Microphone and Test loudspeaker: this step is required when we log
in to the conference room. It is performed only once and does not need to be
repeated. It allows us to choose the microphone input signal (if there are several
inputs) and to test the loudspeaker as well as the sensitivity of the microphone.
To do this, we click on Meeting/Audio Setup Wizard on the menu bar, and then
click Next to continue the installation.
Click the Play Sound button to test the loudspeaker. If the loudspeaker signal is
good, the blue indicator runs along as shown above. Click Next to continue.
After clicking Next, we see a new table with a list of the microphone devices
available on the computer. We choose each item and test the microphone by
choosing the input signals and then clicking Next. To continue, we click Record
to check whether the microphone works. If the microphone is good, a blue
indicator runs along. Finally, we click Next/Finish to finish the sound
installation.
On the menu bar, we click on the symbol and then the down arrow
and choose Connect My Audio. After the connection is completed, the green …
c. Share screen: if we click on Share My Screen, the system offers three options:
Desktop (sharing all the applications running on the computer with the other
remote sites), Application (sharing only selected applications with the
appropriate remote sites while the other applications remain invisible), and
Windows (sharing selected windows of the operating system).
d. Raise hand: this is used to ask the chairperson for permission to speak. On the
menu bar, we click on the (man) symbol, or click on the down arrow and choose
Raise Hand.
e. Discussion (Chat): this works in the same way as the chat function in Yahoo
Messenger.
and assessment can be done by collecting students’ products. E-books
automatically save the traces students leave on them, so this evidence is easy to
store, and the teacher can use e-books to build up files on students. Finally,
testing and assessment can be done using questions and exercises. When
questions and exercises are used to assess learning, for example to determine
students’ levels, their compilation needs to satisfy the following requirements:
+) Questions and exercises must be suitable for the program, for the
knowledge standards of the Ministry of Education and Training, and for the
levels of the students.
+) Questions and exercises must be stated exactly and clearly so that
students understand them correctly.
+) Besides questions and exercises oriented to the basic requirements, we
need to prepare deeper questions and exercises requiring wider knowledge in
order to encourage active thinking.
+) Assessing results means not only giving marks but also giving
comments on the content, form and learning methods, as well as offering help
and plans to support students.
The e-forum function is designed for testing and assessment using questions and
exercises. Using the e-forum has the following strong points:
+) Because the e-forum distributes marking across many teachers, we can
partly avoid subjective bias. E-check allows us to check and assess each student,
and marking written exams no longer requires great effort: the data are entered
once and reused in later sessions.
+) Besides the strong points of objective testing, such as independence
from the teacher’s subjective opinions, assessment of broad knowledge covering
all desired content in a short time, and assessment of students’ quick thinking,
e-books also mark answers automatically. When students finish the exam, the
result is known immediately. Using objective questions and exercises to test
students does not require much effort: the data are entered once and reused in
later sessions.
+) We can easily combine written and oral forms to test students.
The interface of an objective test is as follows:
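The enter-the-data-once, mark-automatically workflow described above can be sketched as follows; the item bank contents and function names here are hypothetical, not the paper's actual items.

```python
# Hypothetical item bank, entered once and reused across test sessions.
ITEM_BANK = {
    "q1": {"prompt": "Which line keeps an invariant direction?", "answer": "A"},
    "q2": {"prompt": "Twice as much as 6 is ...", "answer": "C"},
}

def score_exam(responses):
    """Mark an objective test automatically: return total marks and
    per-item correct/incorrect feedback."""
    feedback = {
        item_id: responses.get(item_id) == item["answer"]
        for item_id, item in ITEM_BANK.items()
    }
    return sum(feedback.values()), feedback

# The student sees the result immediately after submitting.
marks, feedback = score_exam({"q1": "A", "q2": "B"})
```

Because the bank is stored once, the same items can be reassembled into new tests, and many teachers can share the marking load without re-entering data.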
+) With the help and instructions of the teacher, students propose a way
of approach, survey the problem and collect data.
From these factors, we see that discovery learning is divided into three kinds:
their mistakes easily, and from that, students can remedy the mistakes in the
next problem.
Example 2
Given two fixed points B and C on the circle (O; R) and a point A moving on
this circle, prove that the orthocenter of triangle ABC lies on a fixed circle.
The divided solution in the interactive e-book is as follows:
Knowledge (TT1) [Figure]
According to the hypothesis, B and C are fixed on the circle (O; R). Thus, BC is
fixed. The directions of a line d are the set of lines parallel to or coincident with
d. Hence, the direction of line:
(A) AH is invariant  (B) AB is invariant  (C) BH is invariant  (D) OH is invariant
Branch 1 (Branch (A))
If students choose answer (A), this is correct. They continue to deal with
knowledge (TT2).
Branch 2 (Branch (B))
If students choose the answer (B), this is incorrect. The system of the e-book
gives the complement instruction Hd and the situations of the students’
mistakes:
The complement instruction Hd
A moves on the circle, so the line connecting a fixed point with point A
always has a changeable direction.
A moves on the circle, so the orthocenter H of triangle ABC is also
movable. Thus, the line connecting a fixed point with point H always has a
changeable direction.
The invariant line is the line … [Figure]
(A2) AB, since AB always makes a constant angle with BC.
(B2) AH, since AH always makes a constant angle with BC.
(C2) AO, since AO always makes a constant angle with BC.
(D2) AC, since AC always makes a constant angle with BC.
If students choose answer (B2), this is correct. They continue to deal with the
knowledge (TT2).
If students choose the answers (A2), (C2), or (D2), they come back to the
knowledge (TT1) to relearn.
Branch 3 (Branch (C))
If students choose answer (C), this is incorrect. The system of the e-book gives
the complement instruction Hd and the situations of the students’ mistakes:
If students choose answer (C4), this is correct. They continue to deal with the
knowledge (TT3).
If students choose answers (A4), (B4), or (D4), they come back to the
knowledge (TT2) to relearn.
Knowledge (TT3) [Figure]
Thus, B' is a fixed point on circle (O) (B' is the intersection point of the fixed
line CO with (O)) and B'B // AH. Translating by vector B'B, we have:
If students choose answer (B), this is correct. They continue to deal with the
knowledge (TT4).
If students choose answers (A), (C), or (D), they come back to the
knowledge (TT3) to relearn.
Knowledge (TT4)
Because A moves on circle ( O ), the Figure
locus of points H (when A moves
on circle ( O )) belongs to circle ( O ' )
being the image of circle ( O ) through
vector B ' B.
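The conclusion of the TT1–TT4 chain can be checked numerically. In the sketch below (our coordinates, with O at the origin and radius 1), B' is the antipode of C, so the translation vector B'B equals B + C, and the computed orthocenter H should stay on the translated circle:

```python
import math

def orthocenter(A, B, C):
    """Intersect the altitude from A (perpendicular to BC) with the altitude
    from B (perpendicular to AC) by solving a 2x2 linear system."""
    ax, ay = A; bx, by = B; cx, cy = C
    d1x, d1y = cx - bx, cy - by          # direction of BC
    d2x, d2y = cx - ax, cy - ay          # direction of AC
    b1 = d1x * ax + d1y * ay             # (H - A) . BC = 0
    b2 = d2x * bx + d2y * by             # (H - B) . AC = 0
    det = d1x * d2y - d1y * d2x
    return ((b1 * d2y - b2 * d1y) / det, (d1x * b2 - d2x * b1) / det)

B = (math.cos(0.4), math.sin(0.4))
C = (math.cos(2.1), math.sin(2.1))
# B' is the antipode of C, so vector B'B = B - (-C) = B + C; hence O' = O + B + C.
center = (B[0] + C[0], B[1] + C[1])
for t in (0.0, 1.0, 3.0, 5.0):           # sample positions of A on the circle
    A = (math.cos(t), math.sin(t))
    H = orthocenter(A, B, C)
    assert abs(math.hypot(H[0] - center[0], H[1] - center[1]) - 1.0) < 1e-9
```

The check reflects the classical identity H = A + B + C when the circumcenter is at the origin, which is exactly the translation by B'B = B + C used in TT4.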
After an answer has been chosen, the e-book automatically follows the branch
corresponding to the user’s choice. Its interface is as follows:
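The branching just described can be represented as a small transition table. This is a minimal sketch with node and answer labels taken from the TT1–TT4 walkthrough above; the remedial "Hd" node is abbreviated and the branches omitted in the text are assumptions:

```python
# Each knowledge unit maps a chosen answer to the unit shown next:
# correct answers advance, wrong ones lead to a hint (Hd) or back to relearn.
BRANCHES = {
    "TT1": {"A": "TT2", "B": "Hd1", "C": "Hd1", "D": "Hd1"},
    "Hd1": {"B2": "TT2", "A2": "TT1", "C2": "TT1", "D2": "TT1"},
    "TT2": {"C4": "TT3", "A4": "TT2", "B4": "TT2", "D4": "TT2"},
    "TT3": {"B": "TT4", "A": "TT3", "C": "TT3", "D": "TT3"},
    "TT4": {},  # final unit: the locus has been found
}

def walk(answers, start="TT1"):
    """Replay a student's answer sequence and return the visited units."""
    path, current = [start], start
    for answer in answers:
        current = BRANCHES[current][answer]
        path.append(current)
    return path

# A student who errs once on TT1, recovers via the hint, then answers correctly:
# walk(["B", "B2", "C4", "B"]) -> ["TT1", "Hd1", "TT2", "TT3", "TT4"]
```

Recording the visited path is also what lets the e-book keep the "traces" of each student mentioned earlier for later assessment.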
12. Self-learn task: the e-book gives a self-learn request and a self-learn task to
each student. This function is similar to the e-check function.
[Pie chart: teachers’ views on the necessity of the interactive e-book. Categories:
Very necessary, Necessary, Not necessary yet, Not necessary; values shown:
59.83%, 31.62%, 8.55%, 0%.]
We also surveyed the target of using interactive e-book in lessons. The result is
as follows:
Chart 1.2: The target of using interactive e-book in teaching
[Bar chart, percentage of teachers: to illustrate the lesson, 59.83%; to stimulate
the interest of learning, 62.39%; to instruct students to discover knowledge,
55.56%; to practise the skills of applying the e-book, 29.91%; to strengthen the
practice, 25.64%.]
Thus, the majority of the teachers in the survey asserted that they had used
interactive e-books to stimulate students’ interest. However, many teachers (70
teachers, 59.83%) still used the interactive e-book mainly to illustrate their
lessons, whereas the most important purpose of using interactive e-books is to
help students discover knowledge; only 65 teachers (55.56%) applied it this way.
Moreover, very few teachers (35 teachers, 29.91%) were interested in applying
the interactive e-book to form knowledge, to practise skills, and to strengthen
students’ practice. Through observation, lesson attendance and interviews, we
found several reasons why teachers fail to use the interactive e-book: some
teachers do not know how to use it, and others said that they were not
sufficiently equipped with informatics skills and were therefore hesitant about
teaching with it.
We also delivered survey forms to 251 students on the application of the
interactive e-book in learning. The result is as follows:
Chart 1.3. The effect of the interactive e-book on students
[Bar chart, percentage of students: to increase the interest of learning, 63.35%; to
help students to study initiatively, 69.72%; to motivate students to create,
64.94%; to help students to self-learn, 59.36%; to help students to be confident,
53.79%.]
Chart 1.3 shows that students are interested in using interactive e-books
in learning. There are 159 students (63.35%) who feel interested in learning and,
especially, 175 students (69.72%) feel that this e-book helps them to study. The
interactive e-book improves students’ self-learning and self-study, and students
feel more confident when they learn with its help.
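As a quick arithmetic check on the survey figures above and in Chart 1.2: the student total n = 251 is stated in the text, while the teacher total n = 117 is our inference from 70 teachers corresponding to 59.83%.

```python
def pct(count, total):
    """Percentage rounded to two decimals, as reported in the charts."""
    return round(100 * count / total, 2)

assert pct(175, 251) == 69.72   # students: the e-book helps them study
assert pct(159, 251) == 63.35   # students: interested in learning
assert pct(70, 117) == 59.83    # teachers: illustrate lessons
assert pct(65, 117) == 55.56    # teachers: help students discover knowledge
assert pct(35, 117) == 29.91    # teachers: form knowledge / practise skills
```

All reported counts and percentages are mutually consistent under these totals.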
4. Conclusion
E-books have more strong points than other kinds of educational media:
they are easy to use and support learning everywhere, at any time. E-books can
work online or offline, which allows us to e-educate, to e-exchange, to transmit
images, sound, MP3 and MP4 files, and to interact, etc. In addition, e-books can
be designed to improve students’ activeness, the classification of students, and
individual learning, so that the learning process develops as well as possible.
Finally, e-books allow us to educate an unlimited number of students regardless
of location and learning time, so they are good for learning.
References
Cumaoglu, G., Sacici, E., & Torun, K. (2013). E-book versus printed materials: Preferences
of university students. Contemporary Educational Technology, 4(2), 121-135.
Czechowski, L. (2011). Problems with e-books: Suggestions for publishers. Journal of the
Medical Library Association, 99(3), 181–182.
Dao, T. L., & Nguyen, V. H. (2010). Learning mathematical concept and theorem
according to the divided strategy in e-learning environment, Vietnamese
Educational Journal, (244), 33-34, 51.
Encyclopedia. Retrieved from http://www.pcmag.com/encyclopedia/term/42214/e-
book
Hibbard, L. E. (2014), Ebooks: An Alternative to Paper Books for Online Students?,
International Journal of Learning, Teaching and Educational Research, 46-56.
Hodges, D., Preston, C., & Hamilton, M. J. (2010). Resolving the challenge of e-books.
Collection Management, 35(3-4), 196–200.
Jeong, H. (2010). A comparison of the influence of electronic books and paper books on
reading comprehension, eye fatigue, and perception. Retrieved from
file:///C:/Users/ADMIN/Downloads/Students_who_read_print_books_have_a_better_reading_comprehension_of_the_text_and_prefer_paper_books_over_e_books%20(1).pdf
Le, T. T. T (2013). Some activities of forming evaluated competence of learning results for
students at primary educated department, Vietnamese Educational Journal,
(323), 39-40.
Nguyen, V. H. (2007). Applying guided discovery teaching in the process of teaching
mathematics at upper secondary school. Journal of Vietnamese Educational
Science, 28-29.
Nguyen C. T., Nguyen K., Le K. B., & Vu V. T. (2004), Learning and teaching the way of
learning, Vietnamese Pedagogy University Publishing House.
Nguyen C. T., Nguyen K., Vu V. T, & Bui T. (1998), The process of teaching – self-learning,
Education Publishing House.
Owen V., Tiessen R., Weir L, DesRoches D., & Noel W. (2008), E-Books in Research
Libraries: Issues of Access and Use. Retrieved from http://carl-
abrc.ca/uploads/pdfs/copyright/carl_e-book_report-e.pdf.
Tran T., Dang X. C., Nguyen V. H., & Nguyen D. N. (2011), The application of informatics
in teaching mathematics at secondary school, Education Publishing House.
Tran, V., & Le, Q. H. (2006). The discovery of grade 10th geometry with the Geometer’s
Sketchpad. Education Publishing House.
Tran, V., & Le, Q. H. (2007). The discovery of grade 11th geometry with the Geometer’s
Sketchpad. Education Publishing House.
Walters W. H. (2013), E-books in Academic Libraries: Challenges for Acquisition and
Collection Management, Libraries and the Academy, Vol. 13, No. 2, pp. 187–
211.
Moet.gov.vn. Retrieved from http://www.moet.gov.vn/?page=9.6
Vst.vista.gov.vn. Re trieved from http://vst.vista.gov.vn/home/database/an_pham_die
n_tu/MagazineName.2004-06-09.1932/2004/2004_00004/MItem.2004-09-30.4740
/MArticle.2004-09-30.5116/marticle_view.
Wikipedia. Retrieved from http://en.wikipedia.org/wiki/E-book.
Introduction
The effect of the use of digital literacy technology on students has become a
topical area for research, which is not surprising, as education has to meet
students in their own arena.
Students' engagement
Engagement
Student academic engagement involves the willing commitment of students to the
course of their academic pursuits. Kuh (2009) stated broadly that student
engagement is reflected in the amount of time and effort students put into
achieving college outcomes. This involves participation in the achievement of
the required learning outcomes before students come into the classroom, in the
classroom and after they leave the classroom. Students' engagement can be
enhanced and influenced by instructors' pedagogical choices and practices (Lane
and Shelton 2001). Such good practices provide students with prompt feedback,
encourage active learning, communicate high expectations, encourage interaction
between students and faculty and cooperation among students, and respect
different talents and ways of learning (Chickering and Gamson 1987).
Performance
The academic performance of students is viewed as a measure of the students'
ability to show that they have achieved the learning outcomes of a particular
course. It can be measured in myriad ways, such as attendance monitoring,
observation, interviews, tracking students' online engagement with course
content and participation, and self-reporting. There are also other,
traditional means of checking performance based on the achievement of learning
outcomes: essays, oral and poster presentations, critical reviews, discussions,
examinations and tests.
In the past, traditional ways of teaching and learning were upheld in higher
education institutions. Teaching was viewed as a process of transmitting
content to the students, with the lecturers deciding on the topics, teaching
and assessment methods (Biggs and Tang 2010). However, in today's larger
classes with diversified students, many lecturers can encounter major
difficulties in sustaining academic standards. In relation to these
difficulties, Biggs (1999) states that they can be overcome when all components
of teaching and learning are constructively aligned. This is based on the
premise that the intended learning outcomes are clearly defined.
Students are thus encouraged to engage in learning activities that are relevant
to achieving these learning outcomes. Biggs and Tang (2015) explained that in
this context, teaching is not topic-based, as is traditional teaching, but
focuses on what students are intended to do after they have learned the
curriculum topics.
As a result, several efforts have been directed at developing activities that
engage students while enhancing the attainment of learning outcomes. Since the
modern classroom faces several challenges, with student engagement as one of
the key issues, there have been efforts to use students' digital devices to
foster engagement while enhancing learning.
To meet these needs, the setting of the lecture hall remains the convention in
most higher institutions, but these settings are being enhanced by the
integration of new tools, techniques and pedagogies (McAleese et al. 2013) with
which these 'native speakers' are conversant. This integration has necessitated
studies on the best use of innovative technologies in higher institutions. A
reflection on what Ihde (1993) calls the 'active relational pair' presents a
view of the ways in which mobile devices have become absorbed into human social
networking practices. Robinson and Hullinger (2008) also found that
asynchronous instructional technology encourages students to achieve
higher-order thinking skills such as evaluation, analysis, synthesis,
judgement, and application of knowledge.
Corroborating this view, Merchant (2012) observes that the mobile phone, with
Twitter, Facebook and YouTube, is heavily marketed by a range of providers
because most customers rely heavily on these applications in their daily social
lives. Also in line with meeting the needs of the digital age through
innovative ways of engaging students with various applications, games and tools
in the classroom, Wilson and McManimon (2014) corroborate McAleese et al.
(2013). However, they argue that best practice lies in utilizing the cloud as a
tool to bridge the learning gap by providing more useful instruments for the
enhancement of teaching styles.
Christensen (2013) suggests that technology, particularly Web 2.0, can help
increase the depth of learning by increasing interaction, critical thinking, and
collaboration.
Several devices and innovative tools and software have been introduced to
different classroom settings. The results have varied slightly, but what all
these results have in common is a positive disposition of most students toward
the innovative practices. Some have used clickers as a means of enhancing
students' classroom engagement. Park and Farag (2015) explored the use of
clickers in a legal studies course. Findings from their study suggest that both
lecturers and students are more engaged with the course material and in the
process of teaching and learning. They claim that clickers can be used to break
up the monotony of lectures, assess student understanding of material and
difficult concepts, and identify areas of student misunderstanding and
confusion. This can give the lecturer an idea of where to focus. They also
suggested that the use of clickers gives every student, even those who are
uncomfortable participating in class, an opportunity to provide input (Park and
Farag 2015).
Dervan (2014) examined the use of Socrative in a lecture setting. The findings
from his study were positive, suggesting that the use of Socrative as an online
Student Response System increased the in-class engagement of students.
Overview of Tophat
In the light of the argument by Parry (2011), the use of several tools and
techniques has been explored in educating students (Park and Farag 2015;
Ravishankar et al. 2014; Karakostas et al. 2014), but there is not much
recorded on the use of Tophat.
Research Purpose
Besides the issue of the appropriate type of teaching, learning and assessment,
there is also the issue of students' diverse learning styles. An attempt at
developing strategies that focus on the different students' learning styles
would result in student engagement and would likely lead to enhanced student
performance. Krause et al. (2008) argue that it is imperative to develop a
broader understanding of engagement as a process with several dimensions. Since
technology continues to be increasingly used by educational institutions (Becta
2009), there is a need to adopt appropriate pedagogical and educational tools
to support the enhancement of the quality of the student experience.
This need has led several researchers to carry out studies on pedagogy, digital
literacy technologies, student engagement and performance. However, there has
not been any study on the impact of the use of Tophat as a digital literacy
technology tool on students' engagement and feedback provision.
Consequently, this study will focus on the following questions:
RQ1: Does student class participation improve with the use of their
'disruptive' devices on the Tophat platform?
RQ2: What is the perceived impact of the use of Tophat on students' engagement
in the module?
RQ3: Does Tophat increase the amount of formative feedback received by
students?
Methods
Participants
Instruments
The first set of designed questions used different question types, including
multiple-choice, word, numeric, sorting, matching and 'click on target'
questions. The second set comprised four Likert-scale questions using a
four-point agreement scale from 1 (Strongly disagree) to 4 (Strongly agree),
plus qualitative word questions.
Analysis
Descriptive statistics in the form of percentages and means were used to
analyse the demographic data of the participants. The students' gender, age and
technology level were assessed. Four Likert-scale survey questions focused on
the impact of Tophat on students' engagement, understanding of module content,
feedback and coursework feed-forward. The students were also asked to evaluate
the use of Tophat and its impact on their engagement and feedback.
T-tests and ANOVA were carried out to assess the differences in students'
engagement and understanding across gender and age groups respectively. PASW
(SPSS) Statistics Version 17 was used to analyse the data exported from Tophat.
To evaluate the degree of students' engagement in the module through Tophat,
thematic analysis was used to determine the efficiency of Tophat in enhancing
students' engagement and feed-forward for the integrated coursework.
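The study ran these comparisons in SPSS. Purely as an illustration of the arithmetic behind the two tests (the data, group splits and function names below are invented for this sketch, not taken from the study), the independent-samples t statistic and the one-way ANOVA F statistic reduce to the following:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k, n = len(groups), len(all_scores)
    means = [mean(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Likert engagement scores, split by gender and by age band
male, female = [4, 3, 4, 2, 3], [3, 4, 3, 3]
age_bands = [[3, 4, 3], [4, 4, 3, 2], [3, 3, 4]]
t_gender = welch_t(male, female)
f_age = one_way_anova_f(*age_bands)
```

The resulting t and F statistics would then be compared against their reference distributions to obtain p-values, which SPSS reports directly.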
Results
The presentation of the results is mapped onto the structure of the earlier
"Analysis" section. The data showed that reliability (Cronbach's alpha, α) was
0.796, indicating acceptable reliability.
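Cronbach's alpha summarises how consistently a set of survey items measures one construct: α = k/(k−1) · (1 − Σ item variances / variance of total scores). As an illustrative sketch only (the data below are invented and this is not the authors' code), it can be computed as:

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for rows of respondents by columns of survey items."""
    k = len(responses[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*responses)]  # per-item variance
    total_var = variance([sum(row) for row in responses])   # variance of totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical four-point Likert responses to three items
sample = [
    [4, 3, 4],
    [3, 3, 3],
    [2, 3, 2],
    [4, 4, 4],
]
alpha = cronbach_alpha(sample)  # values around 0.7-0.8 are usually read as acceptable
```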
The age of the 103 participants ranged from 19 to 45 years, with 29% female and
71% male, distributed as follows: under 21 years old (6.8%), 22-25 years old
(70%), 26-30 years old (12.6%), 31-35 years old (5.8%), over 36 years old
(4.8%). The findings showed that the average score of participants'
self-appraisal of their technological competency was seven out of ten. The
means and standard deviations of the responses to the four Likert-scale
questions addressing the students' perception of Tophat are provided in Table
1. Students' perceptions of the effects of Tophat were distributed as follows:
enhanced their engagement (74.8%), enhanced understanding of the topics (67%),
enhanced the level of feedback (71.9%) and fed forward to completing their
coursework (61.1%).
The variable of age was divided into five groups: below 21; 22 to 25; 26 to 30;
31 to 35; and over 36. An ANOVA test was conducted to examine how age
influences students' perceptions of Tophat. The results of the ANOVA test are
shown in Table 3. They showed no significant difference in students' perception
of Tophat among the age groups.
The qualitative data collected were analysed thematically, with the initial
codes selected based on the data. These codes were the overriding concepts from
the data, presented as negative and positive views (Tables 4 and 5, column 3),
where applicable. Quotes are presented in Tables 4-7 (column 2).
In relation to the students' perception of the way Tophat affected their
engagement in the module, several views were aired. Some of these quotes are
presented in Table 4 below and were coded broadly as impacting: participation;
provision of game-based learning; a learning community; emphasis on learning
outcomes; and revision. A few of these comments presented in Table 4 are:
"Tophat helps to track our progress throughout the module; it has a user
friendly interface"
However, 10% of the entire sample believed that quality feedback should not be
delivered on the Tophat platform. A few of their quotes were:
"I can't really say something about it that depends on evaluator if he/she
thinks Tophat is of more value to this course but for me feedback should not be
based on Tophat."
Students were also asked about their most and least favourite features of
Tophat. Their quotations are listed in Table 6 and Table 7. According to the
students' responses, the user-friendly interface, game-based learning
(tournament) and instant feedback encourage students to engage in the learning
process. However, students also expressed concerns about the use of Tophat,
such as: its predominant reliance on a surface-learning approach, the low
quality of the feedback, and the instability of the technical infrastructure
(Wi-Fi, devices, and the fact that Tophat did not integrate with the commonly
used education platform, Moodle).
Table 4. Q: Would you say Tophat has impacted on your engagement in this
module? (Positive: 48; Negative: 12)

Participation
  Positive: "Easy for sign, it has the code for each lecture."; "it has helped
  because I cannot be in the class while I'm not in the class"
  Negative: "Nothing. I mean when I touch this is the time I must to"

Game-based learning
  Positive: "Tournament is an interesting feature and makes us more engaged to
  the Tophat"
Table 5. Q: How would you say Tophat has impacted on the level of feedback in
this module? (Positive: 52; Negative: 18)

Clarity
  Positive: "Making me understand the meaning of answer. But, I have a
  suggestion that I think we can finish Tophat at home"; "To see how much
  students learn from lecture"; "It help me to familiarize with research"
  Negative: "Not much. As sometimes I don't understand why the answer is
  wrong."

Feed forward
  Positive: "development of my understanding"; "I don't know how to say that
  but its helped me through questions"; "it increased my eagerness in learning"

Quality
  Positive: "It quite level of feedback."; "Increased my satisfaction some, but
  very little."; "high impact"; "Very good to get latest information";
  "Satisfyingly great"
  Negative: "Not much feedback received. I'm not sure that Tophat impact in
  feedback part"

Timeliness
  Positive: "It has good impact because feedbacks are always given on time";
  "We get feedbacks immediately after each session"; "The best thing with the
  Tophat that we get the feedback immediately"; "Get the feedback easy and
  quickly"; "It had been perfect, proving prompt n accurate feedback"

Approach
  Positive: "This way of feedback was a better way"; "Top Hat helps to track
  our progress throughout the module, it has a user friendly interface"
  Negative: "Not significantly"; "I can't really say something about it that
  depends on evaluator if he/she thinks Tophat is of more value to this course
  but for me feedback should not be based on Tophat."

Convenience
  Positive: "Easy way to deliver the feedback"; "It's a good way to give
  feedback, helpful"; "Provided a very complete feedback system"; "It has made
  my learning experience more fluid"

Accuracy
  Positive: "The feedback for the module has been very accurate and good"; "It
  has impact on the level of feedback because of its interactivity amongst
  students"; "It will help by getting different ideas from different students"
Table 6. Q: What did you like about the use of Tophat in enhancing your
learning experience?

Interactive approach: "The fact that it made accessing information about the
course online interactive."; "Looked forward to the engagement after every
class"; "engaged me with e learning"

Revision, feedback and feed-forward to the coursework: "Seems like a really
good learning tool"; "Choice question"; "Help me refresher course. Friendly
interface."; "It can help me to enhance my understand of course and if I have
any question that I can ask there"

Tournament / Competition: "More interactive than writing on paper and creates
competition amongst fellow students"; "Increased competitiveness"; "Competitive
with others at the end."

Convenient: "Was able to use device anywhere"; "It could be convincible
approach but during in dissertation time."; "Everyone can answer"; "Can use
with phone"

Easy to use and user friendly: "Easy to use"; "Different types of questions in
a good learning way"

Test the understanding, summarises the learning outcome: "It made me question
my understanding of the course because I'm unable to answer questions in time";
"It helps to understands the notes from the lectures"; "Improve understanding
of lecture"; "Giving me some choices, then ask which is correct."; "Asking
questions about topics thought in class served as a refresher"
Table 7. Q: What did you not like about the use of Tophat in enhancing your
learning experience?

Surface learning, in-depth learning: "No feedback on answered questions";
"Tophat is superfluous, why do not use our Moodle?"

Technological infrastructure (Wi-Fi, devices, classroom, integration with
Moodle): "The fact that the questions are done live in lectures. Sometimes I
did not have my tablet or laptop with me and it is annoying to complete
questions on a mobile device"; "Sometimes no signal"; "Sometimes it gets reset,
keeps showing the same question, what I experienced during the questionnaire";
"network problem"; "I didn't like its complex procedure, everybody can't afford
android phones and this software is working on assumption that everybody has
android phones and that also in working condition every time, please keep
learning procedure simple, so that we learn what is required not the technology
for its usage."

Time to response: "Not very useful. When I can use this program, the time,
question, answer are limited. On other time, is just a picture on my iPad and
nothing in it"; "Not enough time cope the word answers"; "Not enough time to
complete all the questions"; "Access through phone which is difficult to type"

Full explanation on how to use it and the purpose of using it: "A bit
complication to register"; "The fact that it wasn't introduced in the beginning
of the course, If It was introduced earlier, I feel it would have really
enhanced my learning experience"; "Never use it before"; "Open questions are
not clear enough"; "Not fully prepared and used when the class was done so to
motivation from the class was lacking"; "I should use it more and know an
effective way to get use it"; "It doesn't work well on my phone and the time
frame before the next question pops in is too short have experienced
difficulties to use it and don't know where to find feedback for my answers";
"A bit hard to understand at the beginning how it works"
Discussion
RQ1: Does student class participation improve with the use of their
'disruptive' devices on the Tophat platform?
The data demonstrated that the students' participation in the module with the
use of Tophat on their digital devices increased from week to week. The
user-friendly interface and active learning approach resulted in an increase in
student excitement and motivation to participate in the module activities.
Variation in the types of questions used served to encourage student
participation and held their attention for a longer period of time.
On the other hand, some challenges were identified. It took around three weeks
to encourage all the students to register on Tophat. The students also
struggled with the technological infrastructure: access to an internet
connection in the classroom (Wi-Fi), affordability of smart mobile devices,
proficiency in the use of these devices, and the desire to work on a familiar
existing platform (Moodle). When the students did get involved with the
activities, the data suggested that some of them (11%) were of the view that
the time given to respond to the questions was too short.
Both quantitative and qualitative results showed that the use of Tophat as part
of the module plays an important role in providing instant and clear feedback.
Easy access to the feedback could help students develop their understanding of
the knowledge and feed forward to complete their integrated coursework.
However, using Tophat as a means of providing prompt and summative feedback
results in a lack of detailed explanation and as such limits the depth of
students' learning.
Conclusion
This study aimed to determine whether the use of Tophat could enhance students'
engagement and the provision of feedback/feed-forward in a higher education
module. The findings showed that the integrated features of Tophat, such as the
tournament and quizzes, increase students' engagement across all age ranges and
genders. The prompt feedback received by the students enables their revision
process and their ability to apply the knowledge of concepts acquired
(feed-forward). These results are important for educational practitioners, as
universities emphasize a move towards the application of digital technology in
teaching, learning and assessment. The impact of Tophat was found to transform
disruptive digital devices into efficient tools for pedagogical interventions.
Furthermore, in relation to classroom application, the students' digital
devices were used to constructively impact the lecture sessions. Students were
found to engage with the learning through the activities set up on the Tophat
platform. The implication of the findings of this research is that the use of
digital devices as an innovative tool can enhance students' learning
experiences by providing instant, quality feedback.
References
Biggs, J. (1999) What the Student Does: teaching for enhanced learning. Higher Education
Research & Development, 18(1), 57-75.
Biggs, J., & Tang, C. (2010). Applying constructive alignment to outcomes-based
teaching and learning. Training Material for “Quality Teaching for Learning in
Higher Education” Workshop for Master Trainers, Ministry of Higher
Education, Kuala Lumpur.
Biggs, J., & Tang, C. (2015). Constructive alignment: An outcomes-based approach to
teaching anatomy. In L. K. Chan, & W. Pawlina (Eds.), Teaching anatomy: A
practical guide (pp. 31-38). Switzerland: Springer International Publishing.
Brown, E. A., Thomas, N. J., & Thomas, L. Y. (2014). Students' willingness to use
response and engagement technology in the classroom. Journal of Hospitality,
Leisure, Sport & Tourism Education, 15, 80-85.
Bruff, D. (2009). Teaching with Classroom Response Systems: Creating Active Learning
Environments. San Francisco, CA: Jossey-Bass.
Chickering, A.W., & Gamson, Z.F. (1987). Seven principles for good practice in
undergraduate education. AAHE Bulletin, 39(7), 3-6.
Dervan, P. (2014). Increasing in-class student engagement using Socrative (an
online Student Response System). AISHE-J: The All Ireland Journal of Teaching
and Learning in Higher Education, 6(2), 1977-1983.
Hoekstra, A. (2008). Vibrant student voices: Exploring effects of the use of
clickers in large college courses. Learning, Media and Technology, 33, 329-341.
Ihde, D. (1993). Postphenomenology: essays in the postmodern context. Evanston:
Northwestern University Press.
Jones, I., & Day, C. (2009). Harnessing technology: New modes of technology-enhanced
learning action research. Becta report Online available from:
<http://www.sero.co.uk/assets/capital/ht_new_modes_action_research.pdf> .
Kaleta, R. & Joosten, T. (2007). Student response systems: a University of Wisconsin
system study of clickers. EDUCAUSE Center for Applied Research: Research
Bulletin, 10 (1), 1-12.
Karakostas, A., Adam, D., Kioutsiouki, D., & Demetriadis, S. (2014, November). A pilot
study of QuizIt: The new android classroom response system. In Interactive
Mobile Communication Technologies and Learning, 2014 International Conference on
(pp. 147-151). IEEE.
Kärkkäinen, K., & Vincent-Lancrin, S. (2013). Sparking Innovation in STEM Education
with Technology and Collaboration.
Krause, K. L., & Coates, H. (2008). Students' engagement in first-year
university. Assessment & Evaluation in Higher Education, 33(5), 493-505.
Kuh, G. D. (2008). Advising for student success. In V. N. Gordon, W. R. Habley & T. J.
Grites (Eds.), Academic advising: A comprehensive handbook (2nd ed., pp. 68-
84). San Francisco, CA: Jossey-Bass.
Lane, D. R., & Shelton, M. W. (2001). The centrality of communication education in
classroom computer‐mediated‐communication: Toward a practical and
evaluative pedagogy. Communication Education, 50(3), 241-255.
Liburd, J. J., & Christensen, I.-M. F. (2013). Using web 2.0 in higher tourism
education. Journal of Hospitality, Leisure, Sport & Tourism Education, 12(1), 99-108.
McAleese, M., et al. (2013). Report to the European Commission on Improving the quality of
teaching and learning in Europe’s higher education institutions. Luxembourg:
Publication Office of the European Union.
Wilson, L., & McManimon, S. (2014, October). Bringing Back the ICEAGE: Interactive
Cloud-based Engagement Activities Globalizing Education!. In World Conference
on E-Learning in Corporate, Government, Healthcare, and Higher Education. 2014(1),
2065-2068.
Merchant, G. (2012). Mobile practices in everyday life: Popular digital technologies and
schooling revisited. British Journal of Educational Technology, 43(5), 770-782.
Park, J. D., & Farag, J. D. (2015). Transforming the Legal Studies Classroom: Clickers and
Engagement. Journal of Legal Studies Education, 32.
Parry, D. (2011). Mobile perspectives: On teaching mobile literacy. EDUCAUSE
Review, 46(2), 14-18.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the horizon, 9(5), 1-6.
Ravishankar, J., Epps, J., Ladouceur, F., Eaton, R., & Ambikairajah, E. (2014). Using
iPads/Tablets as a teaching tool: Strategies for an electrical engineering
Webster Kododo
Great Zimbabwe University,
Masvingo, Zimbabwe
Sparky Zanga
St. Josephs Tongoona High School,
Jerera, Masvingo, Zimbabwe
Background
Like most African states, Zimbabwe is a multilingual country. As such, the
choice of language for use in the promotion of literacy and basic education for
citizens has been debated for quite some time. This has been due to the view
that the learner should be educated for his/her own benefit and ultimately for
the benefit of society (McNab 1989). Therefore, in an attempt to strike a
balance between these two ends, Zimbabwe embarked on language policy innovation
in 2006 (Education Amendment Act 2006). After independence in 1980, Zimbabwe
had fashioned its first Education Act in 1987, meant to address the dominance
of the English language, perceived by some as negative; under that Act, only
Shona and Ndebele (the two main indigenous languages) were allocated an
inferior status, with the rest of the indigenous languages having no recognised
role in education (Chimhundu 1984). Chimhundu (in Roy-Campbell & Gwete 2000;
see also Royneland 1997) sadly notes that in the 1987 Education Act, the
Zimbabwean government failed to honour the proposal presented by the Minority
Languages Committee in 1985 that in areas where there were dominant, specified
indigenous languages, these should be taught in addition to Shona and Ndebele.
The proposal was in response to debates and experiments on the suitability of
the L1/L2 as the medium of instruction in schools. The debate had been
exacerbated by the UNESCO (1953) claim that the L1 is the best medium for
infant education.
Owing to heightened debate and accusatory complaints to the effect that poor
performance in schools was partly a result of the use of an L2 as the medium of
instruction, the Zimbabwean government embarked on language innovation that
culminated in the 2006 Education Amendment Act. In the amendments, a new
section on language use replaced the old Section 62, Chapter 25:4. Part of the
new section reads:
1. Subject to this section, all the three main languages of Zimbabwe, namely,
Shona, Ndebele and English shall be taught on an equal time basis in all schools
up to Form Two level,
2. In areas where indigenous languages other than those mentioned in Section (1)
are spoken, the Minister may authorise the teaching of such languages in schools
in addition to those specified in Section (1),
3. The Minister may authorize the teaching of foreign languages in schools,
4. Prior to Form One, any of the languages referred to in Sections (1) & (2) may be
used as the medium of instruction, depending upon which language is more
commonly spoken or better understood by the pupils.
One can note that pronouncing policy in education is one thing, but
implementing that policy is another. In spite of good intentions in policy
formulation, various factors may affect the implementation of a policy. One
such factor is language attitudes. As such, the researchers were inspired to
investigate the attitudes of language users towards the implementation of the
language policy amendments in the sampled school communities. Attitudes have
the capacity to affect policy implementation (Kadodo et al. 2012). Research
elsewhere shows that language choice for individuals tends to be influenced by
culture, politics and economics (Diamond 1993). Language attitudes raise the
question of what language users prefer when confronted by an array of competing
interests, ranging from the social to the economic. Choices are arrived at
after serious balancing acts for individual users, be they learners, parents,
teachers, school managers or education managers. The dilemma implied here makes
it necessary to investigate and ascertain language users' choices and the
reasons they give for such choices. The implementation of the language policy
as directed by the 2006 Education Amendment Act in Zimbabwe is subject to
users' attitudes.
Research Question
What are the language attitudes of pupils, parents, teachers and school heads at
Nandi, Mwenje and Nyahanga primary schools towards the use of Shangaan as
medium of instruction as provided for in the 2006 Language Amendment Act?
Conceptual framework
1. Language and education
Language is one of the most essential assets gifted to humans, indeed a miracle
that defines their existence (Aitchison 2008). Being a key to communication,
language can bring human beings together as much as it can set them apart. One
primary cause of division among communities is the differentiation of language
roles in education. Defining language, Finocchiaro (in Brown 1987:5) says that
it is "a system of arbitrary … symbols which permit … people in a given
culture, or other people who have learnt the system of that culture, to
communicate or to interact". Thus, apart from being both vocal and visual, "a
language has a dual character… [as] a means of communication and a carrier of
culture" (Roy-Campbell & Gwete 2000:7). Durkheim (in Blackledge & Hunt 1985)
defines education as an influence exercised by a generation on those not yet
ready for social life [or those who wield power over those without]. In other
words, communicating through language is one of the channels through which the
particular norms and values of a society can be transmitted from one generation
to the next. Undermining the language of a people, therefore, devalues their
dignity and leads, unfortunately, to a painfully slow death of such languages
(Open Space 2008; wa Thiong'o 1993). As McNab (1989:11) notes, "education is
perceived as the terrain for excellence where language related inequalities and
discrimination are manifested". In other words, education must navigate this
terrain of language use to ensure that all cultural groups are catered for.
Language is one of the significant factors in education that may lead to either
the educability or mis-educability of learners. It can have a telling effect on
the achievement of learners, defining the quality (or lack) of learning and
teaching in educational institutions. It is on this basis that some researchers
advocate mother-tongue education, seen as more effective for the mastery of
educational concepts (see Open Space 2008; Adegbija 1994; Bamgbose 1991;
Mupande 2006; Brock-Utne 1993).
This research argues that every language is (or can be) an effective and efficient
tool for its users in education so long those users have firm control of that
language. Firm control of language implies speaker capacity in both the
linguistic and social nuances of the given language. The negation and relegation
of a language from an education system in a country is tantamount to excluding
the speaker community from national and developmental activities. For this
reason, the United Nations propagated the universal declaration of linguistic
rights (Open Space 2008) as paragon for the existence of even the so called
minority communities. Language is a kaleidoscope that unlocks various
meanings of existence for its users. Where learners, participants or community
members are in firm control of the means of engagement (language as one key),
they are able to display their abilities and contributions. Undermining a people's
language equally undermines their confidence, ability and contributions. The use
of an unfamiliar language leads to poor results (Prah, 2000). This research was
guided by these beliefs regarding the intricate relationship between language
and education, thus seeking to find out to what extent the 2006 Language
Amendment Act was being implemented in selected schools. One can note that
language users' attitudes are key to the implementation of language innovation
(Kadodo et al 2012). This research sought to examine language attitudes of
pupils, parents, teachers and school heads at Nandi, Mwenje and Nyahanga
primary schools towards the use of Shangaan as medium of instruction as stated
in the 2006 Language Amendment Act.
2. Attitudes to policy
An attitude is “an organized predisposition to think, feel, perceive, and behave
toward referent or cognitive object … an enduring structure of beliefs that
predisposes the individual to behave selectively toward attitude referent”
(Taylor et al., 1997: 130; Ajzen, 1988:4; Kerlinger, 1986: 453; Kosslyn &
Rosenberg, 2006: 738). In fact, attitudes are the “very general evaluations that
people hold of themselves, other people, objects and issues” (Tesser, 1995: 196).
Beyond the basic functions of language, the roles that we subtly assign to
various languages at our disposal reflect what attitudes we hold of each of these
languages (Adler & Rodman, 2001). The undertone of this process is the gradual
transfer of the said attitudes (positive or negative) to unsuspecting
language users. This is done through subtle insinuations that learning through
language A leads to employment and a better life whilst learning through
language B does not. This method is borrowed from
most colonised worlds, where it was successfully used to shape oppressed
people's preferences. What we think about a language (cognitive attitude), what
we feel about it (affective attitude) and what we actually do with that language
(behavioural attitude) (Taylor et al 1997; Child 1993) are clear attitudinal
demonstrations of the values we attach to each of the languages that are at our
disposal. Consequently, the said attitudes will shape how we use language in
the various activities of our lives. In the same manner, this also has visible
influence regarding whether language policy innovation will or will not be
successfully implemented.
These researchers note that issues of language use are always bounded in power
struggles as demonstrated during the colonial processes in various parts of the
world. In the then Rhodesia (now Zimbabwe), there was incessant tutelage that
English was 'the language' whilst the local ones were of no consequence. As
Diamonds (1993) at http://www.pre.org (accessed 20/08/2011) notes, “when a
people have been told for many years that their cultures [so their languages too –
our emphasis] are worthless, they come to believe it”. Consequently, this created
positive attitudes (in most colonised people) towards the colonisers' languages
whilst conversely creating negative ones for indigenous languages, which had been
disempowered by the lack of economic rewards attached to them (Kadodo et al., 2012).
Ironically, since independence there has been unequal empowerment of
indigenous languages in Zimbabwe. The 1987 Education Act tended to raise
some languages (notably Shona and Ndebele) to national languages to the
exclusion of the rest of the local languages. Regrettably, the same
power struggles noted above regarding colonial times came to haunt
language use in the independent state; linguistic imperialism thus persisted.
One key factor for a language policy to work is the job market, which should act as
an incentive for the said policy to be favourable to the user community (Open
Space, 2008). Unless the products of that policy promise to be employable,
the policy may not be supported by the user community.
Whatever language is economically incentivised will tend to attract positive
attitudes from the user community. In close proximity is the role of the media (both
print and electronic), which can play a supportive role or become the devil's
advocate (Ndinde & Kadodo, 2014). The media are key in shaping people's attitudes
by how they campaign for or against a view or language. For instance,
the nature of programmes and the language employed to air them could have an
influence on people's language preferences. The points noted above are, in turn,
dependent on the commitment of the leadership of a country (Roy-Campbell &
Gwete, 2000). A language policy gazetted by a leadership lacking political astuteness will
hardly succeed. The policy itself must be outlined in succinct language leaving
no room for speculation or debate as to the meaning of statements. In other
words, tentative and speculative language must be avoided so that each
instruction is understood for what it is. An astute leadership will first ensure
that relevant teaching staff and appropriate teaching materials are in place prior
to gazetting an education policy. There is no point in pronouncing an education policy
when appropriate preparations have not been made, because that is tantamount
to pronouncing its failure before implementation starts. Banda's 1968 Chewa-
only medium in Malawi and Ratsiraka's 1972 Malagachisation in Madagascar are
cases in point. In short, it is important to look at both facilitating and debilitating
factors within the operating environment so as to take corrective measures for
policy implementation to succeed.
Methodology
The research was guided by the descriptive survey design in an effort to
understand the attitudes of pupils, teachers, headmasters and parents at the
three selected schools towards implementation of the 2006 language innovation.
The descriptive survey was seen as suitable for measuring users' attitudes
(Chikoko & Mhloyi 1985) regarding the implementation of the 2006 language
innovation. This research employed mixed methods (Maree 2010) where both
qualitative and quantitative data were collected. Use of the mixed methods
helped in triangulation to increase reliability. Questionnaire, interview and
observation data collection methods were employed. These methods
were chosen for their versatility in capturing users' language preferences.
The questionnaire is a self-report instrument that guarantees anonymity of
research participants (Best & Khan 1993) thus increasing chances for participants
to reveal their deep-seated feelings regarding attitude referent. On the other
hand, face-to-face interviews increased rapport with participants allowing
elicitation (Brenner, 2006) of information. These researchers also felt they needed
to observe a couple of lessons at the research sites to ascertain the visible reactions
of learners when particular languages (or combinations of languages) were used.
This was a useful method for learning learner behaviours (Sapsford & Jupp,
2006) that betrayed learners' language preferences.
Selection of participants
Purposive and random sampling techniques were used in sample selection.
Whereas the three school heads (of the three schools) were purposively selected,
ninety pupils, thirty teachers (from the three schools) and ninety parents (of
learners at the three schools) were randomly selected using the lottery method.
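For readers unfamiliar with the lottery method, it is equivalent to simple random sampling without replacement, which can be sketched as follows. The pool sizes and names here are hypothetical placeholders, not the study's actual rosters:

```python
import random

def lottery_sample(population, k, seed=None):
    """Draw k members uniformly at random without replacement,
    mimicking the lottery (hat-draw) method."""
    rng = random.Random(seed)
    return rng.sample(population, k)

# Hypothetical pools standing in for the study's sampling frames:
pupils = [f"pupil_{i}" for i in range(1, 301)]      # pupils across the three schools
teachers = [f"teacher_{i}" for i in range(1, 46)]
parents = [f"parent_{i}" for i in range(1, 401)]

selected_pupils = lottery_sample(pupils, 90, seed=1)      # ninety pupils
selected_teachers = lottery_sample(teachers, 30, seed=2)  # thirty teachers
selected_parents = lottery_sample(parents, 90, seed=3)    # ninety parents
```

Seeding is shown only to make the draw reproducible; a physical lottery has no seed.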
The 'global' picture of results from the data presented in Table 1 above and
Table 2 below reveals the respondents' attitudes to the three languages at
their disposal. In Table 1, 100% of participating school heads, 60% of teachers,
70% of learners and 58% of parents preferred that English continue to be the
medium of instruction.
There were other issues cited by teachers during interviews, where
they felt that Shangaan as an education medium did not have the terminology
range to adequately convey the scientific and commercial 'worlds'. In
conjunction with this perceived problem is the non-availability of reading
materials even for Shangaan itself as a subject, let alone reading materials
written in Shangaan for content areas. These researchers, however, note that
languages can grow depending on the range of uses we assign them. Likewise,
Shangaan can develop to cover the range that its users desire it to cover, so
long as we avoid the pitfalls of purism, where we want languages to be what they
were centuries ago. Regarding materials and manpower development, these are
issues that can be resolved, say, in phases, provided the policy-makers,
implementers and user communities all agree on the necessity for such a
move. Simply put, policy development should not be conceived in a top-down
form which, more often than not, leads to tissue rejection (Obanya, 1987). There is
need for extensive positive consultations.
As implied above, respondents also noted that none of the teachers on the
ground at the time of data gathering had ever been trained to teach in Shangaan. This
was seen to be a handicap to the implementation of the 2006 policy. This,
coupled with the reality of lack of materials on the ground, questions the
sincerity of policy-makers in gazetting the 2006 Education Amendment Act. The
ground surely was not 'flattened' for indigenous languages to be used as media
of instruction in schools. The scales have always been, and sadly remain, in favour of
the English language. There has not been any attempt to incentivise the
indigenous languages as a measure for users to undertake counter-attitudinal
processes and develop positive attitudes towards them.
Table 2 below gives the results of observations made by these researchers when they
attended some of the lessons in the research schools.

Lesson | Subject | Language(s) used | Participation | Learner disposition | Responsiveness by language
3 | Mathematics | More Shona & Shangaan than English | High | Enthusiastic & confident | Very high for Shangaan, high for Shona & lower for English
4 | Social Studies | English but with more Shona & Shangaan | High | Enthusiastic & excited | Very high in either Shona or Shangaan than in English
5 | Mathematics | English & a little bit of Shona | Low | Lukewarm mostly | Low
6 | Environmental Science | English & a little bit of Shangaan | Moderate | Lukewarm | Moderate
How are these learners engaging issues in this handicapped and ominously quiet
environment? Based on the observations above, we conclude that the question of
which language of instruction is used in education matters.
Looking at the sets of findings in this research, one notices some internal
contradictions. Significant to note is that learners seem to enjoy, and possibly
learn more meaningfully, when they are in control of the language of education.
This should, supposedly, see them opt for their L1 as medium of instruction.
However, when learners consciously make a choice, they still opt for a language
they may not be proficient in. The crux of the matter is that their conscious
choice of language is driven by what they yearn for in their future lives; it is more of a
wish-driven choice. This is what holds sway over their language attitudes.
Unfortunately, a number of education systems today have become so
mechanistic and examination-driven that teachers can just coach learners in their
quiet environments to pass the national examinations in spite of their low
proficiency. Is this not possibly the reason why the industry sector is perennially
complaining of raw graduates from some of our institutions?
Conclusions
Based on the findings of this research, one concludes that the choice of language of
education is not always a rational process but is more often emotional.
Notwithstanding that learners may not be proficient in a language, they may still
opt for that language as medium of instruction owing to their perceived life
opportunities. For that reason, before legislating any language policy, there is a
need to ascertain users' language attitudes. If these are not in tandem with the
proposed language, the intended language should rather be incentivised so that users
and implementers come to prefer it. This research also concludes that
contingent planning should precede gazetting of any education policy. This is
possible in situations where there are open and well-intended consultations. Because
this was not meaningfully done as a precursor to the gazetting of the 2006
Education Amendment Act in Zimbabwe, the innovation has not been embraced
by the user community and has therefore not succeeded.
Declaration
The researchers wish to declare that there was no research grant attached to this
research by any organization.
References
Adegbija, E. (1994). Language Attitudes in Sub-Saharan Africa: A Sociolinguistic
Overview. Clevedon: Multilingual Matters.
Adler, R.B. & Rodman, C. (2001). Communication. London: Routledge.
Aitchison, J. (2008). The Articulate Mammal: An Introduction to Psycholinguistics
(5th ed.). London: Routledge.
Ajzen, I. (1988). Attitudes, Personality and Behaviour. Milton Keynes: OUP.
Bamgbose, A. (1991). Language and the Nation: The Language Question in Sub-Saharan
Africa. Edinburgh: EUP.
Best, J.W. & Khan, J.V. (1993). Research in Education (7th ed.). Boston: Allyn & Bacon.
Blackledge, D. & Hunt, B. (1985). Sociological Interpretations of Education. London:
Croom Helm.
Brenner, M.E. (2006). “Interviewing in Educational Research” in American Educational
Introduction
The curricula of most degree programmes in natural sciences and engineering
predominantly involve classroom lectures, practicals for solving exercises,
seminars with student presentations and, rarely, excursions or actual fieldwork.
Classroom lectures, practicals and seminars account for by far the largest part of
higher education teaching. Classroom lecturing is vital for setting a solid base for
primary skills such as understanding of theoretical concepts (e.g. mathematics),
first principles (e.g. basic physics), taxonomy and laboratory working
techniques. However, classroom lectures are mostly passive events for students
with one-way communication, although there are many concepts to stimulate
the audience (e.g. Laws, 1991; Powell, 2003; Reiber, 2006). There are also many
applied courses which concentrate on specific themes, e.g. monitoring of
environmental parameters, assessment of hazardous natural processes,
construction of buildings, instruments and machines etc. Teaching in some
During lectures the students should learn and understand the necessary
theoretical and conceptual basis of a subject. Yet, when examples are displayed
in the classroom, students experience only a limited aspect of a study or
situation which is presented in a slide or movie. Often a detailed and
comprehensive description of a situation or object and its environmental
circumstances and parameters is missing (even the lecturer may not know under
which circumstances a photo or film was taken). The true scale of objects is
often not clear to students if they do not experience the real objects themselves.
In addition, visual material for class room teaching is often biased by well-
chosen factors or circumstances: optimum light conditions to illuminate an
object, undisturbed environmental conditions etc.
Students should participate actively during practicals where they present their
solutions to questions and exercises distributed by the lecturer and where they
apply the principles and concepts which were taught during the lectures. Such
solutions are mainly developed outside the classroom, sometimes within a small
group of students. In the majority of practicals, the students are given real data
e.g. measurement curves such as seismograms, rock samples, electronic modules
etc. to work on. Such material is often carefully selected to avoid complications
due to noisy data, unclear interpretational options etc. The problem of scale
arises also in many subjects, e.g. a complete bridge cannot be examined in the
lab during an engineering course.
Seminars are conducted in such a way that students learn to present, discuss and
think about case studies from articles in the literature (which they typically read
in the late evening just before the seminar is held). Again, links to a realistic
situation are given only in part, and experience of the real world is missing.
During many excursions the students are carried from one point of interest to
the next one without any active participation. Sometimes even the connection
between the basic theoretical principles and the objects of an excursion is poorly
explained and, in this way, the link between them remains obscure. Students
may also be confused if there are different lecturers in the classroom and during
the excursion who use different ways and concepts to describe the same object
or process. Only rarely is actual fieldwork done by students (e.g. mapping,
collecting, measuring in the field or assembling an instrument). However,
well prepared lecturing outside the classroom trains students for their
professional career and widens their perspective (Hursh & Borzak, 1979) due to
the inherent interdisciplinary nature of outside teaching (Claiborne et al., 2014).
lectures to include practicals and seminars where the subject is actually based.
This concept can be compared to the idea of learning a language in the country
where it is actually spoken. In that case, the language will not only be adopted
during the language classes, but also during the rest of the day. In a similar
way, during an in situ class, students are confronted with the matter all day
long. This stimulates the students to think about the subject in more detail.
Such an approach leads to questions (and answers) which would not have been
asked (and answered) during a typical classroom lecture. Thus, students get a
much deeper and more comprehensive understanding of the subject. An
example can be seen in Figure 1, where students give a seminar in situ on the
flanks of Stromboli volcano, Italy. The background of our examples is described
in Box 1.
Table 1: Overview of the concept of in situ lecturing with its main elements A - F.
Our pedagogical concept for integrated in situ teaching includes the following
elements or steps (Table 1):
A) preparatory classroom lectures on basic concepts and theoretical work,
B) preparatory student work which includes the repetition of the basic concepts,
reading of relevant literature, preparation of classroom presentations, in
situ seminars and lectures, as well as getting familiar with the specifics
(e.g., geology, geophysics, eruption style, vegetation, accessibility) of the
location of the field study,
C) in situ lectures in order to reinforce the theoretical background and to transfer
the basic principles to real situations,
D) in situ practicals to actively apply the basic theory learnt during the lectures,
within the actual situation,
E) in situ seminars with presentations by the students to explain observations in
the field, review the background, explain the local history or add details
as well as discuss uncertainties and other problems related to the present
in situ state of the subject, and
F) post-trip documentation.
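If it helps to operationalise the concept, the cycle of elements A to F above can be represented as a simple ordered checklist, for example to track which phases of a course run have been completed. This is purely an illustrative sketch; the names below are hypothetical and not part of the original concept:

```python
# Hypothetical representation of the in situ teaching cycle (elements A to F),
# useful e.g. for tracking which steps of a course run are still outstanding.

IN_SITU_CYCLE = [
    ("A", "preparatory classroom lectures"),
    ("B", "preparatory student work"),
    ("C", "in situ lectures"),
    ("D", "in situ practicals"),
    ("E", "in situ seminars"),
    ("F", "post-trip documentation"),
]

def remaining_steps(completed):
    """Return the labels of steps not yet completed, in cycle order."""
    done = set(completed)
    return [label for label, _ in IN_SITU_CYCLE if label not in done]

print(remaining_steps({"A", "B", "C"}))  # ['D', 'E', 'F']
```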
Figure 4: Local field engineer (left) during an in situ lecture (step C) in a quarry with
volcanic rocks, Vogelsberg region, Germany.
earlier during the preparatory classroom lecturing can be repeated and directly
linked with the real world. Of course the main point should be the direct
inclusion of the local phenomena (e.g. a geoscientific site, major machinery or
instrument, a building…). A classic example is the explanation of rock types
and the genesis of these rocks and their usage in a quarry (Figure 4). One may
also visit a seismic reflection field survey and explain the implementation of
recording arrays, the data acquisition and preliminary data interpretation. Such
a lecture can be given also by an external expert, e.g. a field engineer of a
company, a local geology expert etc. Generally, parts of the in situ lecturing can
be done by local experts who have specialised local expertise (Figure 5), specific
experience with a production machine or research infrastructure (Figure 6) etc.
It is useful to explain to such external lecturers in advance what the aim of the
lecture is, what the students' state of knowledge is and what the students
could do as possible practicals.
Figure 5: In situ lecture (step C) at about 700 m depth inside a potash mine.
A group of geophysics students is instructed by a local geologist.
d: Historic Seismicity and its Use for Seismic Hazard Analysis: historic seismicity
deals with earthquakes and their impact on society and nature in the past
(mainly the time before instrumental seismicity started at around 1900). We
explained the relevance of historic seismicity for estimating the hazard and
risk posed by future earthquakes in the preparatory course and then visited a
museum with historic seismic instruments as well as a town near our
university which suffered destructive earthquakes in the past.
Discussion
We introduce an extensive concept in which a part of the traditional classroom
teaching is transferred to the actual places where the subject matter can be
studied directly and under realistic conditions. Compared to other teaching
approaches such as excursions or fieldwork, we propose to lecture and practise
comprehensively in situ (Table 1). Whereas students often remain passive at the
visited sites during an excursion, we motivate and require them to actively
contribute to the lecturing. This active contribution goes beyond typical
fieldwork, as the students are involved e.g. in preparing the lecture notes or
giving presentations. Basic concepts and skills are taught and learnt ahead of
the in situ part during preparatory classroom activities. During the in situ part,
lectures serve to repeat these basics and deepen the students’ knowledge. In
addition new learning matter is introduced by including a direct link to local
specialities, some of which can never be presented in a realistic manner inside a
classroom (Figures 4-7). Active application of the freshly trained skills affords
students an even deeper insight into the subject during the in situ practicals
and in situ seminars. This comprehensive learning cycle helps students acquire
a wide range of competences, even exceeding the main subject.
When the complete preparatory and in situ lecturing is prepared and executed
by the same lecturer or lecturer team, the subject matter can be presented in a
coherent way to the students. This avoids confusing students with different
descriptions or parameter abbreviations of the same object, as can happen
when different lecturers use their own teaching material.
The in situ lecturing is suited for bachelor’s as well as master’s level teaching; we
have also applied it to mixed groups (from first-year bachelor’s to second-year
master’s students). Of course, exercises and themes for seminar presentations
should be adjusted individually to the different students according to their
background and experience.
The in situ lecturing requires significant input and active contribution from the
students: both the preparatory and in situ phases include practicals which can
be quite time-consuming. Especially the preparation of the in situ seminar
presentation, including the generation of hand-out material for fellow students,
may take some time. An example of a work plan can be outlined as follows: we
plan about 40-60 working hours for the active preparatory phase and 10-20
hours for the preparatory lecturing. Depending on the subject and site, the in
situ part can last another 30 hours (3 days) to 120 hours (12 days). For the final
report about 20-40 hours may be required. This corresponds to an overall work
load of 200-300 hours or 7-10 credit points of the European Credit Transfer
System (ECTS, with 1 ECTS credit point equivalent to 30 hours of student work).
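The ECTS conversion in the last sentence can be checked with a line of arithmetic (assuming, as stated, 30 hours of student work per credit point):

```python
# Quick check of the ECTS conversion stated above:
# 1 ECTS credit point corresponds to 30 hours of student work.

HOURS_PER_ECTS = 30

def hours_to_ects(hours: float) -> float:
    """Convert a student workload in hours to ECTS credit points."""
    return hours / HOURS_PER_ECTS

low, high = 200, 300           # overall workload range given in the text
print(hours_to_ects(low))      # ≈ 6.7 credits, i.e. roughly 7 when rounded
print(hours_to_ects(high))     # 10.0 credits
```

This reproduces the stated range of 7-10 credit points for a 200-300 hour workload.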
An important point is to make clear to the students what is expected from them.
Especially presentation material (posters, handouts, computer presentations …)
prepared for the in situ part must be produced thoroughly by the students, because
missing background material may not be available during travel. If the students
prepare material for in situ lecturing, flaws must be avoided, as it may also be
difficult to make revisions during travel.
The students can be included in the organisation of the in situ part in order to
learn the organisational side of their subject. For instance students may organise
the travel to a starting point of the in situ part. We told our students that the in
situ part of a volcanism-related lecture series would start at the port of Naples
(Italy) at a specific pier, day and time. It was their own responsibility to get to
this place, about 1000 km away from their usual classroom, on time.
Most students are highly motivated during our in situ lecturing. The students’
seminar presentations, hand-out material and final reports were predominantly
of excellent quality. The average grades assigned to the in situ
courses were better than those for other classroom lectures. We interpret this
positive outcome to be the result of high motivation due to our concept of in situ
lecturing.
Likewise our in situ courses were evaluated very positively by the students
within the regular anonymous evaluation procedure which is conducted at our
university (KIT) (Craanen, 2010). The overall grade of the students’ evaluation
was always very high (between 1 and 1.5 with 1 as the best grade on a scale
between 1 as excellent and 5 as deficient). Furthermore, the students provided
helpful comments to improve this kind of lecturing method. Students
commended, for example, “that they liked to talk to local experts”, “that they
could go to sites which are not publicly accessible”, “that the subject was
demonstrated to them in accordance with practical needs” or “that they could
clearly see the relationship between theory and real measurement
instruments”. We are encouraged by this positive feedback to further conduct
and develop in situ lecturing as well as recommend this concept to other
lecturers.
Acknowledgements
Our development of the in situ courses benefitted greatly from the responses of the
students, who gave valuable comments in their official evaluation sheets for the
courses. In two cases our geophysics lectures were complemented with a
valuable geology part by Geologierat Bernd Schmidt (Mainz) during the
preparatory and the in situ phases. Such interdisciplinary input improved the
quality of the teaching. Prof. Norman Harthill (Karlsruhe) kindly helped
improve the manuscript. The in situ lecturing in Southern Italy was financially
supported by a teaching grant (Fakultätslehrpreis) to E. G.
References
Butler, R. (2008). Teaching Geoscience through Fieldwork. GEES Subject Centre,
Learning and Teaching Guide, http://www-new1.heacademy.ac.uk/
assets/Documents/subjects/gees/GEES_guides_rb_teaching_geoscience.pdf
(accessed 1 March 2015).
Claiborne, L., Morrell, J., Bandy, J., & Bruff, D. (2014). Teaching outside the classroom,
http://cft.vanderbilt.edu/guides-sub-pages/teaching-outside-the-classroom
(accessed 1 March 2015).
Craanen, M. (2010). Fakultätsübergreifendes Monitoring der Veranstaltungsqualität am
Karlsruher Institut für Technologie (KIT) [Title in English: Inter-faculty
monitoring of teaching quality at KIT]. Qualität in der Wissenschaft (QiW) -
Zeitschrift für Qualitätsentwicklung in Forschung, Studium und Administration, 4
(1/2010), 2-11.
Handelsman, J., Ebert-May, D., Beichner, R., Bruns, P., Chang, A., DeHaan, R., Gentile, J.,
Lauffer, S., Stewart, J., Tilghman, S. M., & Wood, W. B. (2004). Scientific
Teaching. Science, 304 (5670), 521-522.
Hursh, B. A., & Borzak, L. (1979). Toward cognitive development through field studies.
Journal of Higher Education, 50, 63-78.
Kastens, K. A., Manduca, C. A., Cervato, C., Frodeman, R., Goodwin, C., Liben, L. S.,
Mogk, D. W., Spangler, T. C., Stillings, N. A., & Titus, S. (2009). How
geoscientists think and learn. Eos, Transactions American Geophysical Union, 90
(31), 265-266.
Laws, P. (1991). Calculus-based physics without lectures. Physics Today, 44 (12), 24-31.
Lonergan, N., & Andresen, L. W. (1988). Field-based Education: Some Theoretical
Considerations. Higher Education Research and Development, 7, 63-77.
Powell, K. (2003). Science education: Spare me the lecture. Nature, 425, 234-236.
Reiber, K. (2006). Wissen – Können – Handeln. Ein Kompetenzmodell für
lernorientiertes Lehren [Title in English: Knowledge – Ability – Action: A
conceptual model for learning-oriented teaching]. Tübinger Beiträge zur
Hochschuldidaktik, 2/1.
Thompson, D. B. (1982). On discerning the purposes of geological fieldwork. Geology
Teaching, 7, 59-65.
More than a decade ago, UTS, like many other Australian universities,
introduced diagnostic testing of assumed knowledge (at UTS called the
Readiness Survey). The diagnostic tests worked (and still work) in conjunction with
the pre-teaching subject, at UTS, Foundation Mathematics. Students failing the
Readiness Survey take Foundation Mathematics prior to their first core
mathematics subject to ensure they have the assumed knowledge of their
program.
1 Using the classification system of Barrington and Brown (2005), for New South Wales
high school mathematics subjects, Mathematics Extensions 1 and 2 are classified as
“advanced”. Mathematics (2 unit) is “intermediate” and General Mathematics is
“elementary”.
2 See Prince (2004) for a brief description of active learning and problem-based learning.
Mastery Learning was trialled at UTS in the first (Autumn) semester of 2013.
The results were promising enough to suggest a further trial of two subjects in
Autumn semester of 2014. Analysis of the results of these two subjects
confirmed Mastery Learning as a solution to the problems facing first-year
STEM students in their mathematics subjects, and Mastery Learning was
implemented in another two subjects in second (Spring) semester of 2014. This
paper examines the success, or otherwise, of this Mastery Learning initiative.
2.2 Implementation
After subdividing the subject curriculum into learning units and further into a
logical sequence of smaller objectives, learning materials, instructional strategies
and activities were identified, sequenced and executed over the teaching period.
Criterion-referenced tests were administered. These were supervised, online,
summative tests of just under an hour's duration, undertaken approximately
two weeks after the completion of a unit. They assessed the 'fundamental'
knowledge and skills objectives of the unit, that is, the knowledge and skills that
provide the basis for further development. In the UTS implementation,
'mastery' was set at 80% of the marks available on the mastery tests.
exhibiting mastery, could choose to undertake the second-chance test. The best
mark of the attempts was used in determining the final mark for the subject.
While there was slight variation between subjects as to the finer details of the
implementation, four mastery tests were scheduled over the course of the
semester. Each test was worth 16% of the final mark for the subject. The final
exam was then worth 36% of the final mark. Passing at mastery level in all four
mastery tests ensured that students achieved a Pass in the subject. Perfect scores
ensured an upper-range Pass. Students achieving mastery in all
tests could sit the final examination to improve their result beyond the Pass
grade. Students were required to earn 13/16 on at least one of their attempts of
each mastery test. Students not achieving mastery on the first attempt were
given a choice of remediation activities to participate in. This remediation was
conducted outside scheduled class time. Where mastery was not demonstrated
on the second attempt, more structured activities were made available
(primarily small group and peer-assisted learning).
The third attempt was conducted at the end of the semester, allowing students
additional time to acquire the knowledge and skills they needed. (The third
attempt was only available to students who had not already demonstrated
mastery.) To facilitate a successful third attempt, the last third of the semester
consisted of enrichment activities which were examined in the final exam. The
emphases in these enrichment activities were the application of modelling and
problem-solving using the tools developed in the earlier units. These enrichment
activities also provided students who were yet to demonstrate mastery with the
opportunity for reinforcement and further exposure to content in context.
Students requiring a third attempt (approximately 5% of a class, on average)
were not eligible to sit the final exam. This demonstrates the primary trade-off
made to implement Mastery Learning in semesters of fixed length, and is similar
to some of the implementations reported by Twigg (2013).
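As an illustrative sketch only (not the authors' actual marking code), the weighting just described, four mastery tests worth 16% each, a final exam worth 36%, and mastery at 13 of the 16 marks with the best attempt counting, could be expressed as:

```python
def best_attempts(attempts_per_test):
    """Best mark (out of 16) across the attempts at each mastery test."""
    return [max(marks) for marks in attempts_per_test]

def achieved_mastery(attempts_per_test, threshold=13):
    """Mastery requires at least threshold/16 on some attempt of every test."""
    return all(best >= threshold for best in best_attempts(attempts_per_test))

def final_mark(attempts_per_test, exam_pct):
    """Four mastery tests contribute 16% each; the final exam contributes 36%."""
    test_component = sum(best_attempts(attempts_per_test))  # marks already /16
    exam_component = exam_pct / 100 * 36
    return test_component + exam_component
```

Note that perfect scores on the four tests alone give 4 × 16 = 64, the top of the Pass band, consistent with the statement that perfect mastery scores ensured an upper-range Pass.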
UTS is not the only Australian university to implement Mastery Learning; the
University of Canberra implemented Mastery Learning in 2014. Twigg (2013)
reports on a number of US tertiary institutions that are also using Mastery
Learning as part of a learning design called the Emporium Model.
3. Data
Data was collected concerning program, tertiary entrance rank (ATAR),
Mathematics subject(s) studied in high school and mark(s) in the Higher School
Certificate (HSC) or other background, as well as final mark and grade in
Foundation Mathematics, Mathematical Modelling 1, Mathematical Modelling 2,
Introduction to Linear Dynamical Systems and Introduction to Analysis and
Multivariable Calculus for the 2012-2014 academic years (six semesters).
Information about the sample sizes can be found in Table 1.
The class sizes of the six semesters of Mathematical Modelling 2 were 286, 560,
235, 574, 274 and 483 respectively.
Students are assigned grades based on their final marks: a Fail (Z) is 0% to
49%, a Pass (P) is 50% to 64%, a Credit (C) is 65% to 74%, a Distinction (D) is 75%
to 84%, and a High Distinction (H) is 85% to 100%.
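The grade bands can be written as a simple lookup; this is a minimal sketch using exactly the boundaries listed above:

```python
def grade(final_mark):
    """Map a final percentage mark to the grade bands described above."""
    if final_mark < 50:
        return "Z"  # Fail, 0-49%
    if final_mark < 65:
        return "P"  # Pass, 50-64%
    if final_mark < 75:
        return "C"  # Credit, 65-74%
    if final_mark < 85:
        return "D"  # Distinction, 75-84%
    return "H"      # High Distinction, 85-100%
```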
Focus groups were also conducted by staff not involved with the teaching or
administration of the subjects included in the study. Students self-selected to
participate, in line with ethics approval. Groups consisted of up to 10 students,
who responded to six set questions (on attitudes, confidence, stress, and the
structure and nature of assessment) and a concluding open-ended, catch-all
question.
4. Methodology
Mixed methods were used to assess the impact of Mastery Learning:
quantitative techniques were used to assess student achievement, though
students' perceptions of this achievement were also examined. Qualitative
techniques were used to assess the impact of Mastery Learning on matters such
as confidence, anxiety, attitudes, and behaviour. Autumn outcomes were
compared with Autumn outcomes, and Spring outcomes with Spring outcomes
(representing the different intakes into the programs).
For pairwise comparisons of failure rates, z-tests were used (Stat Trek, 2015).
Where normality couldn't be assumed, chi-square tests were used. For cohorts
where the expected numbers of failures or successes were too small (<5), Fisher
Exact tests (McDonald, 2014), as implemented at Preacher (2015), were used.
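For illustration only, the two pairwise tests named above can be sketched with the standard library; this is a reconstruction of the textbook procedures, not the cited Stat Trek or Preacher implementations:

```python
from math import comb, sqrt
from statistics import NormalDist

def two_proportion_z(fail1, n1, fail2, n2):
    """One-sided z-test for a drop in failure rate (pooled standard error)."""
    p1, p2 = fail1 / n1, fail2 / n2
    pooled = (fail1 + fail2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 1 - NormalDist().cdf(z)  # p-value for H1: rate 2 < rate 1

def fisher_exact_one_sided(fail1, n1, fail2, n2):
    """One-sided Fisher Exact test for cohorts with expected counts < 5."""
    total_fail, total = fail1 + fail2, n1 + n2
    # P(second cohort has <= fail2 failures) under the hypergeometric model
    return sum(
        comb(total_fail, k) * comb(total - total_fail, n2 - k) / comb(total, n2)
        for k in range(fail2 + 1)
    )
```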
5. The outcomes
A one-sided t-test assuming unequal variances on the mean overall final mark
finds that the improvement in mark is significant at the 5% level when Autumn
2013 is compared with Autumn 2014, but is not significant when Autumn 2012 is
compared with Autumn 2014. These results are duplicated for the mean final
mark for the Mathematics and General Mathematics cohorts.
Median final marks also improve for nearly all background cohorts with
Mastery Learning. For the cohorts of primary interest, the General Mathematics
and Mathematics cohorts, the improvement in median using the z–test for the
General Mathematics cohort when Autumn 2012 is compared to Autumn 2014 is
significant at the 10% level (p = 0.051), and the increase in median when Autumn
2013 is compared to Autumn 2014 is significant at the 1% level (p = 0.00082). For
the Mathematics cohort, the Autumn 2012 comparison is not significant, while
the Autumn 2013 comparison is significant at the 1% level (p = 0.00036).
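The "one-sided t-test assuming unequal variances" used here is Welch's test; a minimal sketch of its statistic and Welch-Satterthwaite degrees of freedom follows (an assumed reconstruction, since the paper does not show its calculations; a p-value additionally requires the t distribution's CDF):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (mean(sample_a) - mean(sample_b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```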
In Table 3 we see the results for the Spring offering – the reduction in failure
rates on pairwise comparisons overall are significant (p = 0.00002 and 0.027
respectively). We can also see improvement in the failure rates of students with
backgrounds in Mathematics and General Mathematics. The improvement in
failure rates for students with a Mathematics background is significant (p =
0.004) when Spring 2012 is compared with Spring 2014 (using the Fisher Exact
Test). However, when Spring 2013 is compared with Spring 2014, the
improvement is not significant. The comparisons of the failure rates for students
with a General Mathematics background between Spring 2012 and Spring 2014,
and Spring 2013 and Spring 2014 are both statistically significant (p = 0.009 and
0.049 respectively). Here we again see favourable outcomes for the main target
group (students with a General Mathematics background).
Using a one-sided t-test with unequal variances on the mean overall Spring final
mark, we find that this improvement is significant at the 5% level when Spring
2012 is compared with Spring 2014, but is not significant when Spring 2013 is
compared with Spring 2014. For the students with Mathematics backgrounds,
the increases in mean and median were significant for the Spring 2012 to Spring
2014 comparison, but not when Spring 2013 was compared to Spring 2014. For
the General Mathematics cohort, the increases in the mean for both comparisons
were significant, but there were no significant improvements in the medians.
[Table fragment, three year-group columns, cohort sizes in parentheses. Ext. 2:
91 91 0% (2); - - -; 88 85 0% (3). Ext. 1: 78 81 0% (5); 88 94 0% (8); 84 85 0% (3).
No HSC maths: - - -; - - -; 11 2 100% (2).]
Science students also show a 67% decrease in failure rate in the Autumn cohort
and a 50% decrease in the Spring cohort. Comparisons of the failure rates for
Science students reveal that the reductions for both the Autumn and Spring
2013 cohorts are significant at the 5% level, while Fisher's Exact test yields no
significant reduction for the Autumn 2012 comparison (p = 0.123), though in
comparing Spring 2014 with Spring 2012 the improvement is significant
(p = 0.047).
[Table fragment. Science failure rates (cohort sizes in parentheses): 0.2 (25);
0.333 (6); 0.633 (30); 0.364 (11); 0.205 (39); 0.105 (19).]
Overall, we can see an approximately 50% reduction in the failure rate
compared with previous Spring semester results. This result is significant at the
5% level for both the Spring 2012 and Spring 2013 comparisons to Spring 2014.
Considering background cohorts, for those students with a Mathematics
background, Fisher's Exact test yields a significant reduction in failure rate at
the 5% level when the Spring 2014 Mathematics cohort is compared to the
Spring 2012 cohort (p = 0.0005), but the reduction is not significant when the
Spring 2014 cohort is compared to the Spring 2013 cohort (p = 0.185). Other
background cohorts from the target groups are too small to perform significance
tests.
The failure rate for students who had undertaken Foundation Mathematics
prior to taking Mathematical Modelling 1 has seen a reduction from 25% down
to 6% for a similarly sized cohort. Pairwise comparisons using a z-test
demonstrate that both reductions are significant at the 5% level (p = 0.00004 and
0.015 respectively). No strong conclusions can be drawn about the failure rates
for those students who did HSC mathematics and who did not take Foundation
Mathematics, as the background cohorts are too small. However, when students
with other backgrounds are included, we see a reduction in failure rate from
20% to 12%. Pairwise comparisons using a z-test demonstrate that only the
reduction from 2012 to 2014 is significant at the 5% level (p = 0.00002), while the
reduction from 2013 to 2014 is not significant (p = 0.116).
Using a one-sided t-test with unequal variances on the mean overall Spring final
marks, we find that this improvement is significant at the 5% level when Spring
2012 is compared with Spring 2014, but not when Spring 2013 is compared with
Spring 2014.
For the cohort that hadn't undertaken Foundation Mathematics (the top
groupings of Table 5), a one-sided t-test with unequal variances demonstrates
that there is a significant improvement (α = 0.05) between Spring 2012 and
Spring 2014 mean final marks. However, the increase in mean marks
for the Spring 2013 and Spring 2014 cohorts is not significant. Similar results
hold for the comparison of mean final marks for students who did undertake
Foundation Mathematics prior to enrolling in Mathematical Modelling 1.
[Table fragment, three year-group columns, cohort sizes in parentheses.
No FM, Ext. 1: 50 42 55% (11); 72 72 0% (3); 64 67 8% (12).
With FM, Ext. 2: 90 90 0% (1); 84 89 0% (3); 60 60 0% (1).
With FM, Ext. 1: 0 0 0% (0); 66 63 0% (3); 63 63 0% (1).
With FM, No HSC Maths: - - -; - - -; 54 54 0% (0/1).]
The improvements in overall mean final mark for both Autumn semester
comparisons are significant at the 5% level, and are in fact significant at the 1%
level, with the implementation of Mastery Learning. The improvements in
median for both Autumn comparisons were significant at the 1% level using the
Wilcoxon Rank Sum Test.
For the cohorts who achieved a Pass grade or better in Mathematical Modelling
1, it is not universally true that the mean and median final marks improved. The
improvements in mean and median final marks are significant (at the 1% level)
for both year comparisons of students who obtained Passes in Mathematical
Modelling 1, and when Credits and Distinctions for Autumn 2012 are compared
with Autumn 2014, but are not significant at the 5% level for Credits and
Distinctions when Autumn 2013 results are compared with Autumn 2014
results.
The reductions in failure rates overall for both Autumn comparisons are
significant at the 1% level. For students who received a Pass in Mathematical
[Table fragment. High Dist.: 63 64 0% (16); 84 87 0% (7); 74 77 0% (9).]
Comparisons of mean final mark (unequal variance) and failure rate were also
undertaken for each Mathematical Modelling 1 grade cohort for Spring classes.
The improvements for the Mathematical Modelling 1 Pass and Credit cohorts
are significant for all years compared (means, medians and failure rates at the
1% level). For the Spring Distinction cohort only the 2013 to 2014 failure rate
showed significant improvement, though this was at the 10% level.
[Table fragment. Dist.: 68 70 7% (91); 70 73 9% (90); 67 68 2% (100).
High Dist.: 79 80 2% (51); 82 83 0% (78); 77 77 1% (89).]
Overall then, the improvements in achievement with Mastery Learning are very
encouraging and would appear to ameliorate the previous observation that
students who obtained a Pass grade in Mathematical Modelling 1 were not as
likely to achieve a Pass or higher in Mathematical Modelling 2.
There is a slight reduction in the failure rate of Pass students, though this is not
significant at the 5% level (p = 0.270 and 0.391 respectively). The reductions in
failure rates overall are not significant (p = 0.221 and 0.187 respectively).
[Table fragment. General Math.: - - -; - - -; 69 69 0% (1).]
Reduced stress and anxiety. Mastery Learning can reduce stress and anxiety
through more frequent, lower-stakes tests that students can also sit multiple
times. Final exam stress is also reduced: students walk into the exam knowing
that they have already passed the subject:
I liked the fact that … the final exam wasn't as stressful knowing
that you have passed the subject after passing the mastery tests.
(SFS – Mathematical Modelling 2 Spring 2014)
I enjoyed doing the mastery tests as it helped me to stay on top of
the work and made it much more relaxed when preparing for the
final exam (SFS – Introduction to Analysis and Multivariable
Calculus Spring 2014)
However, a small number of students felt that there was too much testing
and pressure:
Just thinking about getting 80% on a mastery test is too [much]
pressure. (SFS – Mathematical Modelling 2 Spring 2014)
developing skills and attributes that can not only be translated to the rest
of their studies but to their later professional life.
For some students the experience was less positive – they felt that
they were learning how to do the tests not master fundamentals of the
subject:
I was able to pass because all I had to do was learn how to do a
specific set of questions for each mastery test. (SFS – Mathematical
Modelling 2 Spring 2014)
As a consequence of this feedback, in Autumn 2015 more variation in questions
was included in Mathematical Modelling 2.
Online testing. There are significant advantages to using online testing for the
mastery tests, including instantaneous feedback and the availability of
alternative practice problems:
[Proprietary online learning system] is a fantastic learning tool … I find
that my ability to learn through tools like this is greatly enhanced due to
the instantaneous feedback and the practice alternate problems. (SFS –
Mathematical Modelling 2 Spring 2014)
However, online testing was not universally popular, especially at the
beginning of the semesters when students were learning how to navigate the
environment:
I did not like the way assessments were conducted online. I spent
far too long trying to work out how to enter the answers on the
computer correctly rather than focusing on the actual material. (SFS
– Foundation Mathematics Spring 2014)
The … [online] assessment need[s] to be more reliable, the system is
unpredictable. (SFS – Mathematical Modelling 2 Spring 2014)
6. Conclusion
The aim of the paper was to contribute to the current debate on the ways to
address the lack of preparedness of some first-year students of STEM programs
(the Mathematics Problem). At the University of Technology Sydney, a form of
Mastery Learning has shown itself to address the lack of preparedness
successfully for many STEM students as well as for first-year mathematics
students overall – it has advantages over one-chance testing and heavily
weighted final exams. These advantages include improved academic
performance. Students also report increased independence and confidence, and
improved time management and retention of content. For many students the
learning experience is positive with less stress and anxiety. While some students
report poor experiences with the online learning and testing environment, most
appreciate the central role it plays in facilitating Mastery Learning. The poorer
experience of students in Mathematical Modelling 2 in 2014 resulted in the fine-
tuning of content in the sequences of formative and summative assessments.
Acknowledgements
Some funding for this project was received under the UTS First Year Experience
Grants.
We would like to thank Dr Kathy Egea for conducting and organising the
transcription of the focus groups. Thanks also go to Stephen Bush for his
assistance with Figure 1 and his encouragement and interest.
References
Anderson, S. A. (1994). Synthesis of research on mastery learning, US Department of
Education, Office of Education, Research and Improvement, Educational Resources
Information Center (ERIC). Retrieved 13 October 2014 from
http://files.eric.ed.gov/fulltext/ED382567.pdf
Barrington, F. & Brown, P. (2005). Comparison of year 12 pre-tertiary mathematics
subjects in Australia 2004-2005, Australian Mathematical Sciences Institute/
International Center of Excellence for Education in Mathematics (AMSI/ICE-EM)
report.
Barrington, F. & Evans, M. (2014). Participation in Year 12 Mathematics 2004 – 2013,
Retrieved 14 October 2014 from http://amsi.org.au/publications/participation-year-
12-mathematics-2004-2013/
Bloom, B. S. (1971). Mastery learning, In J. H. Block (Ed.), Mastery learning: Theory and
practice.
Brandell, G., Hemmi, K. & Thunberg, H. (2008). The widening gap - a Swedish
perspective, Mathematics Education Research Journal 20, 38-56.
Groen, L., Beames, S. Y., Coupland, M. P., Stanley, P., & Bush, S. (2013). 'Are science
students ready for university mathematics?', Proceedings of the Australian
Conference of Science and Mathematics Education (2013), University of Sydney,
Sydney.
Guskey, T. R. & Pigott, T. D. (1988). Research on group-based mastery learning
programs: A meta-analysis, Journal of Educational Research 81, 197–216.
Hawkes, T. & Savage, M. (eds.) (2000). Measuring the Mathematics Problem (London:
Engineering Council). Retrieved 28 October 2014 from www.engc.org.uk/about-
us/publications.aspx
Heck, A. & van Gastel, L. (2006). Mathematics on the threshold, International Journal of
Mathematical Education in Science and Technology, 8 (15), 925-945.
Hoyt, J. E. & Sorensen, C. T., (2001). High School preparation, placement testing, and
college remediation, Journal of Developmental Education 25, 26-34.
Kulik, C. C., Kulik, J. A. & Bangert-Drowns, R. L., (1990). Effectiveness of mastery
learning programs: A meta-analysis, Review of Educational Research 60, 265–299.
Levine, D. M., Berenson, M. L. & Krehbiel, T. C., (2008). Statistics for Managers Using
Microsoft Excel, Retrieved 11 March 2015 from:
http://www.prenhall.com/behindthebook/0136149901/pdf/Levine_CH12.pdf
Luk, H. S., (2005). The gap between secondary school and university mathematics,
International Journal of Mathematical Education in Science and Technology 36, 161-
174.
McKenzie, P., Rowley, G., Weldon, P. R. & Murphy, M., (2011). Staff in Australia's
schools 2010: main report on the survey. Retrieved 24 October 2014 from
http://research.acer.edu.au/tll_misc/14
Hu, Ying-Hsueh
Tamkang University, English Department
New Taipei City, Taiwan
1 Introduction
The Sapir-Whorf Hypothesis claims that one's language, depending on whether
it is the strong or the weak hypothesis we are referring to, shapes or influences
our world view. After the hypothesis was formulated in the early 20th century,
a series of heated debates ensued, followed by countless studies and
experiments to refute or support the hypothesis. Evidence so far has
suggested that both the lexicon and the grammatical structure of a language do
seem to influence certain key conceptualizations such as color, space, time,
gender, and the event structure of various motions (Casasanto & Boroditsky,
2007). The evidence, in turn, supports the notion that language is also the product
and manifestation of human conceptualization faculties that have been
influenced by the physical, social and cultural environment humans live in.
This linguistic insight opens up an exciting avenue for foreign language teaching
and learning, particularly with respect to the role of culture, which comprises the
values and beliefs a speech community shares. It also highlights the necessity of
teaching language and culture simultaneously. One fruitful area is the
vocabulary itself. Works on concept transformation in the naming of lexical items
establish that the lexicalization process of a given language should consist of
that which is "tantamount to category formation at the level of a whole culture"
(Györi, 1998, p. 99). In other words, the formation of a cultural category
inevitably involves linguistic coding, as there is no other way for conceptual
categories to spread in a culture and to become explicitly part of the cognitive
structures of the individual members of that culture. In this light, a closer look
at the 214 radicals that structure the thousands of Chinese characters frequently
used today reveals a rich conceptual system of categorization. It groups experiences of
various interactions with the natural, social and cultural worlds ancient Chinese
lived in. This conceptual system, as Lakoff (1987) and Johnson (1987) argue,
converges on cognitive mechanisms of prototype, image schema, metaphor and
metonymy. They in turn helped create more vocabulary through these radicals to
form numerous compound characters and words as the language continued to
evolve. Such insight forms the basis of a Chinese e-learning course, CRILL
(Chinese Radical Integrated Language Learning) the researcher developed,
which aims to explore its pedagogical validity.
The findings of the study may have significant pedagogical implications for
teaching CFL, particularly in terms of writing, reading and cultural learning. This
study also provides some insight into the merit of holistic teaching by
expounding on the metonymic and metaphorical concepts encoded in lexical
items, and thus lending increasing support to the practical application of CL in
modern language classrooms.
Some linguists argue that metaphorical concepts may have emerged from
metonymic ones (Barcelona, 2000; Radden, 2000). Because of this connection,
metaphor and metonymy are often intertwined to form "metaphtonymy"
(Goossens, 1990). Consider the example "She could read my mind", given by
Ruiz de Mendoza Ibáñez (2000, p. 121). He explains that "read someone's mind"
combines the metaphor MIND IS A BOOK and the metonymy MIND STANDS
FOR THOUGHT, giving rise to the eventual understanding "She understands
me". Metaphors and metonymies are often found at the phrasal or sentential
level; however, they also help form lexical items.
Although Chinese has an entirely different writing system, it also extends its
lexicon following similar rules as those of PIE. A Chinese character consists of
one or more components put together in various ways in a typically square-shape
format. As in most PIE languages, the way certain components are combined to
form new words or new meanings is not an arbitrary process. Based on a
printed posthumous work of the Chinese scholar Xu Shen (c. 58-148 AD), the
Shou Wen Jie Zi ("Explains Simple Characters and Compounds"), published
c. 121 AD, there were 10,516 characters arranged under 534 to 544 primitive
symbols, which are the origin of the 214 radicals used today. The most common
way of forming characters is to
combine a radical component that stands for meaning and a component that
stands for sound. This phono-semantic principle created nearly 95% of
commonly used characters in modern Chinese (Dictionary of Chinese Character
Information, 1988). This high rate reveals the significance of radicals in Chinese
characters. Most of them are pictograms, which, as indicated in Shou Wen Jie Zi,
can be divided into 1) Cosmology and Geology, 2) Plants, 3) Zoology, 4) Human
Body Parts, 5) Artifacts and Other Man-made Objects, 6) Clothing, and 7)
Housing and Shelters. The radicals found in the current frequent words from
both Taiwan and mainland China, of course, exceed these pictograms. Some
radicals may fall under other categories not mentioned in Shou Wen Jie Zi such as
Colors and Shapes, and different scholars may come up with slightly different
groupings (Zhou, 2012). Such groupings do not represent arbitrary divisions of
the world; they in fact converge on the cognitive capacities of the human mind.
These concepts are all based on cognitively salient prototypes shared by the
speakers of a community; they are folk categories rather than scientific ones
(Ungerer & Schmid, 1996, p. 19).
In short, there are characters that are more transparent than others in terms of the
predictability of the radical. The less transparent characters are those whose
radicals are not directly related to their overall meaning. For example, Dunlap et
al. investigated a cluster of words that have the radical 禾 'grain, rice plant', and
they listed some characters that are not directly related to 'grain', including
稅 'tax', 稱 'to weigh, to call', and 稍 'a little bit'; these are supposedly more
difficult to learn and recall. However, as Zhou's (2013) study of the
radical/character 土 'earth, soil' in Shou Wen Jie Zi demonstrates, it was
through the principles of CMT (and semantic field) that words related to 土 had
emerged. Hence, on closer inspection, applying CMT to the etymology of those
禾 words, one finds that they are still related to rice grain through various
degrees of metaphorical and metonymic extension. For instance, 稅 is a
combination of 禾 and 兑 'exchange'; considering the importance of agriculture
in ancient Chinese society, using rice grain for tax payment was probably
practiced in those days. In this case, 禾 stands for the money or commodity
used to pay taxes, a metonymic principle that makes sense in a cultural context.
Shen (2004) espouses deep learning, which uses semantic cues in teaching CFL;
it would therefore be of interest to investigate the role of metaphor and
metonymy in even deeper learning. The challenge is how to make metaphorical
and metonymic clues accessible to learners so that they become teachable and
learnable. The study discussed below explores this issue so as to answer the
research questions raised earlier.
3 The Study
Experiment Material: CRILL
Based on the theoretical framework of embodied cognition, folk categories,
metaphor, and metonymy as discussed above, an on-line, self-learning,
asynchronized course CRILL (Chinese Radicals Incorporated Language
Learning) for learning the Chinese writing system and culture for English
speakers was designed between 2008 and 2010. Since September 2010, it has been
made accessible to students who enrolled at the university in northern Taiwan.
In view of this principle, there are fifteen units with the first seven units dealing
with human body parts (outer and inner organs), followed by four units with
nature, and the last four with plants. The figure (Figure 1) below shows the table
of contents of CRILL as found on the website.
Under the heading "Body", learners will find the radicals for eye 目, ear 耳,
mouth 口, hand 手, foot 足, heart 心, and flesh 肉/月, whereas the radicals for
sun 日, moon 月, mountain 山 and water 水/氵 are under "Nature", and the
radicals for bamboo 竹, wood 木, grass 草, and rice 米 are under "Plant". These
radicals were chosen as they represent the most common concepts and thus
have generated a rich vocabulary in Chinese, much of it suitable for beginners.
Each unit is divided into three phases of learning: lead-in activities (Section 1 to
5), core learning materials (6-10), and post-learning exercises (11-13). As can be
seen, the lead-in activities include 1) a list of learning goals, 2) warm-up activities
that ask learners to think iconically about a body part or a natural/artificial
object, 3) matching pictograms with the radical/character of the unit, 4)
animation of the evolution of the radical/character, and 5) the recognition of the
radical/character among various characters. The figure below (Figure 2) shows
what learners see when one clicks on Section 4 for the historical evolution of 足
(the foot) and 走 (to walk), two radicals which are characters as well for the same
body part, ―foot‖, in Unit 3.
The second phase of learning (Sections 6 to 10) involves several compound words
as well as fixed expressions that are commonly associated with the radical(s) of
the unit. These phrases, with some being polysemous, are a mixture of concrete
and abstract meanings so that learners can see the role metaphor and metonymy
play in meaning extension. For example, in Unit 3, as seen in Figure 2, two
radicals which are characters as well are introduced: 足 and 走, with the former
representing the physical body part, foot, while the latter represents the
motions performed with the foot. In short, the same body part gives rise to two
related concepts represented by two slightly different icons. When 足 (the
physical foot) functions as a radical that helps create further semantic items, it is
written as 𧾷, which can be seen in many motion verbs that involve various
actions involving the foot such as 踢 (to kick), 跑 (to run), 蹲 (to squat), and 跳
(to jump). Nouns such as 路 (the road), 跡 (track or trace) are semantic items
extended from various interactions of the foot with certain objects. All these
words are phono-semantic compounds encoding concepts that are, according to
CM theory, metonymic, namely BODY PART STANDS FOR ACTION, and
ACTION STANDS FOR CONCEPT. They are relatively concrete. However, 足 is
polysemous, like most words in all languages. One of its senses which is more
abstract in fact means ―satisfied‖, deriving from the metaphor BODY IS A
CONTAINER, so expressions such 足 够 (enough, sufficient) and 滿 足
(satisfied, content) capture this metaphorical sense.
Together with these phrases, there are also sentence patterns and sentence
building activities included in these sections that help to provide some kind of
context for association. Both phrases and sentences have all been controlled in
terms of frequency and familiarity for the beginner's level. The final phase
consists of post-learning exercises, which usually use songs, poems, or nursery
rhymes associated with the radical/character of the unit (Section 11).
Section 12 provides exercises with feedback for learners to gauge their own
learning outcome. Finally, each unit ends with an idiom that contains the
radical/character of the unit with a story explaining the origin of the idiom.
Procedure
In order to ascertain the perceived efficacy of CRILL and answer the questions
raised earlier, a survey consisting of five open-ended questions was designed and
distributed from 2012 to 2013 to twenty-nine international students who were
studying at the university where the researcher works. They were all enrolled in
the Fall-semester course entitled "Cross-cultural Learning" offered by the
researcher.
CRILL was an integral component, among other course materials, of the course
syllabus, and it was assigned as self-study homework over six weeks each
semester after the mid-term exam. Prior to that, participants had been taught
about the concepts of metonymy and metaphor existing in all languages and they
were assigned to specific tasks in identifying those found in their own language.
They would present their findings in class so that they could also make some
cross-cultural comparisons. Following that, the organization of Chinese radicals
and the metaphorical and metonymic clues in Chinese radicals/characters were
incorporated. These exercises were meant to prepare them for CRILL. Once they
started with CRILL, they could decide when and how long they wanted to spend
on CRILL. They were encouraged to raise questions in class should they have
encountered any issues during their self-study. The website is equipped with a
log recording the frequency and time they actually spent on CRILL, although this
data was not taken into account in the final analyses.
At the end of the course, which lasted sixteen teaching weeks in total,
participants would take a test on various course materials and CRILL. At the end
of this test, a questionnaire of five open questions was administered to
investigate the efficacy of CRILL. By completing these questions, participants
would receive
extra points for the test.
Participants
The students that enrolled in the "Cross-cultural Learning" course from 2012 to
2013 came to Taiwan either as exchange students staying six months to one year,
or as international students pursuing an undergraduate degree at the university.
Their Chinese proficiency would be considered pre-intermediate at the time of
enrolment, although speaking Chinese fluently was not a prerequisite for
attending the course since the course was mostly conducted in English.
Nonetheless, they all had previous Chinese writing experience before the course
started.
Their writing experience differed according to the region they grew up in and
the language they spoke at home. Among the twenty-nine students, thirteen
speak Indo-European languages and come from Europe and the Americas
(N=13), nine speak Japanese and come from Japan (N=9), while seven come
from other Asian countries (OACs, N=7), including Korea, Malaysia, and
Vietnam. Such grouping is of particular interest when considering that Japanese
learners learn Kanji, a script based on Chinese characters, at a young age.
Would Japanese learners also find the radical-based learning, as put forward in
CRILL, beneficial to them? In short, would early and long exposure to Chinese
writing make it easier or more difficult in understanding the explicit knowledge
for the formation of Chinese characters compared with speakers of different
All questions address research questions one and two, namely whether the
design of CRILL can benefit and motivate learners in learning Chinese
characters and culture, with questions D and E focusing on their critical
comments on CRILL. As for research question three, regarding learners' language
background and their evaluation of CRILL, data elicited from the five questions
(A, B, C, D, and E) in the questionnaire were further analyzed according to the
participants' region of origin. The responses for each question were
categorized, coded, and calculated in terms of percentage, so as to yield an
overview of the participants' experience and evaluation of CRILL. For question
A, regarding what they had not known about the Chinese language before they
started with CRILL, participants' answers could be grouped into three codes: 1.
No familiarity with the radicals, 2. Familiarity with the radicals, and 3.
Some familiarity with the radicals. For question B, addressing what they had
not known about Chinese culture before they started with CRILL, the coding was
as follows: 1. Culture and idioms, 2. Culture and characters, 3. Neither of the
above, 4. Festivals/culture, and 5. None. For question C, which asked what they
liked most about CRILL (for example, in what ways it was helpful for their
learning and reviewing), their responses were categorized into: 1.
Sentence/grammar, 2. Characters, 3. Idioms, and 4. Games/songs.
Responses to questions D and E received similar coding, as they address
related issues. The coding for question D is as follows: 1. Too easy,
2. Confusing translation, pinyin, or pronunciation, 3. Repetition, 4. Insufficient
feedback, reading, and composition input, 5. Insufficient examples (for the lexical
items and sentence patterns taught in each unit), 6. Silly, 7. Technical issues, and
8. No problems. Similarly, the coding for question E was: 1. Too easy: should
have more levels, 2. Correct confusing translation/pinyin/pronunciation, 3.
Reduce repetition, 4. Give feedback; more reading and composition input, 5.
Create more linguistic examples, 6. No songs, 7. Improve technical issues, 8. No
problems, and 9. No change. With these codes, it was possible to measure
tendencies in terms of percentage.
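The tallying step just described can be sketched in a few lines of Python. This is an illustration only: the function name and the sample answers are hypothetical, not the study's actual data.

```python
from collections import Counter

def code_percentages(coded_answers, n_codes):
    """Tally coded questionnaire answers and express each code as a
    percentage of all responses, rounded to the nearest whole percent."""
    counts = Counter(coded_answers)
    total = len(coded_answers)
    return {code: round(100 * counts.get(code, 0) / total)
            for code in range(1, n_codes + 1)}

# Hypothetical answers for a question coded 1-4 (not the real data).
answers = [2, 2, 1, 4, 2, 3, 2, 2, 1, 2]
print(code_percentages(answers, 4))  # {1: 20, 2: 60, 3: 10, 4: 10}
```

Because participants' free-text answers are first reduced to a small code set, a simple frequency count like this is all the quantification the study requires.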
Although most participants were not familiar with the way radicals and
characters were presented in CRILL, they liked this design and considered it
helpful to their learning, with 69 percent of the participants expressing a
positive perception of it, in contrast to the other components (see Table 1).
Table 1
Overall Results for Question C: What do you like most about CRILL?
Coding N Percentage
1 29 11%
2 29 69%
3 29 9%
4 29 11%
Note: 1= Sentence/grammar 2=Characters 3=Idioms 4=Games/songs
The results of questions D and E (Tables 2 and 3) provide further insight
into participants' criticisms and suggestions. The results of question D (Table 2)
show that the level of difficulty of the materials ranked as the highest complaint
(28%), followed by "confusing translation and pronunciation" (17%). "Insufficient
feedback/reading/composition input" and "insufficient examples for the lexical
items and sentence patterns" (10% each), along with "technical issues" (10%),
were also among the major complaints. A small number of participants did find
the design somewhat boring and childish, referring to the repetition and to the
songs and nursery-rhyme parts (7% and 3% respectively). However, 17 percent of
them found no major problems in the design of CRILL.
As for suggestions for improvement (Table 3), participants ranked having "more
feedback/reading/writing practice" (29%) as the most important, followed by
having "technical issues corrected" (24%). "Reducing repetition" and "correcting
confusing translation and pronunciation" came next (17% and 14% respectively).
A small number of participants (7%) suggested "adding higher levels to the
existing CRILL curricula" in the future, and a very small number (3%) would
have liked to see "more linguistic examples" added to either the fixed
expressions or the sentence patterns.
Table 2
Overall Results for Questions D: What do you NOT like about CRILL?
Coding N Percentage
1 29 28%
2 29 17%
3 29 7%
4 29 10%
5 29 10%
6 29 3%
7 29 10%
8 29 17%
Note: 1=Too easy 2=Confusing translation/pinyin/pronunciation 3=Repetition
4=Insufficient feedback, reading and composition input 5= Insufficient examples
6=Silly 7=Technical issues 8=No problems
Table 3
Results of Question E: Suggestions to improve CRILL
Coding N Percentage
1 29 7%
2 29 14%
3 29 17%
4 29 29%
5 29 3%
6 29 0%
7 29 24%
8 29 3%
9 29 3%
Note: 1=Have more levels 2=Correct confusing translation/pinyin/pronunciation
3=Reduce repetition 4=Give feedback; more reading and composition input 5=
Create more linguistic examples 6=No silly songs 7=Improve technical issues
8=No problems 9=No change
Results by Group. When the results presented above are broken down by
region, with participants' language background taken into consideration, the
picture for each region somewhat resembles the overall results. However, there
are some subtle differences which can help answer research question three,
regarding whether participants' language background would affect their
evaluation of CRILL and their motivation to learn Chinese.
with CRILL, whereas none from Japan claimed so. On the other hand, about one
third of the participants from Japan (33%) said that they were somewhat familiar
with radicals/characters, slightly more than those who claimed so in the OACs
group (28%). Only 15 percent of the participants from Europe/Americas
claimed to be somewhat familiar with the radicals and characters.
As for criticism, many in the Europe/Americas and Japan groups thought the
skill level taught in CRILL was too easy for them (38% and 33% respectively),
whereas none of the participants from the OACs said so. Similarly, the OACs
group had the highest proportion of participants who did not find any major
issue with the methodology of CRILL (42%). About a fifth of the participants in
the Japan group (22%) also found no serious issues with CRILL. However, none of
the participants in the Europe/Americas group expressed such an evaluation.
When we consider the ranking of the criticisms by group, it becomes clear that
the Japan and OACs groups emphasized different shortcomings than the
Europe/Americas group did. The latter considered "not having feedback for
exercises and no reading/composition input" (coding 4) a major drawback,
whereas the former two groups did not share this criticism at all. They instead
thought that the "number of examples for the phrases and sentence patterns was
insufficient" (coding 5) and more of a hindrance to their learning. Lastly, all
three groups believed that there should be "more feedback on their exercises,
with additional reading/composition input and practice" (coding 4), along with
"improving technical malfunctions" (coding 7). They also seemed to agree that
"repetition needed to be reduced" (coding 3). The suggestions on other issues
were less unanimous.
4 Discussion
Benefit and Motivation
Although the analyses presented here are limited by the small sample size and
the preliminary nature of this study, some tentative observations can still be
drawn. Firstly, in answer to research questions one and two, when the overall
data are examined without group division, it is safe to argue that the
cognitive approach to teaching Chinese radicals/characters, which explicates
some metonymic and metaphorical principles of word formation, can benefit and
motivate the learning of Chinese writing and culture.
In light of this, the design of CRILL marks an important step towards
developing efficient approaches and teaching materials for the Chinese writing
system. Similar to semantic cues, CRILL seeks to guide learners with patterns
and principles so that learning to write and read is not an arbitrary and
mundane task. Although semantic cues are useful for recognizing and predicting
the meaning of characters, they work best with transparent characters. The
approach applied in CRILL went further by incorporating metaphorical and
metonymic clues, so that less transparent characters, compound words, and
polysemy can be better explained and, thus, better recognized and retained.
Above all, the cognitive approach highlights the rich cultural background
encoded in the characters and compound words/phrases. If teaching a language
involves passing on cultural knowledge at the same time, CRILL is certainly
more satisfying in this respect, and the results of the study arguably support
this assertion.
Language Background
In addressing whether linguistic background or previous knowledge of the
cognitive nature of Chinese radicals/characters could motivate and benefit
learners, the results suggest that Indo-European speakers from Europe and the
Americas tended to enjoy the cognitive method provided in CRILL more than
Japanese speakers did. Even speakers from OACs viewed this approach more
positively than those from Japan. This finding came as a surprise, given that
the number of participants in each group who had had no knowledge of Chinese
radicals before their participation in the CRILL program was fairly
comparable.
According to the Japanese participants, they had to learn Kanji from a young
age but had never been taught explicitly about the cognitive principles
involved in the composition of characters. Despite this lack of prior
knowledge, they did not find the approach as engaging and motivating as their
counterparts in the Europe/Americas and OACs groups. At this point it is
difficult to determine whether the difference in attitude stems from a language
issue, that is, whether familiarity with Kanji gave these participants the
impression that the cognitive clues encoded in the radicals/characters are not
that challenging or interesting. This interpretation is supported by their own
admission in class to the researcher that they found Chinese (character)
writing relatively easy, while learners from Europe/Americas found the
opposite: the Europe/Americas learners considered speaking easier than writing,
whereas the Japanese learners regarded speaking as harder to master.
Criticism of CRILL
Judging from participants' criticism and suggestions, they tended to be
dissatisfied with on-line language learning in general and the technical issues
associated with it. Research has shown in several cases the efficacy of on-line
learning over traditional face-to-face classrooms, at least in higher education
settings (Xu & Jaggars, 2013). However, failure does occur when inadequately
equipped e-learning systems are implemented (Hara & Kling, 2000; Zhang et al.,
2004). The results from questions D and E reflect some of the key challenges
many on-line language learning tools face nowadays: feedback and technical
issues. For language learners using an interactive, self-paced, asynchronous
on-line learning tool, it is frustrating to be unable to check their own input.
Although CRILL is equipped with some feedback mechanisms for character writing,
vocabulary, and grammar practice, the technology involved is fairly basic and
breaks down occasionally due to limitations of available technology and
funding. These issues can certainly create frustration.
Furthermore, the study could not provide a definite answer to the question of
the efficacy of CRILL, as pre- and post-tests were not administered. It also did
not examine how the CRILL approach can facilitate greater learning in writing
and reading Chinese characters. The study, at most, examined participants'
perceptions based on self-reporting, and it is also unclear whether the
learning approach adopted in CRILL is more effective than other approaches, as
there were no control groups. Better experimental designs are undoubtedly
required for any future research.
This study nevertheless carries pedagogical implications for effective learning
approaches and on-line tools in the future. In fact, a new Chinese on-line
learning website has been under development that seeks to rectify the
shortcomings of CRILL while continuing to develop its positive features. These
undertakings hope to demonstrate the importance of combining sound theories
with viable practices in language teaching and learning.
5 Conclusion
This study set out to explore the viability of a teaching approach based on the
linguistic insights gained from CL in recent years. The results so far
establish its overall merit in the field of TCF by providing a holistic view of
the Chinese writing system and of how it is deeply rooted in the social and
cultural worlds of Chinese-speaking communities. The study
demonstrates that understanding metaphor and metonymy in lexicon extension
can not only enhance the learning of Chinese characters but also promote the
understanding of the social and cultural knowledge encoded in them. Such
conclusions certainly require caution as it was found that this approach may suit
learners differently. The learner differences could be partly individual or partly
cultural. As CRILL was originally designed with American and European
learners in mind, it is of great interest to find that the Japanese learners in this
study were not as motivated by the approach as their European and American
counterparts. This finding is of value for any future development of teaching
materials and pedagogy.
Acknowledgments
The design and study of CRILL were funded by the Ministry of Science and
Technology (MST), Taiwan (99-2631-S-032-001). Any opinions, findings,
conclusions, or recommendations expressed in this material are those of the
author and do not necessarily reflect the views of MST.
References
Abreu, A. S., & Vieira, S. B. (2009). Learning phrasal verbs through image schemas: A
new approach. Retrieved from http://ssrn.com/abstract=1491689
Barcelona, A. (2000). On the plausibility of claiming a metonymic motivation for
conceptual metaphor. In A. Barcelona (Ed.), Metaphor and metonymy at the crossroads
(pp. 31-58). Berlin: Mouton de Gruyter.
Boers, F. (2004). Expanding learners' vocabulary through metaphor awareness: What
expansion, what learners, what vocabulary? In M. Achard & S. Niemeier (Eds.), Cognitive
linguistics, second language acquisition, and foreign language teaching (pp. 211-232).
Berlin, New York: Mouton de Gruyter.
Boers, F., Eyckmans, J., & Stengers, H. (2007). Presenting figurative idioms with a touch of
etymology: More than mere mnemonics? Language Teaching Research, 11, 43-62.
Boers, F., & Lindstromberg, S. (2009). Optimizing a lexical approach to instructed second
language acquisition. Basingstoke: Palgrave Macmillan.
Casasanto, D., & Boroditsky, L. (2007). Time in the mind: Using space to think about time.
Cognition, 106(2), 579-593. doi:10.1016/j.cognition.2007.03.004
Dictionary of Chinese Character Information [汉字信息字典] (1988). Shanghai, China:
Science Press [科学出版社].
Geeraerts, D. (Ed.). (2006). Cognitive linguistics: Basic readings. Berlin: Mouton de Gruyter.
Dirven, R. (1985). Metaphor as a basic means for extending the lexicon. In W. Paprotte &
R. Dirven (Eds.), The ubiquity of metaphor: Metaphor in language and thought (pp.
85-119). Amsterdam: John Benjamins Publishing Company.
Dunlap, S., Perfetti, C. A., & Liu, Y. (2011). Learning vocabulary in Chinese as a foreign
language: Effects of explicit instruction and semantic cue of reliability. Retrieved
from: http://www.pitt.edu/~perfetti/PDF/DunlapLearningVocabulary.pdf
Goossens, L. (1990). Metaphtonymy: The interaction of metaphor and metonymy in
expressions for linguistic action. Cognitive Linguistics, 1(3), 323-340.
Györi, G. (1996). Historical aspects of categorisation. In E. H. Casad (Ed.), Cognitive
linguistics in the redwoods (pp. 175-206). Berlin, New York: Mouton de Gruyter.
Györi, G. (1998). Semantic change, semantic theory and linguistic relativity. Duisburg:
L.A.U.D.
Hansen, J., & Stanfield, J. B. (1981). The relationship of field dependent independent
cognitive style to foreign language learning achievement. Language Learning, 31,
349-369.
Hara, N., & Kling, R. (2000). Students' distress with a Web-based distance education
course: an ethnographic study of participants' experiences. Information,
Communication and Society, 3(4), 557–579.
Hu, Y. H., & Fong, Y. Y. (2010). Obstacles to CM- guide L2 idiom interpretation. In S.
Knop, F. Boers & A. Rycker (Eds.), Fostering language teaching efficiency through
cognitive linguistics (pp. 293-316). New York: Mouton de Gruyter.
Hu, Y. H., & Ho, Y. C. (2009). Prepositions we live by: Implications of the polysemy
network in teaching English prepositions in and on. In B.
Lewandowska-Tomaszczyk & K. Dziwirek (Eds.), Studies in cognitive Corpus
linguistics (pp. 336-370). Frankfurt: Peter Lang Verlagsgruppe.
Johnson, M. (1987). The body in the mind. Chicago: The University of Chicago Press.
Joy, S., & Kolb, D. (2007). Are there cultural differences in learning style? International
Journal of Intercultural Relations, 33(1), 69-85.
Lakoff, G. (1987). Women, fire and dangerous things. Chicago: University of Chicago Press.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago
Press.
Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its
challenge to western thought. New York: Basic Books.
Littlemore, J., & Low, G. (2006). Metaphorical competence, second language learning,
and communicative language ability. Applied Linguistics, 27(2), 268-294. doi:
10.1093/applin/aml004
Ma, K. Y. (1997). The relation between the writing system and the use of metaphor in English
and Chinese (Unpublished master‘s thesis). University of North Dakota, Grand
Forks, ND.
Oxford, R. L. (1990). Missing link: Evidence from research on language learning styles
and strategies. In J. E. Alatis (Ed.), Linguistics, language teaching acquisition: The
interdependence of theory, practice and research (pp. 438-460). Georgetown University
Round Table on Language and Linguistics, 1990.
Radden, G. (2000). How metonymic is metaphor. In A. Barcelona (Ed.), Metaphor and
metonymy at the crossroads (pp. 59-78). Berlin: Mouton de Gruyter.
Ruiz de Mendoza, F. J. (2000). The role of mapping and domains in understanding
metonymy. In A. Barcelona (Ed.), Metaphor and metonymy at the crossroads (pp.
109-132). Berlin: Mouton de Gruyter.
Shen, H. H. (2000). Radical knowledge and character learning among learners of Chinese
as a foreign language. Linguistic Studies, June, 85-93.
Shen, H. H. (2004). Level of cognitive processing: Effects on character learning among
non-native learners of Chinese as a foreign language. Language and Education, 18,
167-182.
Shen, H. H., & Ke, C. (2007). Radical awareness and word acquisition among nonnative
speakers of Chinese. The Modern Language Journal, 91(1), 97-111.
Sweetser, E. (1990). From etymology to pragmatics: Metaphorical and cultural aspects of
semantic structure. Cambridge: C.U.P.
Talmy, L. (1988). Force dynamics in language and cognition. Cognitive Science, 12, 49-100.
Tyler, A., & Evans, V. (2004). Applying cognitive linguistics to pedagogical grammar:
The case of over. In M. Achard & S. Niemeier (Eds.). Cognitive
linguistics, second language acquisition, and foreign language teaching, (pp. 257-280).
Berlin: Mouton de Gruyter.
Ungerer, F., & Schmid, H-J. (1996). An introduction to cognitive linguistics. London:
Longman.
Wang, J., & Koda, K. (2013). Does partial radical information help in the learning of
Chinese characters? In E. Voss, S-J D. Tai & Z. Li (Eds.), Selected proceedings of the
2011 second language research forum: Converging theory and practice (pp. 162-172).
Somerville, MA: Cascadilla Proceedings Project.
Xu, D., & Jaggars, S. S. (2013). The impact of online learning on students' course outcomes:
Evidence from a large community and technical college system. Economics of
Education Review, 37, 46-57.
Xu, S. (121 AD). Shuo wen jie zi [Text in Chinese].
Yasuda, S. (2010). Learning phrasal verbs through conceptual metaphors: A case of
Japanese EFL learners. TESOL Quarterly, 44(2), 250-273.
Zhang, D., Zhao, J., Zhou, L., & Nunamaker, J. (2004). Can e-learning replace classroom
learning? Communications of the ACM, 45(5), 75-79.
Zhou, L. J. (2012, May). The teaching of Chinese characters: A fun way of learning
characters to decode Shuo wen jie zi. Retrieved from
http://mandarin.nccu.edu.tw/data/teacher/pdf [Text in Chinese].
Zhou, Y. L. (2013, May). Understanding the ontology of the radical/character 土
(Soil/Earth) in Shuo wen jie zi through conceptual metaphor theory and semantic
field theory. Paper presented at the 15th Symposium on Chinese Philology in Central
Taiwan 第十五屆中區文字學學術研討會, Chinese Cultural University, Taipei. [Text
in Chinese].
F. Moen
Norwegian University of Science and Technology,
Trondheim, Norway
R. Giske
University of Stavanger, Norway
R. Høigaard
University of Agder, Norway
Abstract. This study explores coaches' beliefs about what they think their
athletes expect from them as coaches in sport. A sample of 36 different
statements representing different opinions about coach behaviours, and about
how coach behaviour affects athletes' motivation, performance, focus, and
emotions, was presented to 23 Norwegian coaches working in high schools
specializing in elite sports. The participants were coaches in various sport
disciplines and were asked to consider and rank-order the statements using a
Q-sorting procedure. The authors discuss their analysis from a Q-methodological
factor analysis. In general, the coaches share some common viewpoints,
represented in two different factors (consensus). Each factor represents
congruent views about expectations of the role of a coach in sport. The
dominant view (factor A) is that coaches believe their athletes expect
involvement leadership, whereas servant leadership was dominant in factor B, a
view that only a few of the coaches shared.
1. Introduction
The question as to what coaching behaviour is constructive in order to develop
the athlete in sport has occupied researchers and practitioners for several decades,
and the influence of the coach on the athletes is well documented (Abraham,
Collins, & Martindale, 2006; Blom, Watson II, & Spadaro, 2010; Côté & Gilbert,
2009; Myers, Chase, Beauchamp & Jackson, 2010). When a coach emphasizes
training and instruction, and gives positive feedback that recognizes and rewards
good performance, athletes are more satisfied with their leadership behaviour
(Chelladurai, 2007). Similarly, a study performed by Moen, Høigaard, and Peters
(2014) found that athletes who were most satisfied with their performance
2. Method
A Q methodology was chosen because this methodology in general investigates
subjectivity related to a defined topic (Brown, 1980). Subjectivity in all its
forms, including beliefs, views, experiences, and opinions, is investigated in
Q methodology (Brown, 1996). The methodological approach is completed through
five tasks: 1) selecting participants, 2) defining a concourse, 3) developing a
Q sample, 4) completing a Q sorting, and 5) completing data analysis (Brown,
1996; Moen & Garland, 2012; Watts & Stenner, 2012).
2.1 Participants
The data in this study were collected from 23 Norwegian coaches (mean 46 yrs.,
range 26-64 yrs.). Their average education was four years at university level,
with an average of 19 years of practice as a coach. The coaches were recruited
from one high school specializing in various sport disciplines (e.g.
cross-country skiing, biathlon, track and field, football, volleyball, and
handball). This particular high school was selected because of its long
experience in developing youth athletes into top international athletes. The
coaches work with athletes ranging from 16 to 19 years old, with performance
levels varying from top national level to top regional level.
beliefs about the current research question (Stephenson, 1986). The statements
were written from an athlete's point of view, for example: "My coach does not
have to be open to questions." The concourse was then reduced to a meaningful Q
sample in order to create a balanced set that stimulates the Q sorters (the
coaches) to rank-order the subjective statements self-referentially and draw a
picture of their own self-conceived view on the topic (McKeown & Thomas,
1988).
2.3 The Q sample
In the present study, two main themes (what Stephenson, 1950, calls effects)
emerged in the concourse: coach behaviour and effect. Within the theme coach
behaviour, three sub-themes (what Stephenson, 1950, calls levels) seemed
relevant: the coach's decision-making style, the coach's motivational
tendencies, and the coach's instructional behaviour (see Table 1). Within the
theme effect, four sub-themes emerged: the athlete's motivation, focus,
performance, and emotions (Table 1).
Table 1
The design of the statements based on coaching behaviour and effect
Levels
Coaching behaviour: a. coach's decision-making style; b. coach's motivational
tendencies; c. coach's instructional behaviour
Effect: four sub-themes (d-g): the athlete's motivation, focus, performance,
and emotions
Table 2
The combination of levels in the design

Coaching behaviour   a   a   a   a   b   b   b   b   c   c   c   c
Effect               d   e   f   g   d   e   f   g   d   e   f   g
Statement No         1   2   3   4   5   6   7   8   9  10  11  12
                    13  14  15  16  17  18  19  20  21  22  23  24
                    25  26  27  28  29  30  31  32  33  34  35  36
The authors decided to use the three statements that most clearly represented
the viewpoints from the concourse to represent each combination of cells. The
final Q sample thus consisted of 36 statements (3 x 12) representing the
different combinations of cells shown in Tables 1 and 2 (see Appendix). The
statements in each cell are interrelated and represent the viewpoint of that
cell, but each cell contains negative, neutral, and positive statements. This
is done to ensure reflection on the particular viewpoint represented by each
cell. As shown in Table 2, the first statement in each cell was allocated a
number from 1 to 12, the second a number from 13 to 24, and the third a number
from 25 to 36. In this way, it is more challenging for the Q sorter (the coach)
to see clearly how the system is built up (Moen & Garland, 2012).
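The numbering scheme in Table 2 is a simple arithmetic mapping, which can be expressed as a short sketch; the function name is illustrative, not from the study.

```python
def statement_cell(n):
    """Map a statement number (1-36) back to its design cell, following the
    layout in Table 2: behaviour levels a-c crossed with effect levels d-g,
    repeated three times (statements 1-12, 13-24, and 25-36)."""
    if not 1 <= n <= 36:
        raise ValueError("statement numbers run from 1 to 36")
    idx = (n - 1) % 12           # position within one pass of the 12 cells
    behaviour = "abc"[idx // 4]  # four consecutive cells share a behaviour level
    effect = "defg"[idx % 4]     # effect level cycles d, e, f, g
    return behaviour, effect

print(statement_cell(1))   # ('a', 'd')
print(statement_cell(20))  # ('b', 'g')
print(statement_cell(36))  # ('c', 'g')
```

Because a cell's three statements are 12 apart (e.g. 1, 13, 25), the sorter sees no obvious grouping, which is precisely the obfuscation the design aims for.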
The 36 statements are sorted onto a scoreboard ranging from -5 (most strongly
disagree) through 0 (neutral) to +5 (most strongly agree):

-5  -4  -3  -2  -1   0  +1  +2  +3  +4  +5
The coach is, however, free to place each statement from the Q sample anywhere
within the distribution, but the participant is forced to keep to the
distribution form in order to make all the necessary nuanced evaluations of the
36 different statements (Kvalsund, 1998). The statements placed at the extreme
ends of the scoreboard, ±5 and ±4, are normally the statements with which the
participants have a strong connection. Statements placed in the middle of the
scoreboard are normally statements with which they have a more neutral
connection (Moen & Garland, 2012).
factors as the final solution the overarching consensus of the factors reveals, since
the Varimax rotation is spreading the consensus across the rotated factors, which
causes them to be highly correlated.
3. Results
The two factors discovered in this study are the two categories of beliefs,
related to the research question, held among the coaches who participated
(Brown, 2002). Thus, each emerging factor is created and influenced by the
coaches who load on that particular factor. Q methodology uses an estimate
developed by Brown (1980) to decide how high a factor loading needs to be in
order to contribute to a factor (Pett, Lackey, & Sullivan, 2003, p. 208). A
minimum factor loading of .41 was estimated in this study to decide whether a Q
sort (a coach's individual scoreboard) contributed to a factor (Brown, 1980;
Kvalsund, 1998). The factor matrix in Table 3 shows that factor A has 16 pure
cases (sorts that load on only one factor) and 21 loadings when mixed cases are
included. Factor B has 2 pure cases and 7 cases when mixed cases are included.
Table 3
The Matrix of Rotated Factors and their Loadings
17 0.64X -0.22
18 0.50X 0.23
19 0.65X -0.39
20 0.41X -0.50X
21 0.53X -0.61X
22 0.64X 0.02
23 0.42X -0.02
Pure cases 16 2
Mixed cases 5 5
% variance explained 38 10
Note: X= significant factor loading. Factor loadings with bold faces are pure
cases loading on a factor, and loadings with italic faces are mixed cases loading on
more than one factor.
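The pure/mixed classification can be reproduced with a small sketch, here applied to the visible rows (17-23) of Table 3 and using the .41 cut-off reported in the text; the function name is illustrative, not from the study.

```python
def classify_sorts(loadings, threshold=0.41):
    """Split Q sorts into pure and mixed cases: a loading is significant when
    its absolute value meets the threshold; a sort is 'pure' if it loads
    significantly on exactly one factor, 'mixed' if on more than one."""
    pure, mixed = [], []
    for sort_id, row in loadings.items():
        n_significant = sum(abs(x) >= threshold for x in row)
        if n_significant == 1:
            pure.append(sort_id)
        elif n_significant > 1:
            mixed.append(sort_id)
    return pure, mixed

# Rows 17-23 of Table 3: (factor A loading, factor B loading) per sort.
rows = {17: (0.64, -0.22), 18: (0.50, 0.23), 19: (0.65, -0.39),
        20: (0.41, -0.50), 21: (0.53, -0.61), 22: (0.64, 0.02),
        23: (0.42, -0.02)}
pure, mixed = classify_sorts(rows)
print(pure)   # [17, 18, 19, 22, 23]
print(mixed)  # [20, 21]
```

This matches the table's markings: sorts 20 and 21 carry an X on both factors, while the remaining five load significantly on factor A only.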
As seen in Table 3, all sorts that load significantly on factor A are positive,
while only one of the sorts that load on factor B is positive. Six of the
significant loadings on factor B are negative. The analysis found a significant
negative correlation between factors A and B (-.65).
The statements at both extreme ends of the scoreboard, ±5 and ±4, are the
statements that involve the most strongly held beliefs among the coaches.
Therefore, our analysis in this study focuses on the statements at both extreme
ends of the scoreboard (Brown, 1980). Based on the characteristic,
distinguishing, and consensus statements, we labelled factor A Involvement
leadership and factor B Servant leadership.
Table 4
Distinguished Statements Loading on Factor A; Involvement leadership
allow extraordinary freedom for followers to exercise their own abilities and
place a much higher degree of trust in them than would be the case in any
leadership style requiring the leader to be directive. The coaches loading on
factor B therefore seem to display elements of servant leadership.
Table 5
Distinguished Statements Loading on Factor B
31 I’m calm and steady regardless of a close relationship with my coach or not. -4
28 I’m curious regardless of involvement or not from my coach. -4
10 Clear instructions regarding what I am supposed to do develop my -5
performances.
Note. Included mixed cases, 20 cases loaded on factor B.
4. Discussion
The 23 coaches who participated in this investigation were instructed to sort
36 statements expressing different views on coaching behaviours, ranking the
statements on a scoreboard from +5 to -5 according to what they believe is
expected behaviour in their roles as coaches in sport. Based on their
experience, they were asked to reflect on the content of the statements and
prioritize them in accordance with their own personal view. The results show
that 21 of the 23 coaches (when mixed sorts are included) loaded significantly
positively on factor A: Involvement leadership (Table 3). This factor accounts
for 38% of the variance. Seven coaches (when mixed sorts are included) loaded
on factor B: Servant leadership (Table 3). However, only one coach loaded
significantly positively on factor B; the rest of the loadings were negative.
After analysing the two factors, it is clear that they represent distinct
viewpoints that clearly separate them from each other. The negative correlation
between the factors confirms this, as do the scores on each statement
representing the two factors (Appendix). In the discussion below, the two
factors will therefore be treated according to their typical individual
viewpoints.
meet the quality that is needed to enhance the athlete's performance level
(Ericsson, 2009). Importantly, the exercise needed to improve an athlete's
performance level is not found to be playful enjoyment (Ericsson et al., 1993).
Developing an athlete's level of performance is an effortful endeavour that
takes engagement, curiosity, and inspiration. Interestingly, the coaches
loading on factor A in this study believe that coaching behaviour promoting
involvement has an effect on these emotions.
The belief in the involvement of athletes during the training process can be
considered a basic value and a prerequisite for empowering the athlete to
become more independent and take ownership of the learning process. According
to Jowett (2007), an interdependent relationship between coach and athlete is
described in terms of closeness, commitment, and complementarity. A belief in
athlete involvement in the training process, together with an appreciation of
the necessity of social support and feedback in the learning process, seems to
be an important precondition for promoting a positive and healthy coach-athlete
relationship. Athlete involvement is also a necessary precondition for
establishing a more democratic leadership.
One can ask whether the emphasis on athlete involvement and social feedback among
these coaches on the one side, together with the absence of determined behaviour
such as criticism and instruction on the other, is too friendly in nature for
coaches who are working to improve a junior athlete's level of performance. A
recent study found that junior athletes expect a paradoxical mixture of humility
(involvement, positive feedback, a personal relationship and social support) and
determined behaviour (criticism and instructions) from their coaches (Moen &
Sandstad, 2013).
athlete must take the initiative and the responsibility for choosing what to learn
in unmediated learning situations. These concepts may also be applicable
to athlete learning: coaches loading on factor A seem to believe
strongly that learning for these young athletes is a mediated process, while coaches
loading on factor B believe it is an unmediated process.
4.4 Limitations
Q studies are not designed to generalize results to larger populations, to
determine causal relationships between variables, or to estimate prevalence
(Øverland, Thorsen, & Størksen, 2012). Generally, Q methodology explores
subjective views, and this was the intent in studying coaches' perceptions of
Acknowledgments
The authors would like to thank the coaches for their cooperation, enthusiasm,
and most of all, their participation in this study.
References
Abraham, A., Collins, D., & Martindale, R. (2006). The coaching schematic: Validation
through expert coach consensus. Journal of Sport Sciences 24, 549-564.
Allgood, E., & Svennungsen, H. O. (2008). Toward an articulation of trauma using
creative arts and Q methodology. Journal of Human Subjectivity, 6, 5-24.
Amorose, A. J., & Anderson-Butcher, D. (2007). Autonomy-supportive coaching and
self-determined motivation in high school and college athletes: a test of self-
determination theory. Psychology of Sport and Exercise, 8, 654-670.
Amorose, A. J., & Horn, T. S. (2000). Intrinsic motivation: Relationships with collegiate
athletes' gender, scholarship status and perceptions of their coaches' behavior.
Journal of Sport & Exercise Psychology, 22, 63-84.
Bass, B. M. (1985). Leadership and performance beyond expectations. New York: Free Press.
Blom, L. C., Watson II, J. C., & Spadaro, N. (2010). The impact of a coaching intervention
on the coach-athlete dyad and athlete sport experience. Athletic Insight- The Online
Journal of Sport Psychology, 12. Retrieved from http://www.athleticinsight.com/
Vol12Iss3/Feature.htm.
Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science.
New Haven, CT: Yale University Press.
Brown, S. R. (1996). Q methodology and qualitative research. Qualitative Health Research, 6,
561-567.
Brown, S. R. (2002). Subjective behaviour analysis. Operant Subjectivity 25, 145-163.
Cassidy, T., Jones, R., & Potrac, P. (2009). Understanding sports coaching: The social, cultural
and pedagogical foundations of coaching practice (2nd ed.). London: Routledge.
Charbonneau, D., Barling, J., & Kelloway, E. K. (2001). Transformational leadership and
sports performance: The mediating role of intrinsic motivation. Journal of Applied
Social Psychology, 1, 1521- 1534.
Chelladurai, P. (1989). Manual for the leadership scale for sports. Unpublished manuscript.
The University of Western Ontario.
Chelladurai, P. (1990). Leadership in sports: A review. International Journal of Sport
Psychology, 21, 328-354.
Chelladurai, P. (2007). Leadership in sports. In G. Tenenbaum., & R. C. Eklund (Eds.), The
handbook of sport psychology (3rd edition, pp. 113-135). New York: Wiley.
Chelladurai, P., Imamura, H., Yamaguchi, Y., Oimnuma, Y., & Miyauchi, T. (1988). Sport
leadership in a cross-national setting: the case of Japanese and Canadian university
athletes. Journal of Sport & Exercise Psychology, 10, 374-389.
Côté, J., & Gilbert, W. (2009). An integrative definition of coaching effectiveness and
expertise. International Journal of Sport Science and Coaching, 4, 307-323.
David, D., Lynn, S., & Ellis. A. (2010). Rational and irrational beliefs. Implications for research,
theory, and practice. NY: Oxford University Press.
Deci, E. L., & Ryan, R. M. (2002). Handbook of self-determination research. Rochester, New
York: University of Rochester Press.
Ericsson, K. A. (2009). Development of professional expertise: Toward measurement of expert
performance and design of optimal learning environments. Cambridge, UK: Cambridge
University Press.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in
the acquisition of expert performance. Psychological Review, 100, 363–406.
Haney, J. J., Lumpe, A. T., Czerniak, C. M., & Egan, V. (2002). From beliefs to actions: The
beliefs and actions of teachers implementing change. Journal of Science Teacher
Education, 13, 171-187.
Heinemann, K. (1980). Introduction to the sociology of sport. Schorndorf: Hofmann Verlag.
Horn, T. S. (2008). Coaching effectiveness in the sports domain. In T. S. Horn (Ed.)
Advances in sport psychology (pp. 309-354). Champaign, IL: Human Kinetics.
Horne, T., & Carron A. V. (1985) Compatibility to coach-athlete relationships. Journal of
Sport Psychology 7, 137-149.
Jones, R. L. (2006). The sports coach as educator: re-conceptualising sports coaching. Routledge:
Taylor & Francis Group, London.
Jowett, S. (2005). The coach-athlete partnership. The Psychologist, 18, 412-415.
Jowett, S. (2007). Interdependence analysis and 3+1 Cs in the coach-athlete relationship.
In S. Jowett, & D. Lavallee (Eds.), Social psychology in sport (pp.15-28). Champaign,
IL: Human Kinetics.
Jowett, S., & Cockerill, I. M. (2002) Incompatibility in the coach–athlete relationship. In I.
M. Cockerill (Ed.), Solutions in sport psychology. (pp. 16–31) London: Thompson
Learning.
Jowett, S., & Cockerill, I. M. (2003). Olympic medalists' perspective of the athlete-coach
relationship. Psychology of Sport and Exercise, 4, 313-331.
Jowett, S., & Meek, G. A (2000). The coach–athlete relationship in married couples: An
exploratory content analysis. The Sport Psychologist, 14, 157–175.
Jowett, S., & Poczwardowski, A. (2007). Understanding the Coach-Athlete Relationship.
In S. Jowett, & D. Lavallee (Eds.), Social Psychology in Sport (pp. 3-14). Champaign, IL:
Human Kinetics.
Kozulin, A. (2003). Psychological tools and mediated learning. In A. Kozulin, B. Gindis, V.
Ageyev & S. Miller (Eds.), Vygotsky's educational theory in cultural context (pp. 15- 38).
Cambridge: Cambridge University Press.
Øverland, K., Thorsen, A. A., & Størksen, I. (2012). The beliefs of daycare staff and
teachers regarding children of divorce: A Q methodological study. Teaching and
Teacher education, 28, 312-323.
Appendix A
Statements (scores on Factor A and Factor B)
1. My motivation for training increases when my coach involves me. 5 -3
2. If I'm involved in the process concerning my training I perform better. 4 -3
3. My coach does not have to be open for questions. -1 2
4. I become curious and interested if my coach involves me in matters concerning my training. 4 -2
5. My motivation increases when my coach is concerned about my personal well-being. 3 -2
6. If I'm supposed to achieve good performances my coach needs to focus on my personal welfare. -2 3
7. My situation becomes less stressful when my coach contributes in personal affairs. 1 0
8. A personal and close relationship with my coach makes me enthusiastic concerning my training. 2 0
9. My motivation increases when I'm told exactly what to do. -1 -1
10. Clear instructions regarding what I am supposed to do develop my performances. 3 -5
11. I keep my focus if the coach intervenes in training and explains what is right and wrong. 0 -3
12. I become curious if my coach gives me clear instructions about what I need to do. 1 0
13. My motivation increases when my coach takes decisions that concern me. -3 0
14. My performances are not good when my coach denies complying with my opinions. 1 -1
15. My coach needs to consult me if I'm supposed to have an effective focus. 2 2
16. I become uncommitted if my coach includes me in decisions regarding my sport. -5 3
17. My motivation increases when I receive positive feedback. 3 -2
18. Neither feedback nor social support is crucial for my performances in sport. -4 1
19. A close and personal relationship with my coach makes me stressed. -3 2
20. My curiosity is best stimulated when the relationship with my coach is not too close and personal. -1 1
21. I lose my engagement when I'm observed by my coach and receive no feedback. -1 -2
22. I perform at my best when I have to clarify my own task for training. 0 1
23. I become insecure if a coach does not tell me exactly what to do. 2 -1
24. I lose my curiosity when my coach gives me clear instructions. -3 5
25. My motivation decreases when my coach needs my approval in important matters concerning my training. 0 3
26. I'm not able to perform if my coach often asks me for approval in important matters. -2 0
27. I become stressed if my coach involves me in important matters regarding my training. -4 4
28. I'm curious regardless of involvement or not from my coach. 0 -4
29. My motivation increases when my coach does not focus on personal issues. 0 1
30. In order to develop my performances I also need critical feedback from my coach. 2 -1
31. I'm calm and steady regardless of a close relationship with my coach or not. 1 -4
32. Whether my coach is concerned about personal issues or not does not affect my curiosity. -2 0
33. If I'm told exactly what to do I lose my motivation. -2 4
34. I perform at my best when my coach just observes what I do during training. -1 1
35. I lose my focus when there is too much instruction. 1 -1
36. It is easier to be curious when the coach is more in the background. 0 2
* Translated from Norwegian to English by the authors.
Introduction
Two counties in Mid-Sweden, Jämtland and Västernorrland, have identified
regional educational problems. These counties have lower educational levels
and school achievements compared to other parts of Sweden (Skolverket, 2014)
and a relatively large out-migration of well-educated
individuals from the region (Statistiska centralbyrån, 2014). The national school
results have, in some parts of the region, steadily declined for nine years; other
regional areas also need to improve (Skolverket, 2014). An integrative and
Previous Research
The past decade has witnessed considerable expansion and development
within the field of regional educational research (From & Olofsson, 2014). Good
education systems, ranging from preschools to universities, are vital for
development. This mutual relationship has been described with the emphasis that
initiatives in the classroom or department are influenced by the surrounding
context of the school, the district, and the nation (Hernandez & Goodson, 2010;
Veugelers & Zijlstra, 2010). It is important not only from a regional perspective,
but also in a national and international context.
education and regional development (From & Olofsson, 2014). A further aspect
is the concept of regional management, which seems to play an important role in
developing regional headquarters into dynamic competence centers (Ambos &
Schlegelmilch, 2010).
By virtue of the foregoing, the purpose of this study was, from an international
perspective, to identify and classify the published research on regional
educational development and school improvements. The objectives were to (a)
identify patterns and trends in the research, (b) describe and compare the
published findings, and (c) point to a future research agenda.
Theoretical Framework
The study takes its point of departure in regional educational development and
school improvement theory. Regional educational development is a useful analytical
tool, as it can mirror the relation between regional development and education.
There are correlations between level of education and regional development. To
succeed in regional development, a highly skilled workforce is required, for which
education is a cornerstone (e.g. Florida, Mellander & Stolarick, 2010; Tomaney &
Wray, 2011). Universities have been perceived as being of the utmost importance for
regional development (Westlund, 2004), but regional development is also
social change and transformation (Berglund & Johansson, 2007). Going from
stability and recognition to the new and unfamiliar is described in the following
words: "It is the combination of the new and the traditional providing
innovative opportunities for regional development and economic growth, which
requires interaction and communication" (From & Olofsson, 2013, p. 35).
To analyze the contents of the various texts, a descriptive content analysis was
performed. The contents of the results of the studies were examined
methodically and progressively while interpreting the texts to find prevalent
phenomena. Distinctive categories were identified and then narrowed down to
sub-categories. Traditional content analyses can be divided into three steps:
selection of focus texts, encoding of the texts, and interpretation of the results
(Ahuvia, 2008). The situational context was taken into account, so that
"maximum variation sampling" was achieved (Franzosi, 2008). Maintaining
scientific integrity involves great attention to validity during the phase of the
integrative review, and not defining the operational definitions too narrowly or
too broadly. The reviewer must balance the definitions and review methods
constantly during the research process. The integrative literature review has
many benefits for the scholarly reviewer, such as identifying gaps in current
research and the need for future research, bridging related areas of work, and
bringing focus to central issues in an area (Cooper, 1998). A thematic integrative
study design was chosen to obtain a holistic understanding of the subject. This
Distribution
Figure 1, above, shows the 48 sources that were included in the review, and
Table 1, below, specifies the themes. There were six themes: Political aspects,
Social Capital & Networks, Higher Education, General Education, Values, and
Languages & Cultures, plus one named "Others." Each source was categorized
under a single theme.
Table 1. Distribution of themes among the sources.

Themes (N = 48)               n    Percent
Politics                      14   29.2%
Networks & Social Capital      8   16.7%
Higher Education               7   14.6%
General Education              6   12.5%
Values                         5   10.4%
Languages & Cultures           4    8.3%
Others                         4    8.3%
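The percentages in Table 1 can be recomputed directly from the raw counts; the theme names and counts below are taken from the table itself (N = 48), and the rounded shares sum to 100.0%:

```python
# Counts per theme as reported in Table 1 (48 sources in total).
counts = {
    "Politics": 14,
    "Networks & Social Capital": 8,
    "Higher Education": 7,
    "General Education": 6,
    "Values": 5,
    "Languages & Cultures": 4,
    "Others": 4,
}

total = sum(counts.values())  # 48
for theme, n in counts.items():
    share = round(100 * n / total, 1)   # one decimal, as in the table
    print(f"{theme}: {n}/{total} = {share}%")
```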
Two dominant themes are political aspects and social capital and networks;
together they cover almost half of the content. The theme "Others" covers
technical solutions, women's roles and dichotomization, major universities, and
small colleges.
The majority of the sources were published in 2001 or later (see Table 2). In
other words, interest in the subject has grown rapidly in the last fifteen years.
The geographic distribution shows that almost half of all the sources were
published in European countries, and approximately one-fourth come from the
US/Canada (see Table 3). This means that most of the research has been conducted
in Western countries.
The qualitative approaches vary widely, from case studies to focus group
interviews and observations. A small percentage of the sources use quantitative
approaches or mixed methods. When it comes to data collection strategies, most of
the sources use one strategy, but some use two to four. The strategies are many
and varied, from video-taping to self-assessment reports to content analyses. We
have not been able to classify them, as they are too sprawling; there are also
problems with classifying the levels of analysis.
Thematic Analysis
In this SLR, it is possible to observe a lack of empirical research involving the
combination of regional educational development and school improvement,
especially for mainstream schools (cf. Boström, 2015). Research is published, but
it focuses mostly on one aspect (school improvement) or the other (regional
educational development), and not so much on the combination. Political
aspects seem to be the most dominant theme, described in almost one-third of
the contents. The content covers policy moves at different levels, political
control, labor market reforms, decentralization, tax reductions to compensate
rural areas, and ideological debates about the central cities versus the regions.
However, there are no concrete descriptions or visions of how the different
proposals could have repercussions on school improvement. The quote below
illustrates the general approach:
Education today is characterized by two opposite tendencies: a centripetal
tendency … a centrifugal tendency … these tendencies map out the main vectors
of the development of innovative activity in regional systems of education.
(Larina, 2006, p. 31)
Another focus is Social Capital & Networks, preferably in combination with the
development of a "creative class" and a diversity that can develop business
ideas and promote entrepreneurial learning. Florida et al. (2010) describe the
importance of interaction between people trained in creative professions, so
that they can create new products/services and generate jobs, new technology,
and tolerance for the regions: "Canadian regional development is
shaped by the 3Ts of technology, talent, and tolerance. Talent in the form of
human capital and the creative class is strongly associated with regional
income" (p. 31).
los, 1996) and opportunities available through curricula (Rahimia et al., 2010).
Values are about the mental images that prevail, and about who prevails.
Dichotomization of urban and rural standards is evident above all in Swedish
research (Svensson, 2006; 2008; 2014). Cultural and linguistic aspects in certain
regions concern language-related changes that can enhance or hinder regional
development (Daoud, 2010), and attitudes as well as behaviors that may influence
or be influenced by culture (Karahanna, Evaristo, & Srite, 2005), such as settings
for the native language in school (Huguet & Llurda, 2001).
Conclusions
In summary, what appears to be common worldwide are the concerns pertaining
to urbanization and the depopulation of rural areas, the need to highlight
regional development factors, and a belief in education. One result of this study
is that there are surprisingly few international studies on the relationship
between regional development and school improvement. Approximately 70% of all
the sources were published in Western countries, in Europe and the USA/Canada.
This does not mean that the problems are greater there than in the rest of the
world; perhaps, instead, there are more research resources in these countries
compared to the rest of the world. The literature review shows that the study
object has been observed more and more since the turn of the millennium, which
may indicate that it will receive increasing attention henceforth. More than
half of all sources have a qualitative approach, which means that the texts are
descriptive. What is needed are long-term, consistent empirical studies,
preferably intervention studies, with a mixed-methods design.
Different regions, countries, and continents show various problems within this
focus. For example, regional educational development does not mean the same
thing in West Africa compared to northern Canada. Countries and continents
have different economic, structural, and cultural conditions and orientations.
Education in this context typically refers to higher education, while factors that
involve mainstream schools are not illustrated to any greater extent.
The literature review shows an internationally sprawling view of the content
and object of this study. It involves very different aspects, such as:
Political issues, such as policy (McDade & Spring, 2005), ideological (cf.
Larina, 2006; Quiang, 2011) or economic governance, the effects of
decentralization (Toi, 2010) and its repercussions on schools and student
learning, and ideological solutions;
Social capital and network structures (Pachura, 2010), such as knowledge-
based clusters as a guideline for regional development policies aimed at
stimulating regional industrial competitiveness and innovativeness,
with a focus on the importance of creating educational achievement and
creative occupations (Florida et al., 2010), emphasizing that creative
professionals are strongly related to regional income;
Educational focus, mainly the connections between regions and
universities;
Cultural and linguistic aspects and values within society and school
systems. Particularly interesting for the project V-brus is the dichotomization
between urban and rural values (Svensson, 2006; 2010; 2013).
References
Ahuvia, A. (2008). Traditional, interpretative and reception based content analyses:
Improving the ability of content analysis to address issues of pragmatic and
theoretical concern. In R. Franzosi (ed.), Content Analysis: Vol. 1. (pp. 183–202).
London: SAGE.
Ambos, B., & Schlegelmilch, S. (2010). The new role of regional management. Vienna:
Palgrave Macmillan.
Avalos, B. (1996). Education for global/regional competitiveness: Chilean policies and
reform in secondary education. Compare, 26(2), 217–32.
Berglund, K., & Johansson, A. (2007). Entrepreneurship, discourses and conscientization
in processes of regional development. Entrepreneurship & Regional Development:
An International Journal, 19(6), 499–525. doi:10.1080/08985620701671833
Björkman, C. (2008). Internal capacities for school improvement: principals’ views in Swedish
secondary schools (Doctoral dissertation). Umeå University, Umeå.
Boström, L. (2015). Regional educational development research in Sweden: A literature review
of research over half a century (Unpublished paper, in progress). Mid Sweden
University
Cheong, C., Wing, N., & Yen, C. (2011). Development of a regional education hub: the
case of Hong Kong. International Journal of Educational Management, 25(5), 474 –
493. doi:10.1108/09513541111146378
Cooper, H. (1998). Synthesizing research: A guide for literature reviews. Thousand Oaks, CA:
SAGE.
Crescenzi, R. (2005). Innovation and regional growth in enlarged Europe: The role of
local innovative capabilities, peripherality, and education. Growth and Change,
36(4), 471–507.
Dalin, P. (1994). Skolutveckling. teori. Bok 2. [School development theory. Book 2].
Stockholm: Liber Utbildning AB.
Daoud, M. (2006) The Language Situation in Tunisia. Current Issues in Language Planning,
2(1), 1–52. doi:10.1080/14664200108668018
Doloreux, D., & Shearnur, R. (2006). Regional development in sparsely populated areas:
The case of Quebec‘s missing maritime cluster. Canadian Journal of Regional
Science, 29(2), 195–220.
Evans, C., & Waring, M. (2012). Applications of styles in educational instruction and
assessment. In L. F. Zhang, R. J. Sternberg, & S. Rayner (Eds.), The handbook of
intellectual styles (pp. 297–330). New York: Springer.
Florida, R., Mellander, C., & Stolarick, K. (2010). Talent, technology and tolerance in
Canadian regional development. The Canadian Geographer, 54(3), 277–304.
Franzosi, R. (2008). Content analysis: Objective, systematic, and quantitative description
of content. In R. Franzosi (Ed.), Content analysis, Vol. 1 (pp. xxi–xli). London:
SAGE.
From, J., & Olofsson, A. (2013). Kunskapsekonomi och regional utveckling. [Knowledge
economy and regional development]. In Y. Fiedrichs, M. Gawell, M. & J.
Wincent (Eds.), Samhällsentreprenörskap för lokal utveckling (pp. 17–42). Sundsvall:
Mittuniversitetet.
Goodlad, J. (1994). The national network for educational renewal. Phi Delta Kappan,
75(8), 35–42.
Hargreaves, A., & Fink, D. (2008). Distributed leadership: democracy or delivery?
Journal of Educational Administration, 46(2), 229–240.
doi:10.1108/09578230810863280
Hernandez, F., & Goodson, I. F. (2010). Social geographies of educational changes:
Drawing a map for curious and dissatisfied travellers. In F. Hernandez & I.F.
Goodson (Eds.), Social and Geographies of Educational Changes (pp. xi–xxi).
Dordrecht: Kluwer Academic Publisher.
Höög, J., & Johansson, O. (2014). Framgångsrika skolor: mer om struktur, kultur, ledarskap.
[Successful schools: more about the structure, culture, leadership]. Lund:
Studentlitteratur.
Huguet, A., & Llurda, E. (2001). Language attitudes of school children in two
Catalan/Spanish bilingual communities. International Journal of Bilingual
Education and Bilingualism, 4(4), 267–282. doi:10.1080/13670050108667732
Karahanna, E., Evaristo, R., & Srite, M. (2005). Levels of culture and individual behavior:
An integrative perspective. MIS Quarterly, 30(3), 679–704.
Larina, V. P. (2006). The innovative activity of schools in a regional system of education.
Russian Education and Society, 48(3), 31–42.
Lemar, S. (2006). Developing collaboration between schools and labour market in
vocational education training—A fruitful or futile ambition? conference
presentation.
Luo, W., & Chen, S. (2010). Fiscal decentralization and public education provision in
China. Canadian Social Science. 6(4), 28–41.
Marmolejo, F., & Puukka, J. (2006). Supporting the contribution of higher education to regional
development: Lessons learned from an OECD review of 14 regions in 12 countries.
Paper presented at the UNESCO Forum on Higher Education, [Location].
McDade, B., & Spring, A. (2005). The ‗new generation of African entrepreneurs‘:
Networking to change the climate for business and private sector-led
development. Entrepreneurship & Regional Development, 17, 17–42.
McFarlane, C. (2010). The comparative city: Knowledge, learning, urbanism. International
Journal of Urban and Regional Research, 34(4), 725–42
doi:10.1111/j.1468-2427.2010.00917.x
Mittuniversitetet (2012). Projektplan Världens bästa regionala utbildningssystem.
Nilsson, B., & Lundgren, A. S. (2015). Logics of rurality: Political rhetoric about the
Swedish North. Journal of Rural Studies, 37, 85–95.
Pachura, P. (2010). Regional cohesion: Effectiveness of network structures. Berlin: Springer.
Persson, L. O., Sätre Åhlander, A. M., & Westlund, H. (2003). Rural communities facing
challenges and conflicts. In L. O. Persson, A. M. Sätre Åhlander, & H. Westlund
(Eds.), Local responses to global changes (pp. 1–9). Stockholm: National Institute for
Working Life.
Polit, D., & Beck, C. (2008). Nursing research: Generating and assessing. Philadelphia:
Lippincott, Williams & Wilkins.
Rahimia, A., Ahmad, S., Borujenib, M., Esfahanic, A., & Liaghatdard, H. (2010).
Curriculum mapping: a strategy for effective participation of faculty members in
curriculum development. Procedia - Social and Behavioral Sciences 9, 2069–2073.
Santema, M. (1997). Regional Development and the Tasks of vocational Education and
Training Professionals. Journal of European Industrial, Training. 21(6), 229–237.
Shi, L. & Wei-qing, C. (2010). Fiscal decentralization and public education. Canadian
Social Science, 6(4), 28–41.
Skolverket. (2014). SALSA/SIRIS. Retrieved 2014-02-02 from
http://www.skolverket.se/statistik-och-utvardering/statistik-i-databaser
Statistiska Centralbyrån (2014). Utflyttande högskoleutbildade från Jämtlands och
Västernorrlands län. [Out-migration of college-educated in Jämtlands and
Västernorrlands counties compared to the whole of Sweden]. Retrieved from
http://www.scb.se/sv_/Hitta-statistik/, 2014-02-02
Svensson, L. (2006). Vinna och försvinna? Drivkrafter bakom ungdomars utflyttning från
mindre orter (Doctoral dissertation). Linköpings universitet, [Location].
Svensson, L. (2013). We don‘t want you to join us if you don‘t leave us! Economic and
Social Aspects of Peripheral Regions Special Issue, 4(99), 46–52.
Svensson, L. (2014). Nu styr vi upp stan. In M. Vallström (Ed.), När verkligheten inte
stämmer med kartan. Lokala förutsättningar för hållbar utveckling. Lund: Nordic
Academic Press.
Toi, A. (2010). An empirical study of the effects of decentralization in
Indonesian junior secondary education. Educational Research for Policy and
Practice, 52(3), 107–125.
Tomaney, J., & Wray, F. (2011). The university and the region: An Australian
perspective. International Journal of Urban and Regional Research, 35(5), 913–31.
doi:10.1111/j.1468-2427.2011.01020.x
van Velzen, W., Miles, M. B., Ekholm, M., Hameyer, U., & Robin, D. (1985). Making
school improvement work: A conceptual guide to practice. Leuven: Acco.
Veugelers, W., & Zijlstra, H. (2010). Networks of schools and constructing citizenship in
secondary education. In F. Hernandez & I. F. Goodson (Eds.), Social Geographies
of Educational Changes (pp. 65–78). Dordrecht: Kluwer Academic Publishers.
Westlund, H. (2004). Regionala effekter av högre utbildning, högskolor och universitet: en
kunskapsöversikt. [Regional effects of higher education, colleges and universities:
a systematic review]. Östersund: Institutet för tillväxtpolitiska studier;
A2004:002
Whittemore, R., & Knafl, K. (2005). The integrative review: updated methodology.
Journal of Advanced Nursing, 52(5), 546–553.
Introduction
Assessment of higher education learning has been considered increasingly
important (Clouder et al., 2012; Kushimoto, 2010; Sambell et al., 2012). Yet,
measuring how much students learn has been a challenge for stakeholders
involved in higher education assessment (Hardison and Vilamovska, 2009).
While the grade point average (GPA) has traditionally been used for measuring
students' academic performance at university, it is not considered a reliable
indicator of learning because grading varies across institutions, instructors,
and other factors (Shavelson, 2009). For instance, an "A" in one institution (or
from one instructor) may not have the same academic value as an "A" in another.
Changes in GPA from one year to another are also difficult to interpret, as they
do not measure the same components. Even if a student's GPA was "A" in the first
year and "B" in the second year, it is infeasible to conclude that his or her
learning deteriorated, because the level of academic content may differ between
the first year and the second year. In this context, various kinds of assessment
tools have been created to measure university students' learning. This paper
will review these assessment tools and then quantitatively and qualitatively
analyze how much students improved their generic skills at a Japanese university
during the first two years of higher education.
Literature Review
In the late 1980s, the College Basic Academic Subjects Examination (College
BASE) was introduced. Apart from the subject content areas (i.e., English,
mathematics, science, social studies), the College BASE assesses generic skills:
interpretive reasoning, strategic reasoning, and adaptive reasoning. The College
BASE has three forms: 1) the long form with content areas, 2) the short form with
English and mathematics, and 3) an institutional-matrix form. As is the case with
COMP, however, a study revealed that the test assesses only a fraction of generic
skills (Pike, 2011).
It is in this context that CLA has emerged as one of the most popular assessment
tools of generic skills in higher education in the United States (Klein et al., 2007).
Other parts of the world now recognize CLA because the Organization for
Economic Cooperation and Development (OECD) has been developing the
Assessment of Higher Education Learning Outcomes (AHELO) based on CLA
(Douglas et al., 2012). CLA is an open-ended, value-added performance
assessment tool that measures generic skills such as critical thinking, analytical
reasoning, problem solving, and written communication through writing tasks,
make-an-argument tasks, critique-an-argument tasks, and realistic performance
tasks (Council for Aid to Education, 2013).
In the United States, Arum and Roksa (2011) examined the CLA scores of 2,322
students at 24 universities over two years between the beginning of their first
year in 2005 and the end of their second year in 2007. The study indicates that
undergraduate students on average improved generic skills by 7%. While there
are no universal standards for learning in higher education, they argue,
students’ gains in academic performance were low. They concluded that the
poor result was attributed to the fact that the US college students on average
study only 12 hours a week.
Tsuji (2013) estimates that Japanese university students spend on average only
3.5 hours per week studying. He explains that while Japanese
university students are no less intelligent than students in other nations, their
analytical reasoning and problem-solving skills are not well developed due to
their lack of studying time. McVeigh (2002) echoes Tsuji, stating that many
Japanese university students are unable to think and write critically and
logically and describes Japanese higher education as “a nationwide educational
failure” (p.4).
Tsuji (2013) argues that there exists what he calls a “spiral” (p. 77) between
Japanese industry and higher education. Companies’ human resources personnel
believe that university students do not study but instead dedicate themselves
to part-time jobs and/or circles/clubs, yet they want students to study in
order to acquire the generic skills that lead to employability. Students, for
their part, claim that human resources personnel do not consider GPAs in job
applications, and so spend more time on part-time jobs and circles/clubs,
believing these matter more for their future employability. Instructors fear
that if they made students study hard, students would rate them poorly in
course evaluations; it is therefore safer for instructors to deliver
“whatever” lectures without sufficient preparation and to devote more time to
their research. Students then complain that instructors do not teach them well
and spend even more time on part-time jobs and circles/clubs instead of
studying.
In 2011, Tsuji (2013), as part of his NPO project, conducted surveys with 2,000
senior students in 28 departments at nine prestigious Japanese universities such
as Waseda, Keio, Hitotsubashi, and Sophia. The student participants reported
that only four out of the approximately forty courses that they took at university
helped them learn to think. A student, for example, reported that “Instructors
didn’t ask us any questions. At the end of the semester, we were only given a
one-page report for evaluation.” Another student claimed that “Professors just
read textbooks in front of us.” Tsuji’s study indicates that Japanese university
students are not in an environment where they are encouraged to study and
develop their generic and employability skills. However, how can we know how
much students learn at university in Japan? One answer is to employ learning
assessment tools such as CLA and PROG.
PROG
This study employs PROG in an attempt to measure students’ learning. PROG
examines two sets of generic skills: literacy and competency. This usage of the
terms literacy and competency can be confusing as the elements of literacy and
competency overlap. For instance, the OECD’s definition of literacy, using
tools interactively (e.g., language, technology), is one of the key
competencies in its Definition and Selection of Competencies (DeSeCo)
framework. As Matsushita (2010) puts it, these tools include
non-cognitive elements such as social and emotional elements that are part of
competency (See Figure 2).
In the PROG test, 45 minutes are allocated for the literacy section and 40 minutes
for the competency section. Literacy is composed of data collection, data analysis,
problem solving, and conceptual thinking skills. Critical thinking skills, which
are considered important generic skills, are partially integrated into the data
analysis skills. PROG’s literacy assessment also involves a few short essays to
measure written communication and other skills. These elements are similar to
what CLA examines. Competency is composed of skills in general
communication, collaboration, networking, leadership, negotiation, and stress
management as well as problem solving that is also included in literacy.
According to Kawaijuku and Riasec (2015), the literacy section tests whether
students can solve problems logically, while the competency section tests to
what extent students solve problems as young professionals do (as described
later). There are some notable differences between
CLA and PROG. CLA is composed of open-ended essays while PROG is based
on a combination of short essays and multiple-choice questions. Also, CLA is
designed to produce results at the institutional level, such as school average
scores, while PROG is designed to produce results at the individual level,
that is, a score for each test taker. PROG also provides feedback sheets after
the test with suggestions on how to improve generic skills.
The PROG score ranges from 1 to 7 for both literacy and competency. Score 4 is
the level desired to be reached by the end of the first year of university. Students
with this score are expected to be able to adequately understand and rephrase
information from documents and graphs. Score 7 is the level desired to be
reached by the time of graduation. Students with this score are expected to be
able to organize data and demonstrate information derived from the data in
academic writing and graphs. Students at this level are able to establish
arguments logically (Riasec, 2012).
One task, for example, presents the following activities for a group presentation:
a. Preparing a presentation
b. Collecting information and selecting ideas
c. Deciding a group theme
d. Analyzing information
e. Deciding the content
f. Practicing and modification
g. Reviewing the presentation
h. Deciding the roles
Students are expected to draw a flowchart that illustrates what they would do
and when they would do it (Riasec, 2012). PROG has a few short essay questions
that are similar to CLA’s make-an-argument prompt. For instance, students are
asked to read questions and answer in writing. The following is an example of
such a question.
A university student, who travelled to South Korea the other day, said that
while young Koreans did not understand Japanese, those who were close
to 80 years old whom s/he met were fluent in Japanese. Why do you think
elderly Koreans are able to speak Japanese fluently? Briefly write down the
reason(s).
While a student could write a creative story, for example, that the Koreans had
lived in Japan in their youth and learned Japanese there, students are expected
to write a short essay based on their knowledge of Japanese colonial education
in Korea between 1910 and 1945.
Arguably, there is no right or wrong answer to this question. How, then, does
PROG score competency? Kawaijuku and Riasec (2013) explain that they
administered the test to young business leaders who are currently active in
society and collected sample data. They then analyzed the patterns of this
group’s answers to each question. That is, PROG attempts to measure students’
competency by comparing their answers with those of the young professional
leaders: answers similar to the young leaders’ score higher in competency,
while dissimilar answers score lower.
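PROG’s actual scoring model is proprietary, so the following is only a hypothetical sketch of the general idea of scoring by similarity to a reference group, not PROG’s method. It assumes answers have already been coded as numeric feature vectors (e.g., presence/absence of answer patterns), and compares a student’s vector with the centroid of the professionals’ vectors; all names and the score mapping are invented for illustration.

```python
# Illustrative sketch only: PROG's real scoring model is proprietary.
# Assumption: answers are pre-coded as numeric feature vectors.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def competency_score(student_vec, professional_vecs, levels=7):
    """Map similarity to the professionals' centroid onto a 1..levels
    band score (a hypothetical mapping, not PROG's)."""
    n = len(professional_vecs)
    centroid = [sum(v[i] for v in professional_vecs) / n
                for i in range(len(student_vec))]
    sim = cosine_similarity(student_vec, centroid)
    band = 1 + round(max(0.0, sim) * (levels - 1))
    return min(levels, max(1, band))

# A student whose answer pattern closely matches the professionals'
# scores near the top of the 1-7 band.
pros = [[1, 1, 0, 1], [1, 0, 1, 1], [1, 1, 1, 0]]
print(competency_score([1, 1, 1, 1], pros))  # → 7
```

The design choice mirrored here is the one the paper describes: higher similarity to the reference group’s answer patterns yields a higher competency band.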
According to Kawaijuku and Riasec (2014), as of April 2014, more than 100,000
university students had taken PROG. The average PROG scores are 3.89 in literacy
and 3.22 in competency. Approximately 63% of the test takers were first year
students, 13.3% second year students, 19.7% third year students, and 3% fourth
year students.
As Table 1 shows, students’ literacy scores improve over the course of
university as a whole; however, there is no improvement from the second to the
third year, followed by a remarkable improvement from the third to the fourth
year. In competency, scores deteriorated slightly from the second to the third
year but improved from the third to the fourth year. Although the number of
fourth-year test takers is smaller than in other years, fourth-year students
substantially improved in both literacy and competency. It may be important to
note, however, that Japanese fourth-year students rarely attend university due
to job hunting. Their improvement in generic skills, therefore, may not be
attributable to university education, as described in the next section.
Given that this study examines how much students improved generic skills in
the first two years of NUCB education, it may be important to explain the
first-year experience (FYE) program, called the Vision Planning Seminar (VPS),
and the subsequent year seminars at this university. The purpose of VPS is to
help first-year students acquire generic skills and envision their
professional careers, based on the assumption that if they can envision their
futures at an early stage of university life, they should be able to set goals
and work toward acquiring the skills necessary to achieve those goals. As
shown in Table 1, students are explicitly expected to acquire generic skills
such as critical thinking, analytical reasoning, problem solving, and writing.
The generic skills acquired in VPS feed into the second- to fourth-year
seminars and into writing the bachelor’s thesis. NUCB has set eight learning
goals (LGs) to be achieved before graduation, and the LGs are assessed through
eight corresponding skills.
Students are expected to demonstrate the first four of these eight skills in their
bachelor’s thesis and the last four in their second to fourth year seminars. These
skills are generic skills as well. For instance, the skills to set a research topic or
apply knowledge entail critical thinking, analytical reasoning, problem solving
and writing. After all, generic skills overlap with basic research skills in many
respects.
Methodology
The current research examines the PROG scores of 45 NUCB students who took
PROG tests twice, first in April or May 2013 and second in December 2014, and
analyzes the difference between these scores. That is, this study focuses on how
much NUCB students learn in the first two years of university. One of the
limitations of this study is its small sample size, which may hinder
generalization of the results. Yet this type of value-added assessment is
still rare in Japan, and this work serves as an exploratory study. Some may
also point out that the
research period of two years is insufficient for this type of longitudinal value-
added study. As Arum and Roksa (2011) affirm, however, “most of the gains in
generic skills occur in the first two years of college…seniors do not spend much
more time studying than freshmen” (p. 36-37). Although Kawaijuku and Riasec’s
(2014) study shows that Japanese university students substantially improved
generic skills in the fourth year, the majority of Japanese university
students do not attend regular courses in their fourth year, and thus their
improvement in generic skills is more likely attributable to their own study
for the Synthetic Personality Inventory (SPI), an aptitude test used in
personnel selection, or related exams taken to seek employment, not to
university course work. Thus, the first two years of university learning are a
reasonable indicator of overall learning at university.
The Wilcoxon signed-rank test was conducted to examine whether there was a
significant difference in the medians of the PROG scores between April/May
2013 and December 2014 (p < 0.05). In selecting a statistical hypothesis test,
we first conducted the Lilliefors test, a variant of the Kolmogorov-Smirnov
test that can determine whether sample data are normally distributed, and
found that the p-value was less than 0.05 for all sample data, indicating that
the data do not appear to be normally distributed. We therefore selected the
Wilcoxon signed-rank test over the matched-sample t-test, which examines the
means of paired differences.
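The paired analysis described above can be sketched as follows. This is a minimal illustration using randomly generated scores, not the study’s actual data; the Lilliefors normality check used to justify the test choice is available separately as `statsmodels.stats.diagnostic.lilliefors`.

```python
# Minimal sketch of the paired median comparison, assuming
# hypothetical (randomly generated) PROG scores, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 45
first = rng.integers(1, 8, size=n).astype(float)             # scores 1..7
second = np.clip(first + rng.integers(-1, 3, size=n), 1, 7)  # paired retest

# Wilcoxon signed-rank test: compares paired samples without assuming
# normality; zero differences are discarded by the default 'wilcox'
# zero_method, as in the classic procedure.
stat, p = stats.wilcoxon(first, second)
print(f"W = {stat:.1f}, p = {p:.4f}")
```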
Semi-structured interviews were also conducted with the students about how
many hours they spend studying and working per week and how interesting they
have found courses at NUCB. The result of the current research might also be
supplemented in the future by another longitudinal study of the same students
at the end of their fourth year.
Results
Table 3. Wilcoxon signed-rank test: Descriptive statistics

        N     Mean    SD      Min.    Max.
L1      45    3.09    1.427   1       6
C1      45    2.91    1.379   1       6
L2      45    3.16    2.225   1       7
C2      45    3.20    1.375   1       6

*L1: Literacy score in the first year; L2: Literacy score in the second year
*C1: Competency score in the first year; C2: Competency score in the second year
While students’ PROG scores improved by 0.07 (2.27%) in literacy and by 0.29
(9.97%) in competency, the difference in medians between L1 and L2 was not
statistically significant at any conventional critical value
(p-value = 0.394). The difference in medians between C1 and C2 was not
statistically significant at p < 0.05 but was significant at p < 0.10
(p-value = 0.057). Given that Arum and Roksa (2011) consider a 7% improvement
in CLA over two years “not much” in the US context, the PROG results of
Japanese students can also be interpreted as “not much.”
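As a quick arithmetic check, the percentage gains quoted above are the mean differences from Table 3 divided by the first-year means:

```python
# Verifying the quoted percentage gains from the Table 3 means.
l1, l2 = 3.09, 3.16   # mean literacy scores, years 1 and 2
c1, c2 = 2.91, 3.20   # mean competency scores, years 1 and 2

lit_gain = (l2 - l1) / l1 * 100
comp_gain = (c2 - c1) / c1 * 100
print(f"literacy: +{lit_gain:.2f}%, competency: +{comp_gain:.2f}%")
# prints: literacy: +2.27%, competency: +9.97%
```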
In literacy, 17 students scored worse in the second test than in the first,
while 17 students improved their scores. In competency, 20 students improved
their scores while 12 worsened them. Overall, NUCB students improved their
generic skills little over the first two years of higher education, though the
results are not statistically significant and larger-scale studies are
required to generalize them.
With regard to the interview results, while students spent on average 13.5
hours per week working, they spent only 40 minutes per week studying. The
studying hours of NUCB students fall below the average for Japanese university
students, 3.5 hours per week, as indicated in the literature review section.
The maximum studying time reported by NUCB students was three hours per week,
while one-third of the students reported that they did not study at all.
Three-quarters of the students reported that they had encountered courses that
they found interesting, such as marketing, management, and statistics.
However, the number of such courses was limited to a few out of the many they
took.
Some students claimed: “There is not much difference in content between high
schools and universities,” “The university courses are boring,” “I always sleep in
class,” “High school teachers teach better than university professors,” “One class
has too many students,” and so forth. One student also lamented that she had
never received any feedback from professors regarding her assignments.
Discussion
This study indicates that, in terms of PROG scores, NUCB students improved
their generic skills little during the first two years of higher education.
Literacy did not improve at any significance level. While NUCB students may
have made a statistically significant improvement in competency at p < 0.10,
though not at p < 0.05, they might have acquired competency-related skills
through interactions with others off campus (e.g., part-time jobs).
Apart from descriptive interpretations of PROG results, there may also exist two
possible reasons/interpretations of the result: 1) PROG may not measure generic
skills that NUCB intends to develop (or NUCB may not develop generic skills
that PROG measures) and 2) (some) students make different levels of effort
during the first and the second PROG tests.
In any event, however, the sampled NUCB students study little, as this study
shows.
This paper does not suggest that NUCB should completely redesign its
curriculum to focus on improving generic skills that PROG measures;
nevertheless, the university needs to better demonstrate how generic skills,
which it intends to develop as expressed in its mission statement and LGs, are
nurtured. Making and using rubrics to measure these skills is one option.
NUCB’s AOL committee has indeed developed rubrics to measure the skills tied
to its learning goals, though it struggles with implementation because rubrics
can require a substantial amount of time and effort to make and use.
Administering a
test that measures NUCB generic skills (e.g., global perspective) is another
option. Riasec, the company that administers PROG, has responded flexibly to
measuring generic skills that the standard PROG does not cover. In order to
measure students’ global perspectives, for example, the company has engaged 735
professionals (aged between 25 and 49), who worked for global companies and
managed foreign subordinates, to take the competency portion of the exam.
These data can be used for comparison to determine how similar students are to
global professionals: in other words, how global their perspectives are
(Kawaijuku & Riasec, 2014).
Although this paper discusses the case of a particular Japanese university,
the issue of developing and measuring generic skills is applicable to other
contexts. Any university has to identify the generic skills needed by its
students and its institutional role in promoting those skills. It then needs
to develop or implement an effective set of tools to measure students’ generic
skills.
Conclusion
This study quantitatively examined how much students improved generic skills
at a Japanese university during the first two years of higher education and
qualitatively explored possible reasons for such results. The findings show that
Japanese university students did not improve their generic skills by much
during the first two years of higher education, arguably because students study
little during this time. This study suggests that offering more courses with
active learning approaches, which intrinsically motivate students to devote
more time to learning, can contribute to improving generic skills.
Acknowledgement
We are indebted to Mr. Hirotaka Nishio at NUCB for his support in this
research. We are also grateful to Mr. Ezra Anton Greene at the University of
British Columbia for editing this paper.
References
Arum, R. and Roksa, J. (2011). Academically adrift: Limited learning on college campuses.
Chicago: University of Chicago Press.
Barkley, E. F. (2010). Student engagement techniques: A handbook for college faculty. San
Francisco: Jossey-Bass.
Clouder, L., Broughan, C., Jewell, S. & Steventon, G. (2012). Improving Student
Engagement and Development through Assessment. London and New York:
Routledge.
Douglas, J., Thomson, G. & Zhao, C. (2012). The Learning Outcomes Race: the Value of
Self-Reported Gains in Large Research Universities. Higher Education 64(3): 317-
335.
Hardison, C. M. and Vilamovska, A. (2009). The Collegiate Learning Assessment: Setting
standards for performance at a college or university. Santa Monica: Rand Education.
Ito, H. (2014a). Assessing an assessment tool of higher education: The case of PROG in
Japan. International Journal of Evaluation and Research in Education 3(1): 1-10.
Ito, H. (2014b). What’s wrong with learning for the Exam? An assessment-based
approach for student engagement. Journal of Education and Learning 3(2): 145-152.
Ito, H. (2014c). Shaping the First-Year Experience: Assessment of the Vision Planning
Seminar at the Nagoya University of Commerce and Business. International
Journal of Higher Education 3(3): 1-9.
Kawaijuku and Riasec. (2013). PROG: Progress Report on Generic Skills. Tokyo: Kawaijuku
and Riasec.
Kuh, G. D. (2012). What we are learning about student engagement. Change 35: 28.
Kushimoto, T. (2012). Outcome Assessment and its role in self-reviews of undergraduate
education: in the context of Japanese Higher Education Reforms Since 1990s.
Higher Education 59: 589-598.
McIntyre, J., Todd, N., Huijser, H., and Tehan, G. (2012). Building pathways to academic
success. A Practical Report. The International Journal of the First Year in Higher
Education 3(1): 109-118.
McVeigh, B. J. (2002). Japanese higher education as myth. New York and London: An East
Gate Book.
National Survey of Student Engagement. (2011). How much time college students spend
studying varies by major and corresponds to faculty expectations, survey finds.
Available at:
http://nsse.iub.edu/NSSE_2011_Results/pdf/NSSE_2011_Press_Release.pdf.
Peters, R. A. (2011). Enhancing academic achievement by identifying and minimizing the
impediments to active learning. Public Administration Quarterly 35(4): 466-493.
Pike, G. R. (2011). Assessing the Generic Outcomes of College: Selections from
assessment measures. San Francisco: Jossey-Bass.
Riasec. (2012). Measuring Generic Skills. Tokyo: Riasec.
Sambell, K., McDowell, L. and Montgomery, C. (2012). Assessment for Learning in Higher
Education. London and New York: Routledge.
Shavelson, R. (2009). Measuring College Learning Responsibility: Accountability in a New Era.
Palo Alto: Stanford University Press.
Tsuji, T. (2013). Naze nihon no daigakusei wa sekai de ichiban benkyo shinai no ka [Why
Japanese university students study the least in the world]. Tokyo: Toyo keizai
shinbunsya.