Journal of Applied Linguistics
Article
doi: 10.1558/japl.2005.2.1.75
1 Introduction
Applied linguists involved in Language for Academic Purposes, Language for
Specific Purposes, and Language for Research Purposes frequently consult
content-area specialists as part of preliminary needs analyses for advanced-level
discipline-specific courses with a language focus (e.g. Dudley-Evans & St. John,
1998; Flowerdew & Peacock, 2001a; Swales, 2004). Content-area specialists
often assist applied linguists in:
(a) defining language-learning, content-learning, and strategy-learning
objectives;
(b) identifying core disciplinary texts;
(c) specifying discipline-specific tasks;
(d) creating instructional materials;
(e) designing assessment instruments for such courses.
Although some applied linguists develop long-term collaborative working
relationships with content-area specialists, it is more often the case that applied
linguists gather information from specialists (along with other stakeholders)
early in course-development stages and then work on their own to develop,
implement, and assess their courses (cf. Flowerdew, 1993; Johns, 1997). Even
when repeated needs analyses are conducted as part of an ongoing process of
course development (as recommended by Tudor, 1996, cited in Flowerdew &
Peacock, 2001b), the actual exchange of information in these consultations is
often unidirectional; that is, applied linguists solicit information from content-area specialists, but the reverse rarely occurs.
In this article, we report on a sustained interdisciplinary effort between applied
linguists and chemistry faculty in which there has been a commitment to an
interchange of ideas, joint problematization, and collaboration in action (see
Candlin & Sarangi, 2004), with both parties providing ongoing contributions
to the development and assessment of an advanced-level Write Like a Chemist
course [2] offered by university chemistry faculty for chemistry majors. Key to the
project has been the cooperation and collaboration between applied linguists
and chemistry faculty (see Dudley-Evans & St. John, 1998). Our approach to
disciplinary writing, in general (e.g. genre analyses, corpus analyses, course
design, textbook development, instruction), and writing assessment, more
specifically, could not have been accomplished without the equally important
but distinct contributions of chemistry faculty and applied linguists.
We focus here on one component of a much larger project, specifically our
interdisciplinary efforts to form valid writing assessment instruments. The
assessment component of the project represents a particularly public and procedural set of collaborative activities and provides a strong example of patterns of
[…]
efficacy of Write Like a Chemist materials. As such, there was a pressing need
for a rating system that could be:
(a) introduced to Write Like a Chemist pilot faculty during summer
training sessions for later use;
(b) used by project team members to evaluate pre- and post-assessment
written tasks submitted by students at pilot institutions;
(c) consulted by external evaluators (all chemistry faculty) at the end
of the first and second year of piloting to assess pilot students' post-course writing abilities.
The assessment-design goals guiding collaboration between the chemists and
applied linguists on the project team were the following:
(1) Development of a valid set of analytic rating scales to assist instructors
in providing feedback on major course writing assignments, including a
paper modeled after a journal article.
(2) Development of a valid set of holistic rating scales for evaluating student
writing outcomes across many instructional sites, socializing chemists to
read student writing in similar ways, and assessing the overall project.
(3) Establishment of an agreed-upon set of student writing samples to serve
as scale anchors.
While working toward these outcomes, the importance of unambiguous rating
descriptors, which could be easily understood and used by chemistry faculty
with little experience assessing student writing for pedagogical purposes,
became clear. With carefully worded descriptors, we could ensure efficient
discussions of student writing between applied linguists and chemistry faculty
at different stages of the project. Furthermore, with the guidance of carefully
articulated rating scales, we could assist chemistry faculty in reading student
papers in a more time-efficient manner, an issue which emerged fairly early
in collaborative efforts. Our target outcomes and associated aims facilitated
dialogue among project participants and guided the chemists, in particular,
in discussing, describing, and evaluating student writing with a consistent
terminology and shared purpose.
2.3 Discipline-specific writing assessment issues
Concerns about discipline-specific writing assessment have emerged in both
the applied linguistics and chemical-education literatures. Maintaining our
commitment to joint problematization signified that neither of these research
traditions should be valued over the other. As such, although the applied
[…]
(1) What assessment purposes are the rating scales meant to serve?
(2) How many levels of performance could reliably be distinguished?
(3) Should scores be weighted or not?
(4) How should the content and wording of scale descriptors be determined?
(For an overview of these and other key factors in scale design, see Bachman
& Palmer, 1996; Hudson, 2005; Lumley, 2002; McNamara, 1996; North &
Schneider, 1998; Weigle, 2002.) Despite the fact that these basic scale-design
issues are well documented, a specific challenge in this project was to identify
assessment criteria that were meaningful not only to the applied linguists, but
also (and more importantly) to the chemistry faculty who would ultimately be
responsible for assessing student performance.
3.3 Development of early criteria and scales
In the first two years of the project (before the nationwide pilot), the leading
chemist in the project developed a set of analytic rating scales for each major
writing assignment with the help of the applied linguistics graduate student
who co-taught the course. The scales consisted of a series of simple statements
reflecting analytic criteria (e.g. Uses correct grammar, tense, voice) and a maximum possible score for each criterion (see Appendix B).
These early rating scales indicated the perceived importance of general academic and chemistry-specific writing conventions, as well as accurate science.
For example, in the rating scale for the journal article assignment, roughly half
of the grading criteria could be classified as general academic writing conventions (i.e. criteria that are common in university composition instruction):
(1) Shows coherent organization within and between sections, using appropriate transitional devices.
(2) Is properly formatted throughout, including in-text citations and references.
Other grading criteria reflected chemistry-specific writing conventions
(including conciseness, a feature that has become a formal convention of
chemistry writing) and chemists' understanding of standard academic writing
conventions:
(3) Includes properly formatted tables, schemes, figures, and NMR data, and
refers to them appropriately.
(4) Uses language and level of detail appropriate for an expert audience, with
special attention to wordiness.
Finally, the fact that chemists also view accurate subject-area knowledge as
essential in effective writing is reflected in an additional criterion, as well as in
the non-trivial number of points assigned to it:
(5) Provides clear and correct scientific information throughout the paper.
Although somewhat rudimentary, these early rating scales proved invaluable as
a starting point for more clearly articulated rating scales that would also hold
meaning for project-external audiences.
3.4 Development of more refined scales
The demands of a grant-supported project, as well as the fact that chemists
would be the evaluators of students' writing abilities, required the development
of more refined assessment scales, with distinct and meaningful score points
that chemistry faculty would find acceptable, understandable, and easy to
use. Building upon earlier efforts (described above), the applied linguists first
developed a general analytic rating scale that combined performance criteria
typical of general writing assessment and criteria thought to be specifically
applicable to chemistry writing (see Figure 1).
The immediate goal was to produce a working model for assessing the writing of chemistry students more generally, and then to systematize the rating
procedures for each major writing assignment in the course more specifically
(e.g. writing the methods section of a data-driven paper modeled after a
journal article). This early scale underwent numerous revisions, during which
chemistry colleagues played a central role in three areas of the ongoing scale-design process:
(1) Specifying the relative importance of discipline-specific subject matter
and writing conventions.
[…]
(1) The use of the criterion Purpose (i.e. the author's goals for writing and
success in achieving those goals) was dropped from the Organization &
Purpose criterion because chemistry colleagues believed that this notion
could not be rated independently, but rather was reflected in a student's
performance across all other criteria.
(2) Fluency was removed from the Fluency & Mechanics criterion and
instead incorporated into a new and entirely separate Conciseness and
Fluency criterion, further exemplifying the value chemists place on the
economical use of words.
(3) The criterion Grammar was paired with Mechanics to capture surface-level syntactic and punctuation errors.
It is instructive to note here that grammar became a site of extended discussions. Although the Grammar & Mechanics criterion might be presumed to
include all linguistic structures, this criterion was influenced by how one might
view certain syntactic errors of a fairly general nature. For example, after some
months' debate, it was decided that students' use of passive voice would be
assessed as part of Scientific Conventions rather than Grammar & Mechanics
because what raters would assess would not be the correct formation of the
passive (not normally a problem for students at this level of instruction) but
rather its conventional use in the appropriate sections of the targeted genres
(e.g. describing laboratory procedures in the Methods section of a research
article; see also Swales, 1990, 2004).
Concurrent with discussions of overarching scoring criteria, the project
team worked together to draft and revise scale descriptors for each score
point in the various rating scales being developed. [6] These discussions and the
continual fine-tuning of the rating scales (by chemistry faculty and applied
linguists, in turn) provided the project team with the insights needed to
finalize a more detailed set of analytic rating scales. A key element of the
applied linguists' work here was to ensure that the chemists developed a
sense of ownership of the descriptors and the resulting set of analytic rating
scales. Over time, the various rating scales made use of a fairly common set
of phrases (including the resurrection of Purpose) to describe scoring details:
audience and purpose, organization, writing conventions, grammar and
mechanics, and scientific content.
4 Socialization process
The need for a valid holistic assessment scale, along with a set of anchor
benchmark papers at each scale point, led to an extended socialization
process with the chemists. This section outlines steps taken by the applied
linguists to socialize the chemists in their rating decisions and to refine the
holistic rating scale (while incorporating the descriptive language used by the
chemists). We provide a detailed accounting of the socialization process in
part as a response to the paucity of such discussions in the applied linguistics
literature. It is our assumption that applied linguists who engage in other
interdisciplinary efforts will be able to adapt many of the ideas and procedures presented here. Our detailed discussion may also serve as valuable
[…]
(2) Sort the high band of papers to see if distinctions could be made, and
described, between papers rated at 5 and 6.
(3) Identify distinctions within the lowest band of papers, basically separating papers rated as 1s and 2s.
Prior to the first meeting, 11 papers were selected from the mid-range group of
previously sorted journal article papers to establish an initial sample of potential
3s and 4s. The chemists were asked to read 6 of the 11 papers before coming to
the first meeting. They were instructed to read each paper, decide if it should
be rated as a 3 or a 4 (out of a possible, but as yet undefined 6 points), and jot
down comments on aspects of the paper that helped with scoring decisions, all
within a 10-minute per paper limit (if possible). They were asked to refrain from
consulting rating scales that had been created earlier in the project and not to
confer with one another to ensure that diverse views would be heard.
The first meeting began by recording each rater's scores for three of the six
papers read before the session. Each paper was discussed and raters were asked
why they had assigned the scores that they had. Comments were noted on the
blackboard for everyone to see and recorded for future reference. This sequence
was repeated for the remaining three papers read before the meeting. The five
remaining (unread) papers were then distributed and the read-score-record-discuss cycle was repeated; the raters were given 30 minutes to read and rate all
five papers. The imposition of such a tight timeline (essentially six minutes per
paper) stemmed from the fact that the chemists had reported spending
around 20 minutes to rate each 5–6 page paper assigned for the first meeting,
even though they had been instructed to read the papers in the most holistic
fashion (to decide whether the papers were strong or weak samples within the
group). [9] The imposed time limit helped the chemistry raters learn, over time,
to read student papers more holistically. As a result of the first meeting, we
began to identify not only papers that could be used as benchmarks, but also
the characteristics of papers in this middle score range that could be used to
refine scale descriptors.
In the second training session, we repeated the read-score-record-discuss
sequence to identify benchmarks and characteristic features for the upper half
of the score bands (4, 5, and 6 scores). The chemistry raters spent a great deal
of time reaching agreement on the qualities of papers falling at the highest end
of the scale. Although the raters identified several papers that were potential
benchmark 4s during the second rating session, just one newly introduced
paper was deemed worthy of a 5. (Another paper that had been assigned a 5
in the first session was reintroduced in the second session to see if its earlier
score would stand. It did.) None of the papers introduced in this session were
thought to be representative of the top band of the 6-point scale. By the end
[…]
In fact, it was sustained input from the chemists that allowed us to disambiguate the different levels of student performance. The applied linguists' role
during the socialization process intentionally remained that of a guide rather
than an authority on language testing. The applied linguists interceded only
when necessary to keep discussion moving and to facilitate more time-efficient
scoring lest the chemists lose motivation to continue participating in this
aspect of the project. For similar reasons, the applied linguists refrained from
offering any theoretical insights into the nature or quality of student writing
during these sessions. Instead, the focus was on the chemists ways of making
sense of and talking about students' writing performance.
[…]
themselves before sending them on to the rest of the group for a final review. A
key modification to note in the revised analytic rating scale (see Appendix E)
involves the chemists' decision to quantify the number of errors permissible at
each score point, a decision that applied linguists may at first glance find too
restrictive, but which may be a logical outcome of the chemists' disciplinary
culture, with its exacting standards of precision and accuracy. [11] That the chemists felt comfortable crafting an analytic scale in this way highlights a key goal
of the assessment process undertaken. The chemists on the team had become
skilled not only in evaluating the writing of the chemistry students, but also
at explaining and refining specific criteria to evaluate student writing in their
content-area domain.
6 Conclusion
Through our involvement in designing assessment instruments for the Write
Like a Chemist project, we have learned a number of valuable lessons about
interdisciplinary collaboration that have implications for applied linguists
working with content experts from other disciplines. As mentioned in our
introduction, our experience has confirmed the importance of a long-term
association, rather than a short-term consultation, between content-area specialists and applied linguists. Sufficient time must be allotted for planning,
discussions, negotiations of meaning, and the rethinking of original plans. Time
should also be set aside to allow participants to formulate and voice opinions,
consider the views of others, and then reformulate opinions in light of others'
views. The process of developing resources and instruments for other-group
needs is by necessity a dynamic one, and therefore one deserving of a careful,
measured approach. Time and (seemingly infinite) patience are needed for
the drafting and redrafting of relevant documents, with additional time set
aside to debate single words and phrases, if necessary. Equally important is
the establishment of agreed-upon procedures and timelines, accompanied by
the flexibility and willingness of all participants to make changes in response
to unanticipated occurrences.
Our project had one central lesson for the applied linguists in particular. What
applied linguists assume is characteristic of effective language use might not be
viewed as equally relevant by content-area specialists. In our case, the applied
linguists had to remain open to negotiating a shared understanding of writing
assessment criteria, rather than presuming that, as applied linguists, they had a
monopoly on expertise in this area. Our collaboration led to an explicit shared
vocabulary that all participants felt comfortable using in project discussions.
The need for shared terminology pertained not only to the technical aspects of
assessment, but also to descriptions of student performance. By approaching
[…]
Appendix A
Boesten, W. H. J., Seerden, J.-P. G., de Lange, B., Dielemans, H. J. A., Elsenberg, H. L. M., Kaptein, B., Moody, H. M., Kellogg, R. M. and Broxterman, Q. B. (2001) Asymmetric Strecker synthesis of α-amino acids via a crystallization-induced asymmetric transformation using (R)-phenylglycine amide as chiral auxiliary. Organic Letters 3: 1121–4.
Dellinger, B., Pryor, W. A., Cueto, R., Squadrito, G. L., Hegde, V. and Deutsch, W. A. (2001) Role of free radicals in the toxicity of airborne fine particulate matter. Chemical Research in Toxicology 14: 1371–7.
Demko, Z. P. and Sharpless, K. B. (2001) Preparation of 5-substituted 1H-tetrazoles from nitriles in water. The Journal of Organic Chemistry 66: 7945–50.
Jozefaciuk, G., Muranyi, A. and Fenyvesi, E. (2003) Effect of randomly methylated cyclodextrin on physical properties of soils. Environmental Science & Technology 37: 3012–7.
Llompart, M., Pazos, M., Landín, P. and Cela, R. (2001) Determination of polychlorinated biphenyls in milk samples by saponification-solid-phase microextraction. Analytical Chemistry 73: 5858–65.
Plaper, A., Jenko-Brinovec, S., Premzl, A., Kos, J. and Raspor, P. (2002) Genotoxicity of trivalent chromium in bacterial cells. Possible effects on DNA topology. Chemical Research in Toxicology 15: 943–9.
Vesely, P., Lusk, L., Basarova, G., Seabrooks, J. and Ryder, D. (2003) Analysis of aldehydes in beer using solid-phase microextraction with on-fiber derivatization and gas chromatography/mass spectrometry. Journal of Agricultural and Food Chemistry 51: 6941–4.
Appendix B
Grading Criteria for Journal Article Paper

[Table: each grading criterion is listed with a maximum possible score (15, 10, 10, 15, 15, 10, 10, 10 points; 100 total) and a blank "Your Score" column; the criterion statements themselves are not recoverable here.]

Creating and Validating Assessment Instruments

Appendix C

[Analytic rating scale used before the socialization sessions, with criteria including Organization of Text, Conciseness & Fluency, Science Content, Audience, Grammar & Mechanics, and Scientific Conventions. Recoverable band descriptors grade, for example, grammatical and mechanical errors from "several errors; overall impression adversely affected" through "frequent errors; reader distracted from content" and "constant errors; content difficult to understand" to "many errors; scientific value of paper would be disregarded"; adherence to move structure from "followed in all key sections" through "followed in most key sections" and "not followed in several key sections" to "not followed"; and progression of ideas from "logical progression of ideas through majority of text" through "unclear in a few instances" to "unclear throughout."]
Appendix D
Holistic Rating Scale Developed after Socialization Sessions
6 (strongest)
Presentation of science is correct, clear, logical, and sophisticated for course level.
Move structures are followed correctly in all sections.
Writing flows; wording is concise and appropriate.
Graphics are correctly formatted and seamlessly integrated with the text.
Few, if any, grammatical or mechanical errors are present.
Few, if any, errors are made in the use of scientific conventions or terminology.

5
Presentation of science is mostly correct, clear, and logical, but lacks some sophistication.
Move structures are followed in all sections, though some moves are underdeveloped or problematic.
Writing is generally concise and appropriate, though some areas are wordy or awkward.
Graphics are formatted correctly, but may not be well integrated with the text.
A handful of grammatical or mechanical errors are present.
A handful of errors are made in the use of scientific conventions and/or terminology.

4
Presentation of science is generally correct, but often lacks clarity and/or sophistication.
Move structures are generally followed, though some moves are missing or misplaced.
Writing is often wordy and awkward.
Graphics contain some formatting errors and are not well integrated with the text.
Grammatical and mechanical errors are noticeable and, at times, distracting.
Errors in scientific conventions and/or terminology are noticeable and, at times, distracting.

3
Presentation of science evinces some lack of understanding on the part of the author.
Move structures are generally followed, but several are missing, misplaced, and/or underdeveloped.
Writing is consistently wordy and awkward.
Graphics contain some formatting errors and are poorly integrated with the text.
Grammatical and mechanical errors regularly cause the reader distraction.
Errors in scientific conventions and/or terminology are frequent and regularly cause the reader distraction.

2
Presentation of science evinces a general lack of understanding on the part of the author.
An attempt has been made to follow move structures, but most are missing, misplaced, and/or underdeveloped.
Writing is uniformly wordy and awkward.
Graphics are poorly designed and formatted, and disconnected from the text.
Grammatical and mechanical errors cause the reader serious distraction.
Errors in scientific conventions and/or terminology are very frequent and cause the reader serious distraction.

1 (weakest)
The author does not understand the science and therefore cannot present it in any meaningful way.
The author has not attempted to follow move structures.
Problems with wording and terminology are the rule, rather than the exception.
Graphics, if used, are poorly designed and formatted, and disconnected from the text.
Grammatical and mechanical errors make reading the text difficult.
Errors in scientific conventions and/or terminology predominate and make understanding the paper difficult.
Appendix E
Revised Analytic Rating Scale

[Six-point analytic scale, reconstructed from the original table; 6 = strongest, 1 = weakest. Descriptors quantify the errors permissible at each score point.]

Audience & Purpose
5: Wordiness and/or errors in level of detail, style, or formality occur in a handful (2–3) of instances.
4: Wordiness and/or errors in level of detail, style, or formality are noticeable (4–5) and, at times, distracting.
3: Wordiness and/or errors in level of detail, style, or formality are frequent (6–7) and regularly distracting.
2: Wordiness and/or inappropriate level of detail, style, or formality are consistent (8–9) and seriously distracting.
1: Wordiness and/or inappropriate level of detail, style, or formality are common (≥10) and cause the reader to dismiss the work.

Organization
One (sub)move is missing or underdeveloped. (Sub)moves may be out of sequence; extra moves may be present. [Other Organization descriptors not recoverable.]

Writing Conventions
5: A handful (3–5) of errors are made in the use of writing conventions.
4: Errors in writing conventions are noticeable (6–8) and, at times, distracting.
3: Errors in writing conventions are frequent (9–10) and regularly distracting.
2: Errors in writing conventions are consistent (11–12) and make the writing appear unprofessional.
1: Errors in writing conventions are common (>12). The writing is unprofessional.

Grammar & Mechanics
5: A handful (3–5) of grammatical or mechanical errors are present.
4: Grammatical and mechanical errors are noticeable (6–8) and, at times, distracting.
3: Grammatical and mechanical errors are frequent (9–10) and regularly distracting.
2: Grammatical and mechanical errors are consistent (11–12) and seriously distracting.
1: Grammatical and mechanical errors are common (>12) and limit the reader's ability to understand the material.

Scientific Content
6: Presentation of science is complete, correct, clear, and logical. Level of science conveys an understanding that is sophisticated for course level.
5: Presentation of science is generally complete and correct, but one element is missing, problematic, or weakly developed.
4: Presentation of science is generally correct, but two elements are missing, problematic, or weakly developed.
3: Presentation of science contains several errors. Three elements are missing, problematic, or weakly developed.
2: Presentation of science is generally incorrect. Four elements are missing, problematic, or weakly developed.
1: Presentation of science conveys little scientific understanding. Five elements are missing, problematic, or weakly developed.
Notes
1 The authors gratefully acknowledge James K. Jones (applied linguist) and Molly
Costanza-Robinson (chemist) for their participation in the process of creating and
validating instruments discussed in this article. We thank four anonymous reviewers and
the editors of JAL for their thoughtful comments on an earlier version of this paper.
2 Our work has been supported by the US National Science Foundation, with grants
(DUE 0087570 and DUE 0230913) received by authors Marin S. Robinson, Chemistry,
and Fredricka L. Stoller, Applied Linguistics, Northern Arizona University. Note that
any opinions, findings, conclusions, and recommendations expressed in this article are
those of the authors and do not necessarily reflect the views of the National Science
Foundation.
3 For an overview of Write Like a Chemist materials, see www4.nau.edu/chemwrite
4 Write Like a Chemist materials were piloted in eight US colleges and universities in
2004–2005 and are being piloted in another eight US institutions in 2005–2006.
5 The target language use domain concept is attributable to assessment research by
Bachman and Palmer (1996).
6 The wordings of scale descriptors were a source of continual discussion and revision.
In one 15-week period alone, approximately 45 versions of the different analytic scales
were generated and discussed, with changes to the wording of multiple descriptors being
made in each revision.
7 Initial classifications were based partially on the chemist's and applied linguist's recollections of each student's overall performance in the course, in large part because they
remembered students' paper topics.
8 The applied linguistics graduate student who team-taught the course in the first two years
of the project also participated in these sessions. Because we were primarily concerned
with the chemists' interpretations of student written performance, we report only their
views here.
9 This revelation translated into a concern about the practicality of the analytic rating
scales for different writing tasks that had been created in the months preceding the
socialization sessions.
10 Although interrater reliability for rank-order data is typically estimated using Spearman's
rho, use of Fisher's Z-transformation of Pearson product-moment correlations is
necessary when averaging agreement scores among three or more raters.
11 At the time of writing this article, the chemists are reconsidering whether this approach
is practical or valid for writing assignments of varying lengths and complexity.
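The averaging procedure described in note 10 can be sketched as follows. This is a minimal illustration, not code from the project: the function names are ours, the rater scores are invented, and the implementation simply computes pairwise Pearson correlations, applies Fisher's Z-transform (arctanh) to each, averages the transformed values, and back-transforms (tanh) the mean.

```python
# Sketch of averaging interrater agreement via Fisher's Z-transformation.
# All names and data below are hypothetical, for illustration only.
import math
from itertools import combinations

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def average_interrater_r(ratings):
    """Average pairwise correlations among three or more raters.

    ratings: dict mapping rater name -> list of scores
    (same papers, same order for every rater).
    """
    zs = []
    for r1, r2 in combinations(ratings, 2):
        r = pearson_r(ratings[r1], ratings[r2])
        zs.append(math.atanh(r))   # Fisher Z-transform each pairwise r
    mean_z = sum(zs) / len(zs)
    return math.tanh(mean_z)       # back-transform to the r scale

# Hypothetical holistic scores from three raters on five papers:
ratings = {
    'rater_a': [6, 4, 3, 5, 2],
    'rater_b': [5, 4, 2, 5, 3],
    'rater_c': [6, 3, 3, 4, 2],
}
print(round(average_interrater_r(ratings), 3))
```

Averaging in the transformed space avoids the bias that arises from averaging correlation coefficients directly, since r is bounded and not normally distributed.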
References
Bachman, L. and Palmer, A. (1996) Language Testing in Practice. New York: Oxford
University Press.
Beall, H. and Trimbur, J. (2001) A Short Guide to Writing about Chemistry. (Second
edition) New York: Longman.
Bressette, A. R. and Breton, G. W. (2001) Using writing to enhance the undergraduate
research experience. Journal of Chemical Education 78: 162–7.
Brown, A. (1995) The effect of rater variables in the development of an occupation-specific language performance test. Language Testing 12: 1–15.
Candlin, C. N. and Candlin, S. (2003) Health care communication: a problematic site for
applied linguistics research. In M. McGroarty (ed.) Annual Review of Applied Linguistics
134–54. New York: Cambridge University Press.
Candlin, C. N. and Maley, Y. (1997) Intertextuality and interdiscursivity in the discourse
of alternative dispute resolution. In B.-L. Gunnarson, P. Linell and B. Nordberg (eds)
The Construction of Professional Discourse 201–22. London: Longman.
Candlin, C. N. and Sarangi, S. (2004) Making applied linguistics matter. Journal of Applied
Linguistics 1(1): 1–8.
Coppola, B. P. and Daniels, D. S. (1996) The role of written and verbal expression in
improving communication skills for students in an undergraduate chemistry program.
Language and Learning Across the Disciplines 1: 67–86.
Dodd, J. S. (ed.) (1997) The ACS Style Guide. (Second edition) Washington, DC: American
Chemical Society.
Douglas, D. (2000) Assessing Language for Specific Purposes. New York: Cambridge
University Press.
Douglas, D. (2001) Language for specific purposes assessment criteria: where do they
come from? Language Testing 18: 171–85.
Dudley-Evans, T. and St. John, M. J. (1998) Developments in English for Specific Purposes:
a multi-disciplinary approach. New York: Cambridge University Press.
Ebel, H. F., Bliefert, C. and Russey, W. E. (2001) The Art of Scientific Writing: from student
reports to professional publications in chemistry and related fields. (Second edition) New
York: John Wiley.
Ferris, D. R. and Hedgcock, J. S. (2005) Teaching ESL Composition: purpose, process, and
practice. (Second edition) Mahwah, NJ: Lawrence Erlbaum.
Flowerdew, J. (1993) Content-based language instruction in a tertiary setting. English for
Specific Purposes 12: 121–38.
Flowerdew, J. and Peacock, M. (eds) (2001a) Research Perspectives on English for Academic
Purposes. New York: Cambridge University Press.
Flowerdew, J. and Peacock, M. (2001b) The EAP curriculum: issues, methods, and
challenges. In J. Flowerdew and M. Peacock (eds) Research Perspectives on English for
Academic Purposes 177–94. New York: Cambridge University Press.
Gordon, N. R., Newton, T. A., Rhodes, G., Ricci, J. S., Stebbins, R. G. and Tracy, H. J.
(2001) Writing and computing across the USM chemistry curriculum. Journal of
Chemical Education 78: 53–5.
Hamp-Lyons, L. (2003) Writing teachers as assessors of writing. In B. Kroll (ed.) Second
Language Writing: research insights for the classroom. New York: Cambridge University
Press.
Henning, G. and Davidson, F. (1987) Scalar analysis of composition ratings. In K. M.
Bailey, T. L. Dale, and R. T. Clifford (eds) Language Testing Research: selected papers
from the 1986 colloquium. Monterey, CA: Defense Language Institute.
Hudson, T. (2005) Trends in assessment scales and criterion-referenced language assessment. In M. McGroarty (ed.) Annual Review of Applied Linguistics 205–27. New York:
Cambridge University Press.
Hyland, K. (2002) Teaching and Researching Writing. London: Longman.
Hyland, K. (2003) Second Language Writing. New York: Cambridge University Press.
Hyland, K. (2004) Disciplinary Discourses: social interactions in academic writing. Ann
Arbor: University of Michigan Press.
Jacoby, S. and McNamara, T. (1999) Locating competence. English for Specific Purposes 18:
213–41.
Johns, A. M. (1997) Text, Role, and Context: developing academic literacies. New York:
Cambridge University Press.
Klein, B. and Aller, B. M. (1998) Writing across the curriculum in college chemistry: a
practical bibliography. Language and Learning Across the Disciplines 2: 25–35.
Kovac, J. and Sherwood, D. W. (2001) Writing Across the Chemistry Curriculum: an
instructor's guide. New York: Prentice Hall College Division.
Kuldell, N. (2003) Read like a scientist to write like a scientist: using authentic literature in
the classroom. Journal of College Science Teaching XXXIII(2): 32–5.
Lumley, T. (1998) Perceptions of language-trained raters and occupational experts in
a test of occupational English language proficiency. English for Specific Purposes 17:
347–67.
Lumley, T. (2002) Assessment criteria in a large-scale writing test: what do they really
mean to the raters? Language Testing 19: 246–76.
McNamara, T. (1996) Measuring Second Language Performance. New York: Longman.
Nature (2001) Learning to speak and write. Nature 411: 1.
North, B. and Schneider, G. (1998) Scaling descriptors for language proficiency scales.
Language Testing 15: 217–63.
Oliver-Hoyo, M. T. (2003) Designing a written assignment to promote the use of critical
thinking skills in an introductory chemistry course. Journal of Chemical Education 80:
899–903.
Paulson, D. R. (2001) Writing for chemists: satisfying the CSU upper-division writing
requirement. Journal of Chemical Education 78: 1047–9.
Sarangi, S. and Candlin, C. N. (2003) Trading between reflexivity and relevance: new challenges for applied linguistics. Applied Linguistics 24: 271–85.
Sarangi, S. and Roberts, C. (eds) (1999) Talk, Work, and Institutional Order: discourse in
medical, mediation, and management settings. Berlin: Mouton de Gruyter.
Shibley, I. A., Milakofsky, L. M. and Nicotera, C. L. (2001) Incorporating a substantial
writing assignment into organic chemistry: library research, peer review, and assessment. Journal of Chemical Education 78: 50–3.
Smith, S. (2003a) The role of technical expertise in engineering and writing teachers'
evaluations of students' writing. Written Communication 20: 37–80.
Smith, S. (2003b) What is good technical communication? A comparison of the standards
of writing and engineering instructors. Technical Communication Quarterly 12: 7–24.
Stoller, F. L., Jones, J. K., Costanza-Robinson, M. S. and Robinson, M. S. (2005)
Demystifying disciplinary writing: a case study in the writing of chemistry. Across the
Disciplines: interdisciplinary perspectives on language, learning, and academic writing.
Retrieved 30 May 2005. http://wac.colostate.edu/atd/lds/stoller.cfm
Swales, J. M. (1990) Genre Analysis: English in academic and research settings. Cambridge:
Cambridge University Press.
Swales, J. M. (2004) Research Genres: exploration and applications. Cambridge: Cambridge
University Press.
Wardle, E. A. (2004) Can cross-disciplinary links help us teach academic discourse in
FYC? Across the Disciplines: interdisciplinary perspectives on language, learning, and
academic writing, 1. Retrieved 10 January 2004. http://wac.colostate.edu/atd/articles/
wardle2004/
Weigle, S. C. (2002) Assessing Writing. New York: Cambridge University Press.
Whelan, R. J. and Zare, R. N. (2003) Teaching effective communication in a writing-intensive analytical chemistry course. Journal of Chemical Education 80: 904–06.