
MODELS OF SKILLED READING

Context-driven "top-down" models


Stimulus-driven "bottom-up" models
Whole word models
Component letter models
Syllabic units
Multilevel and parallel coding systems models
Parallel coding systems models
Activation or logogen models
Interactive-activation and connectionist models
Lexical search models
Summary

Models of Skilled Reading


Research with skilled readers has been primarily concerned with the question
of how one recognizes and identifies printed words.
This focus is well motivated, since reading is critically dependent on facility in
word recognition.
Each of the models reviewed below is defined in terms of the assumptions it
incorporates in addressing one or more of the following related questions:
(1) whether words are recognized by accessing whole word representations in
the mental lexicon, or sub-word representations such as features, letters or
syllables;
(2) whether words are ultimately identified through direct access or through
phonologically mediated access to word meanings;
(3) whether word recognition entails serial or parallel (simultaneous) processing
of letters;
(4) whether recognition is primarily a context-driven "top down," stimulus-driven
"bottom-up" or interactive process;
(5) whether recognition entails the use of a single mechanism for accessing the
lexicon, or multiple mechanisms for doing so; and
(6) whether word recognition takes place through activation or through "search"
processes.

Context-driven "top-down" models


Context-driven models of word recognition assume that higher level
contextual information can directly affect the way lower level stimulus information
is perceived and interpreted.
The prototypical context-driven model is that proposed by Smith (1971). In this
model, the representations that uniquely define printed words in memory are the
abstract features (lines, curves, angles, etc.) that define the letters in those
words, and there are presumed to be separate, "functionally equivalent feature
lists" for letters appearing in different cases and fonts.
When a printed word is encountered, features are extracted in all letter
positions simultaneously, and word recognition occurs when a critical set of
features is successfully matched with its counterpart in memory.
Feature extraction is a selective process insofar as it is determined both by
implicit knowledge of orthographic structure and by the ability to use linguistic
context to predict words in the text.
E.g. The cat chased the _______________.
Within this view, word recognition is largely a matter of confirming one's
predictions and neither letter recognition nor phonological recoding (accessing
word names) is entailed.

Smith's claim that context drives word recognition has not fared well; the
evidence suggests instead that skilled word recognition is a modular process. A
modular process is relatively autonomous, that is, it is "not controlled by higher
level processes or supplemented by information from knowledge structures not
contained in the module itself."
Perfetti (1985, 1992) and Stanovich (1980, 1990) have, in fact, shown that most
contextual effects are comprehension effects which take place after words have
already been identified.
Smith's other claim, that familiarity with orthographic redundancy aids word
recognition, has been validated many times over.
Moreover, it has been repeatedly demonstrated that skilled readers have little
difficulty recognizing printed words appearing in different or even mixed cases
and fonts, which is consistent with Smith's assumption that word recognition
entails the use of functionally equivalent feature lists.

Stimulus-driven "bottom-up" models


A basic assumption of stimulus-driven models is that word recognition
depends primarily on information contained in the stimulus, the actual printed
word, and not on the linguistic context.
A second assumption is that recognition takes place in discrete, hierarchically
ordered and noninteractive stages.
All bottom-up models postulate a sensory stage in which visual features are
extracted, followed by:
a. a recognition stage, in which a representation of the word is accessed
b. an interpretative stage, in which the word's meaning is accessed
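
To make the stage assumption concrete, here is a purely schematic Python
sketch (the function bodies are placeholders, not any published model): each
stage consumes only the output of the stage below it, with no feedback from
context.

# Schematic of the discrete, noninteractive stages assumed by
# bottom-up models; the bodies are placeholders for illustration.
def sensory_stage(printed_word):
    # Feature extraction: visual features only, no contextual input.
    return [("features-of", letter) for letter in printed_word]

def recognition_stage(features):
    # Access a stored representation of the word from its features.
    return {"word-form": features}

def interpretative_stage(word_form):
    # Access the word's meaning from the recognized form.
    return {"meaning-of": str(word_form)}

# Strictly feed-forward: each stage sees only the prior stage's output.
meaning = interpretative_stage(recognition_stage(sensory_stage("cat")))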

Whole word models


Whole word models of word recognition commonly assume that printed words
are represented mentally as psychologically indivisible wholes and that a word is
recognized by its pattern of features.
Johnson's (1977) pattern unit model is prototypical. This model postulates
that stimulus features are extracted from all letter positions in parallel, but the
letters themselves are not perceived because their collective features are
assigned a unitary encoding during the sensory stage of processing.
The word superiority effect (Reicher, 1969), a highly reliable phenomenon
whereby a letter embedded in a briefly presented word (work) can be verified
more accurately than when it is embedded in a nonword, has been cited in
support of such models.
E.g. "Did the word contain a k or d?"
However, the effect is influenced by backward masking conditions (e.g., when
the stimulus word is obliterated by noise patterns shortly after viewing),
suggesting that it may not be a true perceptual effect, but, rather, a short-term
memory effect facilitated by one's ability to remember the name of the stimulus
word.

Component letter models


Component letter models postulate that printed words are represented as
uniquely ordered arrays of graphemes, and that all of a word's letters must be
recognized if that word is to be recognized.
For example, Gough (1972) suggests that feature extraction and letter
recognition take place through serial processing (letter by letter), and word
identification takes place through phonemic recoding of each letter in turn, using
grapheme-phoneme correspondence (GPC) rules to access word names and
meanings.
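
As a rough illustration of what serial GPC decoding might look like, consider the
following Python sketch; the rule table is a toy assumption for the example (real
English GPC rules are context-sensitive and far more numerous), not Gough's
actual proposal.

# A toy illustration of serial, letter-by-letter GPC decoding;
# the rule table below is an illustrative assumption.
GPC_RULES = {"c": "k", "a": "a", "t": "t"}

def decode_serially(letter_string):
    """Recode each letter into a phoneme in turn, left to right."""
    phonemes = []
    for letter in letter_string:       # serial: one letter per step
        phonemes.append(GPC_RULES[letter])
    return "/" + "-".join(phonemes) + "/"

print(decode_serially("cat"))  # /k-a-t/ -> used to access name and meaning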
In contrast, Massaro (1975) suggests that feature extraction and letter
recognition take place through parallel processing, and that one uses implicit
knowledge of orthographic redundancy to facilitate perception of letters not fully
processed (e.g., medial letters masked by adjacent letters).
This primary recognition process becomes input to a secondary recognition
process which accesses word meanings directly, rather than through
phonological mediation.
However, the serial processing component of Gough's (1972) model is called
into question by studies demonstrating that words are not readily recognized
when their letters are presented one at a time, in sequence.
These findings are more in keeping with Massaro's suggestion that a word's
letters are processed in parallel, which is a widely accepted view.
Support for the direct access view comes from studies demonstrating that
word meanings are accessed even when phonological coding is impaired.
Also supportive of a direct access model is the fact that one can readily
distinguish the meanings of homophones such as new and knew from the
differences in their spelling.
However, other studies have shown that homophony slows both semantic
and lexical judgments.
E.g.
"Are pear and pair both fruits?"
"Is brane a word?"
The common observation that skilled readers can decode pseudowords
better than less skilled readers has also been cited as evidence for Gough's
(1972) suggestion that GPC (grapheme-phoneme correspondence) rules are
used to identify real words. Yet real words tend to be identified more rapidly than
pseudowords, which is more in keeping with a direct access view.
The fact that GPC rules fail with a large number of words militates against
any strong version of the phonological recoding theory, and some have
taken such inconsistency as evidence for the co-existence of both direct and
phonologically mediated access mechanisms.
E.g.
have, put, bough, cough

Syllabic units


In this model the units of recognition are not single letters but, rather,
phonologically defined syllables called Vocalic Center Groups (VCGs) (Spoehr
and Smith, 1973).
Following Hansen and Rogers (1965), the VCG is defined as a vowel or
vowel digraph flanked by consonants or consonant clusters.
E.g.
ou and ea
The word identification process begins with feature extraction and letter
recognition (in parallel), followed by rule based parsing that tentatively isolates
VCG units, which are phonologically recoded to recover the word's name.
If this process fails, then the word is parsed again, according to the Vocalic
Center Groups model, until it is identified.
E.g.
FATHER ---> FAT/HER ---> FA/THER
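
A minimal sketch of this reparse cycle, under the simplifying assumption that
candidate parses are checked against a small, hypothetical pronunciation
lookup:

# Sketch of the VCG reparse cycle: failed parses trigger a new parse,
# as in FATHER -> FAT/HER -> FA/THER. The lookup table is hypothetical.
KNOWN_PRONUNCIATIONS = {("fa", "ther"): "/fah-ther/"}

def two_unit_parses(word):
    """Yield candidate parses, moving the boundary one letter at a time."""
    for cut in range(len(word) - 1, 0, -1):
        yield word[:cut], word[cut:]

def identify(word):
    for parse in two_unit_parses(word):
        pronunciation = KNOWN_PRONUNCIATIONS.get(parse)
        if pronunciation:                  # phonological recoding succeeded
            return parse, pronunciation
    return None                            # reparse options exhausted

print(identify("father"))  # (('fa', 'ther'), '/fah-ther/')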

Multilevel and parallel coding systems models


These models differ from those already discussed because each postulates
more than one unit of recognition rather than a single unit, and each incorporates
alternative vehicles for word identification.

LaBerge and Samuels' multilevel coding model


In the LaBerge and Samuels (1974) model, there are hierarchically ordered
"codes" (representations) for features, letters, spelling patterns, and words.

Perceptual learning - lower order codes are integrated and unitized (perceived as
a unit) to form a new set of codes at each successive level. It entails focal
attention, which is conceived as a limited cognitive resource that cannot be
allocated to two processes simultaneously.
As one's ability to recognize unitized codes at a given level becomes
automatized, attention is redeployed to the task of unitizing codes at the next
level.

LaBerge and Samuels (1974)


- demonstrated that shifts of attention from familiar stimuli (e.g., b-d) to
unfamiliar stimuli (novel letter-like characters) exacted a cost in speed of
processing on initial learning trials, but not on later trials.
- claim that lower level codes are gradually integrated into higher level codes
is given some support in studies showing that disparities in speed of processing
long versus short words are substantial in children in the lower grades, but not in
children in the upper grades.

Parallel coding systems models


The prototypical parallel systems model is Coltheart's (1978) dual route
model.

Coltheart (1978)
- postulates two such systems, one that accesses lexical representations
directly, using word-specific associations, and another that accesses them
indirectly, through the use of grapheme-phoneme correspondence (GPC) rules.
Both of these systems are activated by a letter string, and depending on the
nature of that string, one or the other system may bring about identification.
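
One way to picture the dual route idea is as a "race" between two functions
over the same letter string; the lexicon and rule table below are toy
assumptions for illustration, not Coltheart's actual implementation.

# A toy sketch of the dual route "race" (after Coltheart, 1978).
LEXICON = {"have": "/hav/", "cat": "/kat/"}     # word-specific associations
GPC = {"h": "h", "a": "a", "t": "t", "v": "v", "e": "", "c": "k"}

def direct_route(letter_string):
    """Whole-word lookup: succeeds only for familiar words."""
    return LEXICON.get(letter_string)

def indirect_route(letter_string):
    """Assembled phonology via grapheme-phoneme correspondence rules."""
    return "/" + "".join(GPC.get(ch, "?") for ch in letter_string) + "/"

def identify(letter_string):
    # Both routes are activated; for familiar words the direct route
    # "wins the race", while pseudowords can only be assembled.
    return direct_route(letter_string) or indirect_route(letter_string)

print(identify("have"))  # /hav/ from the direct route
print(identify("tave"))  # /tav/ assembled by GPC rules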

Two different types of evidence have been cited in support of the dual route
model.
One type comes from studies of brain damaged patients, who have suffered
what appears to be selective loss of either the direct visual or phonologically
mediated access routes.
Deep dyslexics
- one group of patients appears able to read most words, but has very
limited ability to read pseudowords, makes many semantic confusion errors
(e.g., reading cat as kitten and orchestra as symphony), and has difficulty
reading functors (e.g., if, and, but).
- this symptom pattern suggests that the direct route is intact in these
patients, while the phonologically mediated route is impaired.
Surface dyslexics
- by contrast, often decode pseudowords and regularly spelled words (e.g.,
cat, fat) more readily than they can decode irregularly spelled words (e.g., epoch,
ache), and often regularize words with exceptional pronunciations (e.g., have,
put) suggesting that they have lost word specific connections that allow them to
use the direct access route.
Because the strengths and weaknesses observed in these acquired dyslexic
patients are relative rather than absolute (performance is never totally
deficient nor totally adequate), critics have suggested that their performance
patterns on decoding and naming tasks may simply reflect different types and
levels of impairment of a single, lexically based access mechanism, rather than
selective impairment of one of two separate mechanisms.
A second type of evidence for the dual route model comes from naming tasks
with skilled readers. It has been consistently found that skilled readers are able to
name printed words faster than they decode (sound out) pseudowords and that
they name high frequency words faster than low frequency words.
This suggests that familiar words "use" the direct route, while less familiar
words and pseudowords use the assembled route. It has also been consistently
found that regular words are, in general, named faster than exception words.
But, whereas high frequency regular and exception words are both named
with equal speed, low frequency, regular words are named faster than low
frequency, exception words.
The direct route always "wins the race" with the indirect route in the case of
high frequency words. Because low frequency words are less familiar, the
indirect route is better able to compete with the direct route, and this conflict
affects the exception words more than the regular words.
Glushko (1979)
- found that a pseudoword such as tave, which is spelled similarly to the
exception word have, takes longer to name than a pseudoword such as feal,
whose real word "neighbors" all have regular pronunciations (e.g., real, heal).
- also found that regular words whose neighbors include words with
inconsistent pronunciations take longer to name than regular words whose
neighbors are all consistent.
Seidenberg et al. (1984)
- found that low frequency, regular/consistent words were named faster than
low frequency, regular/inconsistent words.
Glushko (1979) and others
- have interpreted these results as evidence for a single mechanism for
lexical access that identifies a word by "synthesizing patterns of activation" from
other words with similar spellings (e.g., identifies fat by analogy with cat and fan).

The implication is that this mechanism mimics the apparent rule-based
properties said to be characteristic of the GPC mechanism.
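
A toy sketch of this analogy mechanism, assuming hand-built neighborhood and
pronunciation tables: a novel string activates the pronunciation patterns of its
spelling neighbors, and inconsistent neighborhoods yield conflicting patterns
(and, empirically, slower naming).

# Pronunciation by analogy with spelling neighbors; tables are
# illustrative assumptions, not Glushko's materials.
NEIGHBORS = {"ave": ["have", "gave", "save"], "eal": ["real", "heal"]}
PRONUNCIATIONS = {"have": "hav", "gave": "gayv", "save": "sayv",
                  "real": "reel", "heal": "heel"}

def name_by_analogy(letter_string):
    """Return the set of rime pronunciations activated by neighbors;
    more than one pattern means conflict (and slower naming)."""
    rime = letter_string[1:]                   # crude onset/rime split
    neighbors = NEIGHBORS.get(rime, [])
    patterns = {PRONUNCIATIONS[w][1:] for w in neighbors}
    return patterns, len(patterns) > 1

print(name_by_analogy("tave"))  # ({'av', 'ayv'}, True): inconsistent, slow
print(name_by_analogy("feal"))  # ({'eel'}, False): consistent, fast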

Activation or logogen models


Most of the models discussed thus far imply that a printed word acts as a
stimulus that energizes or activates a code or set of codes representing that word
in memory. However, none of these models has been explicit in describing the
activation processes. The class of models we now illustrate is more explicit in
doing so.

Morton's logogen model


Activation models are basically fashioned after a model of word
recognition initially proposed by Morton (1969). Morton coined the term logogen
(from the Greek word logos) to characterize an inferred neural entity that
represents a printed word.
Logogens function as threshold-type detection devices that incrementally
register information derived from both sensory input and linguistic context. And,
like neurons, they fire when a criterion threshold has been reached.
When this occurs, the word is recognized and its meaning is accessed. In
their resting state, logogens have threshold values for each represented word
that are determined by their frequency of occurrence in print.
Logogens representing high frequency words have lower thresholds of
activation than do logogens representing lower frequency words.
Linguistic context can also serve to lower or raise threshold values,
depending on whether or not it contains information that is related to a word's
meaning(s). Word recognition is believed to result from the interaction of stimulus
and contextual information, and a word's meaning becomes available when a
logogen is activated.
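
A minimal sketch of a logogen as a threshold detector; the numeric values are
illustrative assumptions, chosen only to show that high print frequency lowers
the resting threshold and that accumulated evidence triggers firing.

# A sketch of a logogen as a threshold detector (after Morton, 1969).
class Logogen:
    def __init__(self, word, print_frequency):
        self.word = word
        # Higher frequency of occurrence in print -> lower resting threshold.
        self.resting_threshold = 10.0 / print_frequency
        self.activation = 0.0

    def register(self, sensory_evidence=0.0, contextual_evidence=0.0):
        """Incrementally register sensory and contextual information;
        like a neuron, fire once the criterion threshold is reached."""
        self.activation += sensory_evidence + contextual_evidence
        return self.activation >= self.resting_threshold  # fired?

the = Logogen("the", print_frequency=100.0)    # threshold 0.1
epoch = Logogen("epoch", print_frequency=1.0)  # threshold 10.0
print(the.register(sensory_evidence=0.5))      # True: fires immediately
print(epoch.register(sensory_evidence=0.5))    # False: needs more evidence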
Support for Morton's (1969) model came largely from studies demonstrating
that highly constraining contexts can facilitate word recognition, while
incongruent contexts tend to impede recognition.
However, except for some late work which suggests that logogens may be
sensitive to morphemes rather than words (e.g., walk/ing), Morton, himself,
provided limited documentation of his model.

Interactive-activation and connectionist models


Morton's model was pivotal in framing later models that provided greater
specification of how logogens might work - the interactive-activation model of
McClelland and Rumelhart (1981).
In the latter model, word recognition is believed to result from the interaction
of competing excitatory and inhibitory activations from interconnected logogen
type detectors (nodes) corresponding with features, letters and words. Each
node is believed to have a "resting level" threshold that depends on frequency of
activation over time.
The activation level of a given node is said to be determined in part by
excitation from the stimulus word and in part by excitatory and inhibitory
activations from neighboring words. The greater the overlap in spelling, the
greater the activations stimulated by given neighbors.
Thus, in mathematical terms, word recognition is determined by the algebraic
summation of excitatory and inhibitory inputs which yield a net activation value
that is a simple weighted average of these inputs.
Because the excitatory activations prompted by a printed word stimulus are
typically greater than the inhibitory activations prompted by neighboring words,
the logogen for that word fires and the word is recognized.
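
In code, the net-input computation described above might look like the
following sketch; the weights and input values are made-up illustrative numbers,
not parameters from McClelland and Rumelhart (1981).

# Net input to a single word node: weighted excitatory inputs count
# positively, weighted inhibitory inputs negatively.
def net_activation(excitatory_inputs, inhibitory_inputs):
    """Algebraic summation of (weight, value) pairs."""
    net = sum(weight * value for weight, value in excitatory_inputs)
    net -= sum(weight * value for weight, value in inhibitory_inputs)
    return net

# Excitation from letter nodes supporting "cat"; inhibition from
# orthographic neighbors ("cot", "car") competing for recognition.
# Greater spelling overlap -> stronger neighbor activation.
net = net_activation(
    excitatory_inputs=[(1.0, 0.9), (1.0, 0.8), (1.0, 0.95)],
    inhibitory_inputs=[(0.5, 0.2), (0.5, 0.1)],
)
print(net)  # 2.5 -- if this exceeds the node's threshold, it fires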
The cardinal evidence for this connectionist model (the term reflecting its
structural features) comes primarily from impressive computer simulations of
some of the major word recognition phenomena documented in empirical
research, the most notable being the word superiority effect, in which letters
embedded in words are verified more accurately than those embedded in
nonwords.
Moreover, the model rather handily accommodates Glushko's (1979) finding
that pseudowords spelled like exception words (e.g., tave/have) take longer to
name than pseudowords whose neighbors all have regular pronunciations
(e.g., feal/real).

Seidenberg and McClelland (1989)


- proposed a radically revised version of the interactive-activation model that
dispenses with word level nodes altogether, substituting a parallel distributed
processing (PDP) format, whereby nodes for three-letter spelling patterns are
directly connected to nodes for the phoneme values of those spelling patterns
(e.g., the word heat is represented as _he, hea, eat, at_, the underscores
conforming with word boundaries; see the sketch after this list).
- they were able to simulate an even broader range of word recognition
phenomena: for example, the interaction between word frequency and regularity
in word spellings, documented by Seidenberg, et.al. (1984); the pseudoword and
regular/exception word pronunciation effects documented by Glushko (1979);
and so forth.
- conclude that words are identified using a single nonlexical coding
mechanism that makes use of distributed representations which encode the
spelling-sound redundancies inherent in the orthography.
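
The triple-based input coding can be illustrated in a few lines of Python; this is
an approximation of the scheme described above, not Seidenberg and
McClelland's actual implementation.

# Break a word into overlapping three-character spelling patterns,
# with an underscore marking the word boundary.
def letter_triples(word):
    padded = f"_{word}_"                 # mark word boundaries
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

print(letter_triples("heat"))  # ['_he', 'hea', 'eat', 'at_']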

McClelland and Rumelhart (1981) and Seidenberg and McClelland (1989)


- both models' simulations included only four-letter words; thus their results
may not readily be generalized to more complex words.
Moreover, in a recent critique of the PDP model, Besner, Twilley, McCann,
and Seergobin (1990) compared the Seidenberg and McClelland (1989)
simulations with performance of skilled readers on tasks that were not reported
by these investigators and found that the two data sets did not correspond as
closely as they should have.
For example, the model did not name pseudowords as well as skilled
readers do. Based on such findings, among others, Besner et al. (1990)
concluded that the standard dual route model still provides a better account of
most word recognition phenomena than does the PDP model.

Lexical search models


Search models
- were initially proposed by Forster (1976) and later extended by Taft (1979).
- postulate that word identification is the culmination of an active and
ordered search for lexical addresses where information about given words is
stored.
In the current version of the model, a word's lexical address is assumed to be
located in an orthographic access file, in one of several bins containing
representations of words with similar orthographic descriptions.
The bin containing a word's lexical address is located with the aid of an access
code, which is defined as the first syllable in a stem morpheme.

Basic Orthographic Syllabic Structure (BOSS)


- is isolated through left-to-right iterative parsing that maintains orthotactic
and morphological integrity.
In prefixed words, parsing commences after the prefix has been stripped
away.
Thus, the BOSS of prefix is fix, while the BOSS of vodka is vod (vo/dka and
vodk/a are both illegal). Similarly, the BOSS of sunset is sun because the
parsing process gives morphemes preeminent status. When the bin containing
the lexical address of the stimulus word is located, all codes in the bin are
matched with that word in order of their frequency.
When the appropriate code is found, the word's lexical address is accessed
and a full orthographic description of the stem morpheme becomes available. A
post access "check" is made to compare this representation with the stimulus
word, and, if they match, the word is identified and its meaning is accessed.
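
The search procedure described above can be sketched as follows, under toy
assumptions: a hand-built access file keyed by BOSS, a crude prefix-stripping
stand-in for BOSS extraction, and a frequency-ordered scan of the bin followed
by a post-access check.

# Lexical search with a BOSS-keyed access file; all tables are
# illustrative assumptions.
ACCESS_FILE = {
    # BOSS -> entries (word, frequency, lexical address), high freq first
    "sun": [("sunset", 120, "addr:sunset"), ("sunlit", 40, "addr:sunlit")],
    "vod": [("vodka", 55, "addr:vodka")],
    "fix": [("prefix", 90, "addr:prefix")],
}
PREFIXES = ("pre", "un", "re")  # illustrative prefix list

def boss(word):
    """Stand-in for BOSS extraction: strip a known prefix, then take the
    first letters of the stem. The real parse respects orthotactic and
    morphological constraints (vod/ka, sun/set) omitted here."""
    for prefix in PREFIXES:
        if word.startswith(prefix):
            word = word[len(prefix):]
            break
    return word[:3]

def search(stimulus):
    for word, _freq, address in ACCESS_FILE.get(boss(stimulus), []):
        if word == stimulus:            # post-access check against stimulus
            return address              # lexical address found
    return None

print(search("prefix"))  # addr:prefix -- bin located via BOSS "fix"
print(search("sunset"))  # addr:sunset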
Support for the search model came initially from Forster and Chambers'
(1973) observation that real words can be identified more rapidly than
pseudowords, and that high frequency words can be identified more rapidly than
low frequency words.
However, most of the research conducted to evaluate the search model has
been devoted to validation of the access code, and such research provides only
indirect support for the model. And while there is considerable support for the
idea that skilled readers are sensitive to both orthotactically and morphologically
defined units in complex words (orthotactic parsing - vod/ka; morphological
parsing - sun/set; walk/ing); such findings do not constitute prima facie evidence
for either access codes or search processes. In sum, the evidence for the BOSS
unit is equivocal.
Summary


In this chapter we have reviewed the history of writing, and we have
discussed the kinds of complex abilities that humans have that make it possible
for them to read written language.
We have traced some of the accounts that describe children's acquisition of
reading, and we have summarized some of the most prominent models of the
reading process itself.
Our earliest evidence of writing dates from as recently as the fourth
millennium B.C. Literacy has, however, changed the human species dramatically
by providing us with an objective means of considering our own language and
thought and of connecting ourselves to other worlds of knowledge, across time
and space.
Spoken language is the fountainhead, as the great linguist Edward Sapir
noted, from which all other forms of language flow. Our writing is based upon an
alphabetic principle: each written letter is meant to stand for a sound in the
spoken word.
Study of the reading process involves, among other things, answering a
number of questions about how printed words are recognized.
For example, we know that words are not recognized on the basis of shape
cues or supraletter patterns, that word recognition depends initially on letter
recognition, and that a word's letters are recognized in parallel rather than
through left to right serial processing.
While the skilled reader may use both stimulus and contextual information in
identifying printed words, the weight given these two types of information
appears to be asymmetric, insofar as skilled word identification has been shown
to be an automatic, rapidly executed and modular process, regardless of whether
words are identified in isolation or in sentence contexts.
There is, in addition, good reason to believe that skilled readers have
internalized unitized representations of redundant spelling patterns that they can
use to decode unfamiliar letter strings; they seem also to be sensitive to both
orthotactic and morphological boundaries in identifying familiar strings.
