
Conceptual semantics and cognitive linguistics

RAY JACKENDOFF

Abstract
Conceptual semantics shares many of its goals with cognitive linguistics, but is also concerned with more "Chomskyan" goals of discovering the extent of a specialized Universal Grammar in the human mind. It is committed to the existence of an autonomous syntax, though one that interacts richly with meaning in a way that is potentially congenial to the findings of cognitive linguistics. A number of the criticisms of conceptual semantics leveled by Goldberg, Deane, and Taylor (this issue) can be defused by closer attention on the one hand to the text of my writings and on the other hand to the linguistic data.

I am delighted to have this opportunity to relate my work to cognitive linguistics, and I am grateful to the editors of this journal for making it possible. As the other contributors to this issue have observed, there are significant convergences between the approaches, worth exploring. In fact, the position stated in the second paragraph of my Semantics and Cognition, "to study semantics of natural language is to study cognitive psychology", sounds like a trumpet call right out of cognitive linguistics. However, as is usual in a scientific enterprise, the similarities rapidly come to be treated as ground and the divergences emerge as figural.
Having observed differences, the question is how to proceed. I don't want to descend to the level of "my theory can handle X and yours can't". Rather, I'd like to talk about why I've made certain general choices in my approach, to what extent these choices are close variants of approaches in cognitive linguistics, and to what extent they constitute substantive differences.
I don't see the issue as "Is Jackendoff a cognitive linguist? And if not, why can't he see the light?" Cognitive linguists should be among the first to recognize that stereotyping is not always a useful strategy. I have been
criticized for being too old-fashionedly formal (Deane [this issue], Wilks
1992) and for not being formal enough (Verkuyl 1985, ter Meulen 1994);
for being not concerned enough with semantics (Taylor [this issue]) and
for being too concerned with semantics (Marantz 1992); for believing in
autonomous syntax (Goldberg [this issue]) and for not believing enough
in autonomous syntax (Emonds 1991); for believing in innateness
(E. Bates, personal communication, several times) and for not believing
in strong enough innateness (Fodor 1994). Life is short, you can't do
everything, read everything, make everyone happy with your work. As
I. F. Stone once put it, "all you have to contribute is the purification of
your own vision, and the addition of that to the sum total of other
visions."

1. Ideology
The most important thing I learned from Chomsky2 is that it is of interest
to approach language in terms of the following questions:
(1) What does a person have to know (alternatively, what information does a person need in long-term memory) in order to use language in the way he or she does, in particular being able to create and understand an unlimited number of utterances?
(2) What preexisting abilities does it take in order to acquire this knowledge (or information in long-term memory)?
(3) What parts of these preexisting abilities are not simply a consequence of having a large general-purpose brain? That is, what parts of the knowledge required for the acquisition of language are specific to language?
The answer to question (1) is provided by a theory of grammar (including lexicon), say a generative grammar or a cognitive grammar; it doesn't matter which for the moment. Everyone in linguistics (though regrettably not in other fields) agrees that the grammar is quite a complicated accumulation of information.
Likewise, everyone agrees that there must be some preexisting ability in the mind of the child that supports the acquisition of language. This may be as simple as Skinnerian association, some more elaborate connectionist net, or perhaps Piagetian mechanisms of assimilation and accommodation.
The main place where people disagree is in their answer to question (3).
There is a strong disposition to believe that there is nothing special about
language acquisition that cannot follow from more general processes of
learning. Chomsky, of course, has insisted for decades that there is a
highly articulated system of preexisting linguistic information, Universal Grammar, that is present in the brain by virtue of the human genome and the wonders of embryology, which makes it possible to acquire the grammar of a human language.
One need not agree with Chomsky's theory of grammar in order to believe in some form of Universal Grammar. For instance, it might be that Universal Grammar prescribes Langacker-style grammars as the proper form in which children encode their growing knowledge of English. That is, claims about the form of particular grammars and claims about the existence of a rich Universal Grammar are partly independent.
A variety of evidence persuades me that there is a Universal Grammar of nontrivial interest. For one thing, there is a great deal of neurological evidence that the brain consists of an assemblage of specialists whose locations and connections are fairly uniform across individuals, and that task-specific deficits result from damage to particular brain areas. Language seems no exception to this (even if, as Deane [this issue] points out, some of the evidence has perhaps been overinterpreted). In addition, one can run through all the standard arguments about poverty of the stimulus, creation of creoles, critical periods, and so forth.
To my mind, though, the most compelling evidence comes from a "transcendental" argument that I like to couch as the Paradox of Language Acquisition. Every normal child acquires the mental grammar of his or her own language within seven or eight years, with little direct instruction or consultation. Yet the community of linguists, working collectively, with oodles of cross-linguistic evidence, has not succeeded in writing the grammar of a single language, despite decades of research, backed by centuries of traditional grammar. Since linguists are presumably bringing all their highly developed general-purpose problem-solving ability to bear on the problem, such ability is evidently not enough. Children must know something unconsciously that we linguists can't know consciously, namely a search space of hypotheses. They don't have to decide between generative grammar and cognitive grammar and LFG and HPSG and tagmemics, let alone alternatives yet to be developed. They know which one is right, even if we don't. So—even if every bit of, say, Langacker's (1987, 1991) grammar is correct, the fact that it took till 1991 to find it and till 20?? to convince everyone else demonstrates conclusively that there must be a rich Universal Grammar! (See Jackendoff 1993 for a much more extended version of this argument.)
Now this does not preclude the possibility that Universal Grammar is built in part as a specialization of preexisting brain mechanisms. That's characteristic of evolutionary engineering. For example, lots of mental
phenomena, such as the organization of the visual field, music, and motor control, involve hierarchical part-whole relations. So it's no surprise that language does too. This doesn't mean language is derived or "built metaphorically" from any of these other capacities. Rather, like fingers and toes, they may be distinct specializations of a common evolutionary ancestor.
So far, I follow Chomsky—though, I hope, with a less severe way of stating the position (for example, he has a somewhat nihilistic view on the evolution of language, as pointed out by Pinker and Bloom [1990] and Dennett [1995]). I also follow Chomsky in believing in the existence of a syntactic component within Universal Grammar and within particular grammars, an issue I take up in the next section. Finally, I take from Chomsky (at least the early Chomsky), as well as from my other most important teachers, Morris Halle and Edward Klima, an appreciation for rigorous analysis of sufficient data—including analysis of what does not happen—and for formalisms whose degrees of freedom are appropriate to the phenomena they are intended to account for. I take up this point in section 4.
I diverge from Chomsky's practice3 in not treating syntax as the principal generative component of grammar, from which phonology and meaning are interpreted. Rather, I treat phonology, syntax, and conceptual structure as parallel generative systems, each with its own properties, the three kept in registration with one another through sets of "correspondence rules" or "interface rules". This view begins to develop in my work in Jackendoff (1978, 1983), and comes into full clarity in Jackendoff (1987), where it is applied to the architecture of the mind as a whole and termed "representation-based modularity". (See the diagram in Deane's paper.)
I also diverge sharply from Chomsky's practice in that I act on the belief that a significant theory of meaning can be developed along the lines of inquiry in (1)-(3) above. The resulting theory of meaning is in many respects congenial to cognitive linguistics. Following the gestalt psychologists, especially Koffka (1935), Michotte (1954), and Michotte, Thines, and Crabbe (1964), as well as more modern perceptual psychology such as Hochberg (1978) and Neisser (1967), I maintain that a theory of meaning must pertain to the "behavioral reality" of the organism, not to physical reality—to the world as it appears to the organism, or more properly, the world as it is constructed by the organism's mind. This view appears incipiently in Jackendoff (1976) and comes to fruition in Jackendoff (1981a, 1983, 1991a); these last three contain detailed comparison to "objectivist" philosophy, parallel to the discussion in Lakoff and Johnson (1980) and Lakoff (1987).4
One of the earliest threads of this theory was the conceptual parallelism between spatial and nonspatial semantic fields; this parallelism drives concomitant grammatical and lexical parallelisms, though with less regularity. This idea, derived in large part from Gruber (1965), appeared in Jackendoff (1969, 1972), and in detail in Jackendoff (1976). The fields whose parallelism is discussed in Jackendoff (1983) are almost exactly those presented in Lakoff's (1991) treatment of the invariance hypothesis; there is also substantial overlap with Talmy's (1978) work. (Unlike Lakoff, however, I do not consider these parallelisms to be metaphors, for reasons discussed briefly in Jackendoff [1976] and in much more detail in Jackendoff and Aaron [1991] and Jackendoff [1992a].)
This part of my work has led to a focus on spatial language, a preoccupation found throughout cognitive linguistics. Unlike (most?) cognitive linguists, I have attempted to connect spatial language to psychological theories of spatial cognition and visual processing, the most notable of which for my purposes are Marr (1982) and Kosslyn (1980). The foundations of spatial semantics are laid out in Jackendoff (1976, 1983); the connection with vision comes to the fore in Jackendoff (1987, to appear b), and Landau and Jackendoff (1993).
Another aspect of my work has been the treatment of categorization, worked out in detail in Jackendoff (1983). Here I mounted a sustained attack on necessary and sufficient conditions; I proposed a variety of mechanisms to account for fuzziness, graded concepts, stereotypes, family-resemblance phenomena, and basic level categories, making use not just of Rosch's work but of a broad variety of research in linguistics, psychology, philosophy, and computer science. The most novel innovation, the notion of conditions interacting as preference rules, was based on the analysis of visual perception in Wertheimer (1923), and was imported into linguistic theory via the theory of musical cognition (Lerdahl and Jackendoff 1983). It appears to be a ubiquitous mental mechanism, and possibly one that is better modeled by connectionist-style computation than by Turing-style computation.
There has been a certain amount of productive contact between my work and cognitive linguistics. In particular, I have been deeply influenced by Talmy's (1978, 1980, 1983, 1985 [1988]) views on space, aspectuality, lexicalization patterns, and force dynamics. My work on the parallel between the description of pictures and of beliefs (Jackendoff 1975, 1983) was an important source for Fauconnier's (1984) theory of mental spaces—which in turn fed back into my (1992b) treatment of binding and that of Csuri (forthcoming). Another piece on binding, Jackendoff et al. (1993), was inspired by Fillmore's (1992) discussion of home. Goldberg's work (1992a, 1992b, 1993) on datives, resultatives, and the
way-construction has cross-fertilized mine (1990), with benefits on both sides. And recently, more general alliances have developed between my approach and construction grammar, as Goldberg (this issue) observes.

2. Syntax
So why do I persist in hanging on to an autonomous syntax? Much of my work, all the way back to my dissertation (1969), has been devoted to showing that various phenomena considered purely syntactic in Chomskian circles are really reflections of underlying semantic/conceptual regularities. Why don't I go all the way, as Goldberg (this issue) urges, and claim that all syntactic phenomena are semantically motivated?
First notice that there are a lot of possible intermediate positions between the Chomskian view that syntax is the central generative component of language and the (stereotypical) cognitive grammar view that there is no independent syntax at all. One is as extreme as the other. In trying to honestly find the appropriate middle ground, we may well find that there's hardly anything left of pure syntax, and that cognitive grammar is nearly right. Or we may not.
To gain some perspective, it's useful to consider phonology. Most of phonology—the feature content of segments, the constitution of syllables, the placement of stress, whether inflections are prefixal, suffixal, infixal, or reduplicative—has nothing whatsoever to do with meaning; these processes just chug along determining the form of phonological structures. Here and there, though, aspects of phonology are recruited for semantic purposes. For example, the use of stress for contrast and emphasis, overlaid on canonical stress patterns, is probably universal. Much less systematic is the semantic relevance of segmental content, as in onomatopoeia or very occasional correlations such as gl- for actions involving light sources (gleam, glitter, glow, glare, etc.) and -itter for rapid jerky actions (glitter again, jitter, skitter, flitter). I don't think anyone nowadays would claim that such unsystematic examples undermine the essential independence of phonology from semantics.
I am inclined to think the same is true for the relation of syntax and semantics, though less radically so. Here are some aspects of syntax that seem pretty much independent of semantics.
First, aspects of phrase structure: linear order is sometimes semantically perspicuous, for example in the use of first position for topic. But lots of the basic scut work in linear ordering doesn't seem to show any semantic significance, for instance the choice between verb-initial and verb-final VPs. Because this is uniform across the language, it cannot be used to make semantic distinctions. Yet speakers must know this fact in
order to satisfy the goal of being able to use the language properly. A language like German, with V2 main clauses and verb-final subordinate clauses (mostly), makes this aspect of syntax a bit more complex. Similarly, the relative ordering of determiner, adjective, and noun in the noun phrase carries no semantic weight in English, because it's not distinctive; in Romance, on the other hand, where both A-N and N-A are possible, some semantic distinction can be carried by the choice. Likewise, the difference between prepositions and postpositions from one language to another carries no semantic weight. A different sort of example concerns whether the language requires subjects in tensed clauses (the so-called null subject parameter): English does, Spanish doesn't.
For a more complex example, languages differ as to whether they allow ditransitive VPs: English does, French doesn't. The possibility of having such a position is a syntactic fact; the uses to which this position is put are semantic. For example, in English, ditransitive VPs are used most prominently for a variety of beneficiary + theme constructions (so-called to-datives and for-datives), as discussed by Goldberg (1992a) and many previous authors. But the same syntax is also recruited for theme + predicate NP, hence the ambiguity of Make me a milkshake! (Poof! You're a milkshake!). And ditransitive syntax also appears with a number of verbs whose semantics is at the moment obscure, for instance I envy you your security, they denied Bill his pay, the book cost Harry $5, and the lollipop lasted me two hours. At the same time, there are verbs that fit the semantics of the dative but cannot use it, for example tell/*explain Bill the answer (the latter seems to be a common error among non-native speakers).
In addition, other languages permit other (different?) ranges of semantic roles in the ditransitive pattern. Here are some examples from Icelandic (4) and Korean (5) (thanks to Joan Maling and Soowon Kim, respectively):
(4) a. Þeir leyndu Olaf sannleikanum.
they concealed [from] Olaf(ACC) the-truth(DAT)
'They concealed the truth from Olaf.'
b. Sjórinn svipti hana manni sinum.
the-sea deprived her(ACC) husband(DAT) her
'The sea deprived her of her husband.'
(Zaenen et al. 1985)
(5) a. John-un kkoch-ul hwahwan-ul yekk-ess-ta.
John-TOP flowers-ACC wreath-ACC tie-PAST-IND
'John tied the flowers into a wreath.'
b. John-un cangcak-ul motakpul-ul ciphi-es-ta.
John-TOP logs-ACC campfire-ACC burn-PAST-IND
'John burned the logs down into a campfire.'
This distribution suggests that there is a purely syntactic fact about whether a particular language has ditransitive VPs; in addition, if it has them, the language has to specify for what semantic purposes (what three-argument verbs and what adjunct constructions) they are recruited. It is not that French can't say any of these things; it just has to use different syntactic devices. That is, although the grammar of English no doubt contains the form-meaning correspondence for datives described by Goldberg, the availability of that form as a target for correspondences seems an independent, purely syntactic fact, also part of the grammar of English.
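As a purely illustrative aside, the division of labor just described can be pictured as a small per-language record. In the following Python sketch the field names and the listed facts are deliberate simplifications of the discussion above: whether the ditransitive frame is available is one, syntactic, fact; what it is recruited for is a separate, semantic, list.

```python
# Illustrative sketch only: a per-language record that separates the purely
# syntactic availability of the ditransitive VP frame from its semantic uses.
from dataclasses import dataclass, field

@dataclass
class DitransitiveFacts:
    available: bool                                     # purely syntactic fact
    semantic_uses: list = field(default_factory=list)   # what the frame is recruited for

english = DitransitiveFacts(
    available=True,
    semantic_uses=[
        "beneficiary + theme (to-/for-datives)",
        "theme + predicate NP (Make me a milkshake!)",
        "lexically listed verbs (envy, deny, cost, last)",
    ],
)
french = DitransitiveFacts(available=False)   # the same meanings require other syntax

for name, facts in (("English", english), ("French", french)):
    print(name, facts.available, facts.semantic_uses)
```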
If I were to consider myself a construction grammarian, then, I would
take it that among the most general constructions, the bread-and-butter
of the syntax, would be constructions that are purely syntactic, the
counterpart of phrase structure rules. This would not preclude specializing
these syntactic constructions for various semantic purposes. However, to
the extent that a construction is available in Universal Grammar to be
selected by languages of the world, but serves different semantic purposes
in those languages, I am inclined to call the construction itself purely
syntactic.
This is not to deny that there are other syntactic constructions that
come with intrinsic semantic baggage, for instance the more ... the more ...
and perhaps the English tag question—the sorts of phenomena for which
Fillmore and Kay (1993; Fillmore et al. 1988) have been most persuasive.
It is just that not all constructions are necessarily like that.
Another aspect of grammatical structure that seems to be part of syntax proper is case marking. In particular, nominative and accusative case (so-called structural cases) can be associated with just about any semantic role that can appear in subject and object position respectively. To hold that nominative case is meaningful but polysemous is more or less vacuous. A better characterization is that it is the "everything else" case assigned to subjects after both the meaningful and the arbitrary ("quirky") case assignments have been factored out. Other cases, especially dative, generally show a variety of semantic roles, parallel to the indirect object position in (say) Icelandic. (See Yip et al. 1987.) Perhaps in some languages such as Finnish, where there is a rich range of cases, some cases can be assigned single meanings. But that is not typical.
Consider further the situation in Finnish, in which nominative case appears on the first argument or adjunct that has not been lexically case
marked. In the normal situation it appears on the subject, as in (6a). But if the subject is "quirky case marked" (6b), or there is no subject (6c), nominative appears on the object, or, if there is no object (6d), on an adjunct which otherwise (6a-c) is accusative.
(6) a. Liisa muisti matkan vuoden.
Liisa(NOM) remembered the-trip(ACC) a-year(ACC)
b. Lapsen täytyy lukea kirja kolmannen kerran.
the-child(GEN) must read the-book(NOM) third time(ACC)
c. Muista matka vuoden!
remember the-trip(NOM) a-year(ACC)
d. Luetaan koko ilta.
read-PASS whole night(NOM)
'People were reading for one evening.'
(Maling 1993)
Again, there seems nothing semantic about the difference in case on the objects in (6a) and (6c), nor on the adjuncts in (6b) and (6d). But the grammar of Finnish, in order to account for speakers' knowledge, must somewhere state this fact—not as a form-meaning correspondence, but just as a weird fact about form.
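To bring out how purely formal this generalization is, the case-assignment pattern in (6) can be sketched in a few lines of Python, under deliberately simplified assumptions (a clause is just an ordered list of arguments and adjuncts, with any quirky lexical case pre-marked); this is an illustration of the descriptive rule, not a model of Finnish grammar.

```python
# Illustrative sketch only: Finnish structural case assignment as described in (6).
# A clause is a list of (role, lexical_case) pairs in surface order;
# lexical_case is a quirky/lexical case if pre-specified, else None.

def assign_structural_case(clause):
    """Nominative goes to the first element with no lexical case;
    remaining caseless elements get accusative."""
    nominative_given = False
    result = []
    for role, lexical_case in clause:
        if lexical_case is not None:
            case = lexical_case          # quirky case wins (e.g. GEN subject in (6b))
        elif not nominative_given:
            case = "NOM"
            nominative_given = True
        else:
            case = "ACC"
        result.append((role, case))
    return result

# (6a): ordinary subject -> subject NOM, object and adjunct ACC
print(assign_structural_case([("subject", None), ("object", None), ("adjunct", None)]))
# (6b): quirky GEN subject -> the object gets NOM, the adjunct ACC
print(assign_structural_case([("subject", "GEN"), ("object", None), ("adjunct", None)]))
# (6d): no subject, no object -> the adjunct itself gets NOM
print(assign_structural_case([("adjunct", None)]))
```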
What about grammatical gender? Goldberg cites Dixon's (1982) famous case of women, fire, and dangerous things forming a natural form-class. Does that mean that all grammatical gender can be so treated? In the familiar European languages, grammatical gender classes are a hodgepodge of semantic classes, phonological classes, and brute force memorization. But from the syntactic point of view we want to be able to say that, no matter for what crazy reason a noun happens to be feminine gender, it triggers the very same agreement phenomena in its modifiers and predicates. And the syntax of the language, not the semantics, has to say over what constituents agreement is realized. Do prenominal adjectives show gender agreement? Do predicate adjectives? Do determiners? Do verbs? That is, like indirect objects, gender and gender agreement in a particular language are syntactic constancies partially (and occasionally largely) correlated with some semantic or phonological distinction.
Other odds and ends: does a language permit preposition stranding under wh-movement and topicalization (English does, French doesn't)? If there is a verb inversion construction for questions and so forth, does it invert any verb (as in German), or just auxiliaries (as in English)?
Goldberg also raises the count-mass distinction. This is actually a somewhat different case, since the distinction between count and mass is
indubitably conceptual, and it correlates semantically with aspectual distinctions in the verbal system (Verkuyl 1972, 1989; Talmy 1978; Hinrichs 1985; and many others, including Jackendoff [to appear a]). The issue here is whether this conceptual distinction is driven perceptually, and the evidence is that it often is—when dealing with physical objects and substances. But—threats are count, advice is mass; utterances are count, information is mass; speech, sound, and noise are ambidextrous. This suggests that, once we get outside the domain of physical objects, the choice of conceptualizing something as count or mass is somewhat more arbitrary.
I have left for the end of this section the most fundamental distinction in syntax, that among parts of speech. To be sure, there are at least three strong implications between semantic categories and syntactic categories:
(7) a. A concept denoting an object or a substance is always expressed by a noun.
b. A verb always expresses a state or event concept.
c. An adjective always expresses a property concept.
However, not all nouns name objects or substances. Some name events (earthquake, concert); some name properties (redness, size, a bummer); some name regions of space (place, region, trajectory, neighborhood) or time (week, period, duration). Some prepositional phrases express spatial regions, and some temporal regions; some idiomatic PPs express properties (in the pink, out of luck); some prepositions express abstract relations (for no reason), and some are just grammatical glue (believe in cognitive grammar, afraid of change).
To my way of thinking, a statement as general as Langacker's (1991: 15), "a noun profiles (i.e. designates) a region in some domain, where a region is defined abstractly as a set of interconnected entities", does not convey any useful distinction. What is the notion of "region" that distinguishes the noun region from the PP in the park but assimilates it to dog and concert?
Moreover, Langacker denies that any words can be just grammatical glue, insisting for instance (1987: section 6.2.2) that the of between a noun and its complement is meaningful—it denotes an intrinsic relationship between the two entities. Notice, however, that of is necessary to express a relation between destruction and the city in (8a), but that the same relation is expressed by "direct object of" in (8b), and also by the genitive in (8c).
(8) a. the destruction of the city
b. destroy the city
c. the city's destruction

How do you know which way to express the relationship? What's the difference? In (8a) and (8c) the head is a noun, in (8b) a verb—and the generalization is that anytime you have a noun or adjective head, whatever the semantic relation to a post-head complement, you need a prepositional phrase rather than a direct object. When the modifier comes before the noun, it has to be genitive. So of in (8a) is fulfilling an "everything else" role similar to that of accusative case in VPs—not a natural semantic class, but a natural syntactic one. Put differently, of in such cases is performing the same role in syntax that the meaningless theme vowel does in, say, Latin verb inflection: it is there to fulfill the form.
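A toy rendering of this "everything else" pattern (again in Python, again deliberately simplified) makes the point that the choice among of-PP, direct object, and genitive tracks the category of the head and the position of the dependent, not the semantic relation being expressed.

```python
# Illustrative sketch only: default realization of a dependent, keyed to the
# syntactic category of the head and the dependent's position, not to meaning.

def realize(head_category, dependent_position):
    """head_category: 'N', 'A', or 'V'; dependent_position: 'pre' or 'post'."""
    if head_category == "N" and dependent_position == "pre":
        return "genitive"                               # the city's destruction
    if head_category in ("N", "A"):
        return "prepositional phrase (of as default)"   # the destruction of the city
    if head_category == "V":
        return "direct object (accusative)"             # destroy the city
    raise ValueError("unexpected combination")

print(realize("N", "post"))   # (8a)
print(realize("V", "post"))   # (8b)
print(realize("N", "pre"))    # (8c)
```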
This is not to deny that the strong correlations between objects and nouns and between actions and verbs are necessary for the child to get grammatical acquisition off the ground. It's just that once the grammatical system gets going, it in part cuts loose of its semantic connections: a noun is now a kind of word that has grammatical gender and gets case-marked, a preposition is a kind of word that cannot take a subject but does permit a direct case-marked object, and so on.
This is also not to deny that much that has passed for syntax in the generative literature undoubtedly is a reflection of semantic regularities. My early arguments (1969) that quantifier scope is not represented directly in syntax hold against contemporary LF (logical form) in Government and Binding theory (Chomsky 1981) as well as they did against Lakoff's (1970 [1965]) proposals. I am in agreement with Deane (1991) and Erteschik-Shir and Lappin (1979) that many extraction constraints have a semantic basis (see Csuri [to appear] for related arguments). And I agree with Fauconnier (1984) and Van Hoek (1992) that much of the theory of anaphora does not belong in syntax (see Culicover and Jackendoff [1995]). As I said at the outset of this section, I am looking for the proper middle ground. And I should point out that I have given the very same list of syntactic phenomena to Chomskian syntacticians who accused me of wanting to do away with syntax.

3. Conceptual structure
A theorist who accepts the goal of accounting for speakers' knowledge, and who also claims that meaning deeply influences choice of syntactic structure, is thereby obliged to search for a theory of meaning that predicts the desired effects, where "desired effects" include not only judgments of grammaticality but such aspects of language use as judgments of inference, relevance to discourse, and relation to the perceived world. Much of Semantics and Cognition (1983) is devoted to an argument that there is no privileged level of "linguistic semantics" at which
specifically linguistic effects of meaning can be separated out from more general cognitive effects such as categorization and interpretation of deixis. Rather, all of these effects are localized in a level called conceptual structure. My position is stated very clearly in Semantics and Cognition, section 1.7 and chapter 6, and it recurs throughout the text. (See also Jackendoff [1981b, 1985].) In particular, chapter 8 emphasizes the notion of preference rule systems as part of lexical decomposition in conceptual structure—and their role in creating family resemblance phenomena.
Yet the deepest criticism leveled at me by both Deane and Taylor (this issue) is that conceptual structure allegedly contains only the information relevant to syntax, and that it does not include preference rules. (Deane: "What is at stake here is precisely Jackendoff's refusal to recognize the existence of family-resemblance structure within the lexicon.") The position they criticize is indeed advocated by others. For instance, Bierwisch (1986) argues that there must be a level of representation that specifically encodes those aspects of semantic information relevant to language; Pinker (1989) works out in detail a theory of such a level. But this position is not held by me; in fact, Bierwisch criticizes me for not holding it. (In turn, Jackendoff [1981b] criticizes Katz's [1972] view of "autonomous semantics", parallel to Bierwisch in some respects.) So why have Deane and Taylor arrived at such a misunderstanding of where I stand?
I believe the explanation lies in the relationship of Semantic Structures (1990) to Semantics and Cognition. A problem not resolved in Semantics and Cognition was how prototype images (or better, image schemata) can be incorporated into word meanings, and more generally, how visual inputs can be connected with conceptual structure so that it is possible for us to talk about what we see (to use Macnamara's [1978] phrasing of the problem). It was clear that no decomposition into algebraic primitives could do the trick: artificial intelligence-style features like [HAS-A-BACK] for chair are obviously too ad hoc. A potential solution arose upon deeper study of Marr's (1982) theory of vision—the idea that visual categories can be encoded in a level of 3D model structures, a viewpoint-independent geometric representation that includes part-whole structure and axis structure, and which can be parameterized to allow variation in shape and proportion among exemplars. It struck me that this was the proper sort of format to encode just the aspects of word meaning that are uncomfortable for standard feature decomposition—just the kind of work a prototype image is supposed to accomplish. Marr's theory was detailed and motivated on perceptual grounds, not just as a theory about pictures in the head invented by philosophers and linguists. (By the way, the 3D model is far from a "perceptual" structure, as Taylor suggests I intend; I have argued that it is one of the central systems of cognition.
In the overall architecture of the mind, it is "closer" to visual perception than conceptual structure and syntax are, but it is still deeply cognitive. See the next section.)
I have since explored some of the machinery necessary to augment Marr's 3D model for linguistic purposes, especially for encoding spatial relations and frames of reference (Landau and Jackendoff 1993; Jackendoff, to appear b). But unfortunately, this line of formal investigation within the vision community seems largely to have died with Marr, and it has been difficult to establish strong connections with subsequent work.
This aspect of meaning was not present in the Semantics and Cognition
theory; it entered my work in Consciousness and the Computational Mind.
With this innovation, it became necessary to ask how much of the work
of semantics is encoded in the algebraic format of conceptual structure,
and how much in the geometric format of the three-dimensional model.
That is a question still unresolved, in large part because there is no
substantial theory of the latter format.
In Semantic Structures, the work to which Deane and Taylor have paid the most attention, the connection with the visual system was not a focus of attention. Neither was the complex encyclopedic structure of lexical items that is involved in categorization, including prototypes, family resemblances, preference rules, and the like. The main focus, rather, was how conceptual structure plays a role in determining syntactic argument structure. As it turned out, for the phenomena I was looking at, the parts of conceptual structure relevant to syntax were fairly limited. I quote a passage from the end of Semantic Structures which Deane has quoted only in part:
The fact that almost every word meaning has fuzzy boundaries and various sorts of indeterminacies does not threaten a theory of lexical decomposition, despite frequent claims to the contrary. It is however necessary to develop a theory of decomposition in which such indeterminacies are part of the fabric of conceptual structure. The descriptive innovations suggested [very sketchily] in section 1.7 go a long way toward accounting for the variety of these phenomena.
On the other hand, such conceptual indeterminacies seem to play a relatively minor role in the relation between conceptual structure and syntax. That is, the correspondence rules between these two levels of representation make reference primarily (and perhaps exclusively) to those aspects of conceptual structure that are more or less discrete and digital. This is why the present work, concerned most directly with the syntax-semantics correspondence, has not made much reference to formal devices such as preference rules, graded conditions, and 3D model stereotypes. By contrast, S&C [Semantics and Cognition] and C&CM [Consciousness and the Computational Mind], which sought to establish the overall
texture of conceptual structure, had to discuss and motivate these innovations extensively. (Jackendoff 1990: 283-284)

The presuppositions of this passage make it clear that conceptual structure is taken to include all sorts of fuzzy, stereotype, and preference rule phenomena, but that the aspects of it that relate to syntactic argument and adjunct structure do not involve such phenomena. It is therefore perfectly clear what my position is, and that it is not what Deane and Taylor believe it is.
Now perhaps Deane and Taylor wish that I had continued to write more about stereotypes and fuzzy categories and the like, and that I had pushed that general line in Semantic Structures. It just so happens that this is not what I chose to write about. There may be an empirical problem: I may be wrong in believing that the semantics-syntax correspondence rules make little reference to preference rules, graded conditions, and the like. But that is not what they criticized me for: they criticized me for a view I explicitly did not hold.
A view that I do hold is that conceptual structure is the central representation that interacts most directly with language; that is, if there is a syntactic phenomenon dependent on meaning, that aspect of meaning must be present in conceptual structure, not just in the 3D model. This does not mean I believe that conceptual structure contains only semantic information that affects syntax, as Taylor claims I believe (turning my one-way implication into a two-way one). Nor do I think semantic theory should be concerned primarily or exclusively with the aspects of conceptual structure that determine syntax. It's just that this is what Semantic Structures happens to be about. You can't do everything all at once.5
Section 6.1 will discuss Taylor's analysis of run vs. jog, in which he claims the semantic difference between them makes a syntactic difference. His parting remark, however, requires comment here. As he correctly observes, I claim that, at the coarsest analysis in conceptual structure, the analysis relevant to syntax, these verbs share the feature of expressing locomotion along a path. His comment is, "On the encyclopedist view that I would advocate, ..., such 'features' would not be the building blocks out of which complex concepts are formed; the features merely represent the commonalities that speakers perceive amongst different kinds of entities." If locomotion is not a building block of complex concepts, what sort of thing is?
Perhaps Taylor thinks complex concepts are not made out of fundamental building blocks. But if so, then in what terms are complex concepts to be stated and analyzed? Worse, what terms do language learners have available out of which to construct complex concepts? Interestingly,
Fodor (1975), like Taylor, believes that even the simplest aspects of standard lexical decomposition are untenable. Following this assumption through to its logical conclusion, he shows that lexical concepts are unlearnable. He then puts his money where his mouth is, and embraces the (to me incredible) position that all lexical concepts, including exotica such as telephone and carburetor, are innate. If Taylor disagrees with Fodor's final position, he owes us a rebuttal of Fodor's learnability argument.
Alternatively, perhaps Taylor thinks complex concepts are built out of fundamental building blocks, but not the ones I (and everyone else) have suggested, including such psychologically plausible units as "self-initiated" and "locomotion" and "human being". In this case he owes us an account of these building blocks and how encyclopedic information is built out of them.
In defense of a theory with fundamental building blocks, it should be remembered that, in a representational theory, the only way speakers can perceive commonalities among entities is for the representations of those entities in speakers' minds to have common parts. That's the whole point of trying to formulate representations that capture linguistic and perceptual regularities. If run and jog have this perceived commonality, they must somewhere in their representations share a common element, which they share also with sprint, lope, and walk. I find it hard to imagine what Taylor's encyclopedist view is, if he denies this—though again it appears to verge on Fodor's (1983) view of cognition as being an impenetrable mystery (and perhaps Chomsky's too). Good heavens, as Jerry would say.

4. Notation
The notation I have adopted for conceptual structure is built up of conceptual constituents, notated as square brackets, containing complexes of features and functions. The particular way these are arranged on the page is not of too much import to me; it happens to be convenient to me to write them the way I write them. However, in every case where it is possible, I consider whether the features and functions I have chosen as primitives do indeed express the linguistically and conceptually significant generalizations in the data.
The constant issue of justification has led over the years to changes in the notation and to the set of postulated primitives. For instance, a major advance between Jackendoff (1976) and Jackendoff (1983) was the innovation of a path constituent in sentences of motion: instead of (9a) I proposed (9b).
(9) The train went from NY to LA.
a. [GO ([TRAIN], [NY], [LA])]
b. [Event GO ([TRAIN], [Path FROM ([NY]), TO ([LA])])]

This change makes it possible to extract the semantic roles of prepositional phrases, combining them freely with verbs of motion, extension, and orientation. Still, the function TO is treated as a primitive. In Jackendoff (1991b) this is further expanded as (10).
(10) [Path TO ([X])] is decomposed as
[Path 1d DIR BDBY ([X])], i.e., a one-dimensional piece of space that terminates at X
This brings out its similarity with the inchoative (State X comes about), which receives the analysis (11).
(11) [INCH ([Situation X])] is decomposed as
[Situation 1d DIR BDBY ([Situation X])], i.e., a one-dimensional directed situation that terminates at X
These reanalyses are part of a far-ranging treatment of the part-whole distinction and the treatment of boundaries; this treatment makes it possible, for instance, to formalize the parallelisms between the count-mass distinction and the event-process distinction, and to explain how the word end can be applied to entities as different as a trolley-car and a speech.
In Jackendoff (to appear a) the analysis goes a step further, proposing that the function GO, previously considered primitive, should be decomposed as in (12).
(12) [Situation GO ([Thing X], [Path P])]
('X traverses P over time period T')
is decomposed as a correlation that pairs each 0-dimensional point of the time period [Time T] with the 0-dimensional point of the 1d DIR path P that [Thing X] occupies at that moment.

The idea behind (12) is that, at each 0-dimensional point during the time
period T, X occupies a point of the path P, and this correlation determines
the part-structure of the overall event. Using this notation, it proves
possible to relate motion formally to extension (Talmy's [1994] "fictive
motion"), to covering and filling relations, to "incremental themes", and even to distributive quantification.6 Equally important, the notation permits a formal derivation of the correlation between boundedness of arguments and telicity of events (Verkuyl 1972, 1989; Dowty 1979; Tenny 1987, 1992; Krifka 1992; and many others).
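For readers who find a computational analogy helpful, the following Python sketch—an illustrative encoding of my own, not the notation itself—shows what the path constituent in (9b) buys: once paths are constituents built from path-functions like FROM and TO, the same path combines freely with verbs of motion, extension, or orientation.

```python
# Illustrative sketch only: conceptual constituents as nested tuples, in the
# spirit of (9b); the constructor names are conveniences, not the notation.

def place(x):               # [Place X]
    return ("Place", x)

def FROM(p):                # FROM ([Place X])
    return ("FROM", p)

def TO(p):                  # TO ([Place X])
    return ("TO", p)

def path(*fns):             # [Path F1, F2, ...]
    return ("Path", list(fns))

def GO(thing, a_path):      # [Event GO ([Thing X], [Path P])]
    return ("Event", "GO", ("Thing", thing), a_path)

def EXT(thing, a_path):     # illustrative stand-in for a verb of extension
    return ("Event", "EXT", ("Thing", thing), a_path)

# (9b) The train went from NY to LA.
route = path(FROM(place("NY")), TO(place("LA")))
print(GO("TRAIN", route))

# The same path constituent combines freely with extension:
# "The road goes from NY to LA."
print(EXT("ROAD", route))
```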
It should be clear from my practice, then, that I am committed to a formalism which captures lexical, conceptual, and inferential relationships. Deane, however, criticizes my formalism on the grounds that it obscures intuitive relationships, for instance that between with of theme, with of accompaniment, and with of instrumental. Assuming such intuitive relationships can be justified, I agree that this should be the responsibility of the formalism; if the formalism makes it impossible to express such relationships, then it must be changed. I am inclined to go along with Deane's intuitions on with (for other cases see the next section), but again, it's not possible to do everything at once, and other issues had higher priority in Semantic Structures. As can be seen from the revisions illustrated above, the proper changes are not always obvious, and they often involve rethinking major aspects of the system.
However, my impression is that Deane is not advocating just a revision of particular features and functions in my notation for conceptual structure. Rather, he wishes to replace the notation altogether with "image-based" representations, which "in most cognitivist analyses are most similar to an intermediate level of representation in Marr's (1982) analysis of visual perception, the '2½D sketch'". From his examples later on in the paper, I take these image-based analyses to be embodied in Langacker-style notation (for convenience, I'll call this L-notation). Let me examine this possibility.
First it is important to disabuse oneself of the idea that L-notation bears any resemblance to Marr's 2½D sketch. Marr develops the 2½D sketch as a bitmap of the visual field in which each point is annotated as to relative depth and orientation. It distinguishes surfaces, but not objects. Moreover, it is strictly dependent on the point of view of the viewer. Marr emphasizes that this level is not useful for object categorization, for it does not yet factor out the difference in the way objects look depending on their orientation and distance relative to the viewer. Nor does it yet factor out the differences in appearance among objects of the same category. These tasks are left to the 3D model structure, which includes a viewpoint-independent characterization of object shape and a degree of schematization, so that different objects may be compared.7
Now L-notation is highly schematic; it represents objects by squares and circles, for instance. Objects do not receive different representations depending on their position relative to the viewer. And in particular,
L-notation presumes the notion of object as basic—not the notion of an oriented surface. Thus, if anything, it is more like a 3D model than a 2½D sketch. Of course, it is written on paper, so it is two-dimensional. But it's the functional properties, not the way it is drawn on the page, that are relevant to the claims it makes.
Let's examine some standard elements of L-notation and see how well they fulfill the function of representing the visual field. Consider the standard use of the annotations tr and lm for trajector and landmark respectively. The visual field does not contain little marks tr and lm, so these must, if anything, be features associated with visual images. Next consider the standard practice of drawing a profiled entity with a heavier line than other entities. Figural objects in the visual field may be more vivid, but they don't have heavier outlines. Rather, the heavier line is a way of notating a feature of an entity among others in a configuration. Or consider the notation for the plural: five or six small circles contained in a big circle. Seventeen dogs don't look like five or six circles in a big circle—or like five or six dogs in a big circle, or like one dog connected by a dotted line to a small circle that is one of five within a big circle. Rather, the notation for the plural is essentially symbolic, not iconic.
A process involving a trajector doesn't look like a circle marked tr connected to a wavy line terminating in an arrow (Langacker 1991: 112)—it looks like a boy dancing or water dripping or a tree growing or one of a zillion other things. Moreover, a process does not look like the meaning of any, which is also symbolized by a squiggly arrow (Langacker 1991: 139). The "mental contact and other pivotal aspects of the expect relationship" do not look like a dashed arrow (Langacker 1991: 461). And the difference between destroy and destruction does not look like the difference between a broken-line ellipse and a heavy ellipse (Langacker 1991: 24-25).
Consider also Deane's treatment of rise, fall, on, and onto. In his diagrams, the notation TR is situated in some position relative to a line, which is close to the notation LM. Nothing says that TR is supposed to be in contact with the line (as opposed to merely near it); nothing tells us that by contrast LM is not supposed to be near the line, it is supposed to be the line. If you know what the diagram is supposed to mean, it is sort of iconic, so it appeals to common sense in a way that the austerity of features and functions does not. But at bottom it is no more explicit, and no more psychologically real, than feature and function notation.
In short, although in some instances L-notation gives the impression of being visually iconic, it is deeply symbolic. I have no objection in principle to using circles, squares, and arrows instead of square brackets,
parentheses, and functions. We should just be very clear about their status.8

5. Polysemy
Deane repeatedly takes me to task on my treatment of polysemy. When confronted with apparently different senses of a lexical item which bear some intuitive relationship, his preference seems to be that all these senses should follow from a single stipulated prototype, through general principles of extension that create family resemblance structures. (This position is similar to Ruhl's [1989], who, however, is willing to posit an underlying abstract sense rather than sticking to a concrete prototype.) I agree in principle. However, this preference in many cases comes into conflict with the need to account for the language user's ability to get all the details of syntax and semantics right for each sense. Let me lay out some cases, mostly not new, that illustrate the complexity of the problem.
First, some baseline judgments: gooseberry and strawberry have nothing to do with geese and straw. Yet given the chance to create folk etymologies, people will invent a story that relates them. Overcoming the strong intuition that there must be a relation is not easy. And as lexical semanticists we must be careful not to succumb to the temptation to insist on closer relations than there really are. How can you tell when to give up and call some putative explanation a "gooseberry etymology"? I don't know in general; I only know that it is a possible last resort.
Consider the extreme, uncontroversial cases. I think everyone agrees that the bank in river bank and savings bank represent separate, homonymous concepts, not a single polysemous concept. On the other end, I think everyone agrees that the italicized phrases in (13) derive their reference by a general process: no one thinks that the mental lexicon lists ham sandwich as potentially referring to a person or Lakoff as referring to a book or John as referring to a car.
(13) a. [One waitress to another:]
The ham sandwich in the corner wants some more coffee.
b. Plato is on the top shelf next to Lakoff.
c. John got a dent in his left fender.
It's the cases between these two extremes that are of interest. One that I find uncontroversial is the sort of multistep chaining of association found in cardinal, described in I can't remember what source. It seems that the earliest meaning is 'principal', as in cardinal points of the compass and cardinal virtues. From there it extended to the high-ranking church official, then to the traditional color of his robes, then to the bird of that
color. All of these senses are still present in English, but I wouldn't want to say they constitute a single polysemous word—especially the first and last member. Frankly, I have sort of the same feeling about Lakoff's (1987: 104-109) analysis of the Japanese classifier hon: there is a motivated route from candles to Zen contests and to TV programs, but I find it hard to think of them as constituting a unified category. The reason, I think, is that the basis for connection changes en route, so that one cannot trace the connection without knowing the intermediate steps. So what should we say about these cases? They are more closely related than plain homonyms, but not closely related enough to be said to form a unified concept. I'll call them "opaquely chained concepts".
Moving again toward the other end of the spectrum, we find a quite different situation in the related senses of open in The door is open, The door will open, and John will open the door, which are related by highly productive lexical relations, encoded in my notation as in (14):
(14) a. openA = [Property OPEN]
b. X openV = [Sit INCH ([Sit BE ([X], [Property OPEN])])]
c. Y openV X = [Sit CAUSE ([Y], [Sit INCH ([Sit BE ([X], [Property OPEN])])])]

This chaining, unlike the chaining of cardinal, is semantically transparent, in that the original concept is contained in both the derivative concepts. This sort of transparent chaining is a standard case of polysemy.
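The containment relation that makes this chaining transparent can be made concrete in a small Python sketch (the constructors are illustrative stand-ins for the notation in (14), nothing more): the core concept OPEN is literally a subpart of both derived structures.

```python
# Illustrative sketch only: the transparent chaining in (14),
# with conceptual structures modeled as nested tuples.

def BE(x, prop):        return ("Sit", "BE", x, prop)
def INCH(sit):          return ("Sit", "INCH", sit)
def CAUSE(agent, sit):  return ("Sit", "CAUSE", agent, sit)

OPEN = ("Property", "OPEN")          # (14a) the adjectival core

def open_intrans(x):                 # (14b) "The door will open"
    return INCH(BE(x, OPEN))

def open_trans(y, x):                # (14c) "John will open the door"
    return CAUSE(y, open_intrans(x))

def contains(structure, part):
    """True if `part` occurs anywhere inside the nested structure."""
    if structure == part:
        return True
    return isinstance(structure, tuple) and any(contains(s, part) for s in structure)

door, john = ("Thing", "DOOR"), ("Thing", "JOHN")
assert contains(open_intrans(door), OPEN)                      # core contained in (14b)
assert contains(open_trans(john, door), open_intrans(door))    # (14b) contained in (14c)
print(open_trans(john, door))
```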
One might instead, following Deane's (apparent) preference, say that these three uses are not separate senses that must be learned individually, but rather that they form a single sense that can be extended from a simple core prototype (14a). This treatment runs into problems, though, when we start dealing with long branching chains. An example is the word smoke. The core is presumably the noun smoke, the wispy substance that comes out of fires. From this we get at least the senses in (15).
(15) a. asmokeN = "wispy substance"
b. X bsmokeV = "X gives off asmoke"
c. Y csmokeV X = "Y causes X to bsmoke, where X is a pipe, cigar, etc., by putting in the mouth and puffing, taking asmoke into the mouth, etc."
d. Y dsmokeV = "Y csmokes something"
e. Y esmokeV X = "Y causes asmoke to go into X, where X is a ham, herring, etc., by hanging X over a fire in an enclosure"
Although all the steps in (15) are transparent, I would be reluctant to say that these five uses together form a single concept for which asmoke is the prototype. In particular, there is really no relation between asmoke
and esmoke other than the presence of asmoke as a character in the action. One reason for this is that the steps of the chain each add idiosyncratic information. For instance, you can't csmoke a herring by setting it on fire and putting it in your mouth and puffing, and you can't esmoke a cigar by putting it in a smoky room.
Because of these multiple branches and idiosyncrasies, it makes more sense to say there are five senses arranged in two transparent chains branching outward from asmoke (a-b-c-d and a-e). They are not any more closely related to each other than they are to, say, smoky and smoker (the latter meaning either a person who csmokes or a vessel in which one esmokes things). And in a morphologically richer language than English they might not be phonologically identical (say outsmoke for [15b] and ensmoke for [15e]). In other words, even if all these senses are related, the language user must learn them individually. This treatment is not unlike Lakoff's (1987) approach to over.
A check on the validity of such putative relationships is the extent to
which they occur in other lexical items. Here are the linkages for smoke,
listing some other words that show the same relationships among their
senses:
(16) asmoke → bsmoke (V = "give off N"):
         steam, sweat, smell, flower (not very productive)
     bsmoke → csmoke (V = "cause to V"):
         open, break, roll, freeze (quite productive)
     csmoke → dsmoke (V = "V something"):
         eat, drink, read, write (rather productive)
     asmoke → esmoke (V = "put N into/onto something"):
         paint, butter, water, powder, steam (rather productive)
Notice that each relationship affects a different group of verbs. Each
pairwise relationship is semiproductive, but the entire branching structure
of senses for smoke is unique. This means that no general process of
extension from a prototype can predict all the linkages—they're different
for every word.
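For readers who like to see the bookkeeping spelled out, here is a minimal sketch of what "listing the senses individually, linked by semiproductive relations" amounts to. It is written in Python purely for illustration; the Sense data structure, its field names, and the chains() helper are invented for exposition and are not part of the conceptual semantics formalism.

# Illustrative sketch only: a lexicon in which each sense of "smoke" is
# listed individually, with an explicit link to the sense it derives from
# and the idiosyncratic information added at that step.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Sense:
    name: str                            # e.g. "a_smoke"
    category: str                        # N or V
    gloss: str
    derived_from: Optional[str] = None   # link in the chain, if any
    link_type: Optional[str] = None      # the semiproductive relation used
    idiosyncrasy: Optional[str] = None   # what must be learned at this step

SMOKE = [
    Sense("a_smoke", "N", "wispy substance given off by fires"),
    Sense("b_smoke", "V", "X gives off a_smoke",
          derived_from="a_smoke", link_type="give off N"),
    Sense("c_smoke", "V", "Y causes X to b_smoke by puffing on it",
          derived_from="b_smoke", link_type="cause to V",
          idiosyncrasy="X must be a pipe, cigar, etc."),
    Sense("d_smoke", "V", "Y c_smokes something",
          derived_from="c_smoke", link_type="V something"),
    Sense("e_smoke", "V", "Y causes a_smoke to go into X over a fire",
          derived_from="a_smoke", link_type="put N into/onto something",
          idiosyncrasy="X must be a ham, herring, etc."),
]

def chains(senses):
    """Trace each sense back to the prototype; the branches a-b-c-d and a-e emerge."""
    by_name = {s.name: s for s in senses}
    for s in senses:
        path, cur = [s.name], s
        while cur.derived_from:
            cur = by_name[cur.derived_from]
            path.append(cur.name)
        print(" -> ".join(reversed(path)))

chains(SMOKE)

The point of the sketch is simply that the lexicon records each link and each bit of idiosyncrasy explicitly; nothing beyond the shared prototype is generated by a general extension process.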
I am more suspicious of claiming such a linking relationship among
senses when it is impossible to find other lexical items that participate in
the same relationship. An example is the two senses of into discussed
by Deane:
(17) a. ainto X = "path terminating in the interior of X"
(run into the room)
b. binto X = "path terminating in violent contact with X"
(run into the wall)

I am happy to treat these senses as related only in that they both express
termination (the -to portion) (Jackendoff 1990: 108). In my analysis, the
morpheme in in binto is like the goose in gooseberry; it has nothing to do
with normal in. Deane asks for more: he wishes to analyze binto as "path
which would have carried an object to the inside of X if the boundary
had not impeded it". Note first that there is something not quite right
about this analysis, since you can crash into a mirror, even though you
could never fit inside it. But what is more troubling is that there are no
other prepositions that have this counterfactual sense. For instance, you
cannot crash under a table by landing violently on top of it—though you
would have ended up under the table if its boundary had not impeded
your motion. Consequently, there is nothing to gain by extracting out
this relation between the two intos as a subregularity. Moreover, there is
no general process of extension of a prototype that will get to binto from
ainto, or that will get both from a more abstract sense they have in
common.
A somewhat similar though perhaps more controversial case concerns
some of the senses of over discussed by Lakoff (1987).
(18) a. aover X = "location in an upward direction from X"
         (The cloud is over the house)
     b. bover X = "path passing through location in upward direction
         from X"
         (The plane flew over the house)
     c. cover X = "location at end of path passing through location
         in upward direction from X"
         (Sausalito is over the bridge)
     d. dover = "path of rotating motion in a vertical plane"
         (Turn over the page, The pole fell over)
Abstracting away from many details of contact vs. noncontact and cover-
ing vs. noncovering, to some of which we return presently, I want to
focus on the relation between dover, the "reflexive sense", and the rest.
The other English prepositions that have such reflexive senses are around
(turn around), out (spread out), and perhaps in (gather in).9 Now, the
analysis suggested by Lakoff is that in dover, one part of the object moves
bover another part. Although this explication may work for turn over the
page, it does not generalize to the pole fell over, which shares only the
rotating motion—so Lakoff must posit yet another sense. More tellingly,
this relationship applies only to these four prepositions, so one wonders
whether it qualifies as a subregularity. It's better than the case of into,
but just barely. It's sort of like the marginal subregularity that produces
the six past tense forms bought, brought, caught, fought, sought, and

taught from wildly different present tense forms. Consequently, again
there is no general process of extension from a prototype that will
automatically derive reflexive over from the other senses. It has to be listed
as a separate sense.
In the cases of polysemy discussed so far, one sense is clearly derived
from another, and the source appears as a core inside the more highly
elaborated derivative. A different sort of case arises when two senses are
of approximately equal complexity, but differ by a feature, in particular
a feature that designates semantic field. Here are some standard examples.
(19) (Spatial location and motion)
a. The messenger is in Istanbul.
b. The messenger went from Paris to Istanbul.
c. The gang kept the messenger in Istanbul.
(20) (Possession)
a. The money is Fred's.
b. The inheritance finally went to Fred.
c. Fred kept the money.
(21) (Ascription of properties)
a. The light is red.
b. The light went/changed from green to red.
c. The cop kept the light red.
(22) (Scheduling activities)
a. The meeting is on Monday.
b. The meeting was changed from Tuesday to Monday.
c. The chairman kept the meeting on Monday.
As has been remarked at least as far back as Gruber (1965), the basic
conceptual algebra underlying these four fields (as well as others) is
essentially the same. The question I want to ask, though, is how the
lexicon is organized so that the italicized words in (19)-(22) can cross
field boundaries. Does keep, for instance, have a single sense that is
neutral as to what field it appears in? Or does it have four related senses?
I am inclined to take the latter route, because of the syntactic and
lexical peculiarities that appear as one crosses from one field to another.
For instance, in three out of the four fields above, keep requires a direct
object and a further complement; but in the possessive field, keep takes
only a single argument, the direct object. Go appears in three of the four
fields, but we cannot say *The meeting went from Tuesday to Monday in
the scheduling sense—we can only say was changed/moved. Change, in
turn, only appears with the standard syntax in ascription and scheduling;
in the spatial field we have to say change position/location and in the
possessional field we have to say change hands. We also have to know

that in English we say on Monday rather than in Monday, the latter
certainly a logical possibility. All these little details have to be learned;
they cannot be part of the general mapping that relates these fields to
each other. This means that each word must specify in which fields it
appears and what peculiarities it has in each. If this be polysemy, so be it.
I have described these as senses related by feature variation. This differs
from the standard cognitive linguistics claim that (20)-(22) are derived
from (19), the spatial field, by metaphor or image-schema transformation.
The two views are compared in (23).
(23) a. (Cognitive linguistics)
         to in (19) = TO        to in (20) = FPoss(TO)
         where TO is a spatial path-schema, and F is a function that
         maps the field of spatial images into possessional images.
     b. (Conceptual semantics)
         to in (19) = TOSpatial        to in (20) = TOPoss
         where TO is a path function that is field neutral, and the
         subscripts specialize it for a field.
That is, cognitive linguistics tends to view cross-field parallelisms as
derivational, while I view them as parallel instantiations of a more
abstract schema (Jackendoff 1976, 1983: chapter 10, 1992a; Jackendoff
and Aaron 1991; Fauconnier and Turner 1994 propose a somewhat
similar view as a "generic space"). In the former approach, the polysemy
of keep is another multiply branching chain, sort of like smoke; in the
latter, the polysemy is the result of a feature variation, with no fully
specified sense as core.
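The contrast in (23) can be made concrete with a small sketch. The Python below is purely illustrative (neither framework uses such notation); the names FIELDS, to_path, spatial_to, and map_to_field are invented for exposition. It shows that the two routes deliver the same structure for a possessional to, and differ only in whether the spatial instance is treated as the privileged source.

# Illustrative contrast only; fields to which the path-function TO may apply.
FIELDS = ["Spatial", "Possessional", "Ascriptional", "Scheduling"]

# (23b)-style view: TO is field-neutral; a feature picks the field.
def to_path(theme, goal, field):
    assert field in FIELDS
    return {"function": "TO", "field": field, "theme": theme, "goal": goal}

# (23a)-style view: the spatial schema is basic; other fields are obtained
# by a mapping (metaphor / image-schema transformation), here F_Poss and kin.
def spatial_to(theme, goal):
    return {"function": "TO", "field": "Spatial", "theme": theme, "goal": goal}

def map_to_field(schema, field):
    mapped = dict(schema)
    mapped["field"] = field
    return mapped

# Both routes yield the same structure for "The inheritance went to Fred";
# they differ in whether the spatial instance is privileged as the source.
direct = to_path("inheritance", "Fred", "Possessional")
derived = map_to_field(spatial_to("inheritance", "Fred"), "Possessional")
assert direct == derived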
A final case is quite different. It is exemplified by Deane's analysis of
over, which I will try to simplify a little here. The basic story is that there
seem to be two distinct spatial relations expressed by over in (24).
(24) a. The blimp is over the field.
(Vertical separation: note that blimp cannot be in contact
with field.)
b. He laid a cloth over the table.
(Two-dimensional covering relationship: note that cloth is in
contact with table.)
These senses seem to be mutually exclusive: the blimp, which does not
cover the field, must be vertically separated from it; the cloth, which does
cover the table, need not be vertically separated from it. Yet both senses
are satisfied by (25), in which the awning is both vertically separated
and (potentially) covering the patio.

(25) The awning hangs over the patio.


Is (25) thereby ambiguous between the two senses of over? We sense it
is not.10
Deane asserts that my notation forces me to keep these conditions as
separate senses, resulting in a spurious homonymy. But in fact, this sort
of situation is precisely the one that motivates preference rule systems
(Jackendoff 1983 [Semantics and Cognition]: 135-137, 149-151; 1990
[Semantic Structures]: 35-36). In such situations, there are two or more
conditions on a categorization judgment, neither of which is necessary,
but either of which is sufficient. These conditions together form a prefer-
ence rule system. The prototype of the category is an instance that satisfies
all (or the maximal possible set) of the conditions; instances satisfying
fewer conditions are more marginal. In this account, the prototype is an
epiphenomenon of the interaction of the preference rules. Moreover, the
possible deviations from the prototype are not produced by a general
concept-independent process of concept extension; rather they are speci-
fied by the item-particular conditions.
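As a purely illustrative sketch of how a preference rule system behaves (the Python below is an expository device, not the notation of Semantics and Cognition, and the scene features are invented), consider two conditions on spatial over, neither necessary and either sufficient:

# Illustrative sketch of a preference rule system for spatial "over":
# two conditions, neither necessary, either sufficient; an instance that
# satisfies both is prototypical, one that satisfies neither is not "over".

def vertically_separated(scene):
    return scene.get("trajector_above_landmark") and not scene.get("contact")

def covering(scene):
    return scene.get("trajector_covers_landmark")

PREFERENCE_RULES = [vertically_separated, covering]

def judge_over(scene):
    satisfied = [rule.__name__ for rule in PREFERENCE_RULES if rule(scene)]
    if not satisfied:
        return "not 'over'"
    if len(satisfied) == len(PREFERENCE_RULES):
        return "'over' (prototypical; satisfies all conditions)"
    return f"'over' (non-prototypical; satisfies {satisfied})"

blimp  = {"trajector_above_landmark": True, "contact": False, "trajector_covers_landmark": False}
cloth  = {"trajector_above_landmark": True, "contact": True,  "trajector_covers_landmark": True}
awning = {"trajector_above_landmark": True, "contact": False, "trajector_covers_landmark": True}

for name, scene in [("blimp", blimp), ("cloth", cloth), ("awning", awning)]:
    print(name, "->", judge_over(scene))

On this sketch, (24a) and (24b) each satisfy one condition, while (25) satisfies both and so comes out as the prototype rather than as ambiguous.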
Deane is correct that I have not fully integrated preference rule systems,
stressed in Semantics and Cognition, with the argument structure systems,
stressed in Semantic Structures. That does indeed remain for future
research. I hope I'm not expected to be the only one undertaking it.
Traditional terminology does not give us a useful term to describe this
situation. Over is not polysemous, since (25) is not ambiguous. But it is
not exactly monosemous either, since it describes mutually exclusive
situations in (24a) and (24b), which branch in different directions from
the prototype. Maybe we should call it "semipolysemous". Whatever.
A brief digression: if spatial relations are encoded by image-schemata
such as L-notation, how are we to encode a semipolysemous word such
as over? There is no single picture that can stand for all the cases. A
single picture that shows a two-dimensional trajector covering and verti-
cally separated from a two-dimensional landmark will not tell us which
conditions are significant, nor which can be dropped when. Alternatively,
one might (and Deane does) propose two or more pictures, one of which
shows a one-dimensional trajector vertically separated from the land-
mark, the other of which shows a two-dimensional covering relationship
(and how does it show that vertical separation is immaterial?). But then
(25), satisfying both pictures, is ambiguous. The best that can be done
with images, I think, is to connect the two pictures in a preference rule
system, exactly parallel to my approach.
Returning to the main issue of this section, I've gone through a number
of different sorts of cases where questions of polysemy arise. My assess-

ments are not specific to conceptual semantics; it seems to me that any
reasonable theory of the lexicon will have to make these distinctions. In
fact, parallel machinery is invoked in conceptual semantics and cognitive
linguistics. There is sometimes a tendency in cognitive linguistics, though,
to assume that much of this machinery will be rendered unnecessary by
assuming a general process of extension from a prototype. I have tried
to show that, in the cases discussed here, such a strategy won't work.
This is not to say that there aren't any general processes of concept
extension in language. In fact, there are without question general pro-
cesses such as metonymy, grinding (Goldberg [this issue]), metaphor,
and the "reference transfers" illustrated in (13), which apply productively
to any old conceptually appropriate expression, so that the derived mean-
ings do not have to be listed in the lexicon. However, I've taken pains to
show that the cases discussed in this section are not like this: we still
need lexical polysemy (and semipolysemy) of various sorts.

6. Ending
I have tried to use the commentaries in this issue as a vehicle to raise
questions about the relationship of conceptual semantics to cognitive
linguistics. I have tried to show that in many respects the differences are
not as great as the commentators have suggested, often turning only on
the question of what phenomena one chooses to work on. In the respects
where the approaches genuinely diverge, I have tried to show the basis
behind my choices. I hope this can lead to a richer and more productive
dialogue between the two schools of thought, with mutual benefit.

7. Appendix: Two specific analyses


The commentaries present a number of extended criticisms of my analysis
of specific phenomena. I have put off for this appendix a discussion of
two of these, Taylor's treatment of run versus jog and Deane's discussion
of force dynamics.

7.1. Running and jogging


The difference between running and jogging, which is treated extremely
sketchily in two paragraphs of Semantic Structures (1990: 34), forms the
main subject of Taylor's commentary. Upon rereading, I can see that
these paragraphs, if taken out of context, may be interpreted as Taylor
takes them. I'm sorry. Of course he is right, and all of the extra stuff
beyond the perceptual-motor differences must be represented in the mean-
ing of these words. But how? In what format? I don't have a formalizable

theory of these aspects of meaning yet, and I don't consider English-language
descriptions of Taylor's sort an adequate theory. At the moment,
the most promising approach seems to me to be Pustejovsky's (1991)
notion of the "generative lexicon", in which words contain "qualia struc-
ture" that, in nouns, pertains to function, origin, construction, and so
forth, and in verbs might pertain to manner, result, intention, and so
forth. But Pustejovsky hasn't yet treated verbs.
Now, is there actually a syntactic difference between run and jog, as
Taylor claims? I assume the standard position (Chomsky 1957) that
Colorless green ideas sleep furiously is a conceptual violation but not a
syntactic one. In this light, it can be seen that most of the differences
Taylor cites between run and jog are not syntactic. For instance, the
selectional restriction that people can jog but (say) cats can't is a concep-
tual condition in my framework (see discussion of selectional restrictions
in Semantic Structures, section 2.3). Similarly, the fact that running may
be competitive but jogging usually is not is conceptual; and the fact that
against X and neck and neck imply competition is conceptual. Hence, Bill
jogged against Harry and Bill and Harry jogged neck and neck are syntacti-
cally perfectly well formed; their oddness is conceptual, not syntactic. In
other examples, such as He jogged a mile in ten minutes, he jogged into
the road, the children have been jogging around all morning, I simply don't
see the anomaly that Taylor sees, and certainly not ungrammaticality.
The purpose clause in he jogged to catch the bus may be pragmatically
odd, but he jogged to get some exercise is fine.
I did overlook one case that does show a syntactic difference, namely
run/*jog a race. A preliminary exploration suggests that this transitive
use of run is highly limited: I can't find any objects to substitute other
than names of races, e.g., run the NCAA championship 100-meters.
Although sprinting is often competitive, I do not find sprint a race
acceptable; and although sprint the NCAA championship 100-meters is
perhaps all right, The NCAA championship 100-meters was sprinted today
is horrible. To my ear, fly and swim yield results parallel to sprint (with
appropriate kinds of races substituted). The next closest construction
might be something like skate a competition or play an audition (but not
*perform an audition). In short, there is indeed a transitive frame in which
the verb names an activity and the object names a competitive event in
which one performs that activity, so there is some semantic regularity.
On the other hand, verbs vary as to how comfortable they are in the
construction, even when the pragmatics are perfect, so a complete account
of this possibility evidently involves some raw syntactic stipulation on
the part of individual verbs. For example, you certainly cannot *compete

a race, which would be expected to be all right if the transitivity followed
from the sense of competition.
Of course, there is the use of run for machines and computer programs
and businesses, in which it can be either transitive or intransitive. In this
use, run is clearly not a verb of locomotion, and nobody expects it to
fall in with jog. But my guess is that you can't get this sense out of a
prototype that also includes running down the street—any more than a
jog in the road has much to do with jogging. There are also the uses of
run in fixed expressions such as run up against a problem and run off with
another man: these have to be learned as units, and the verb, while in a
motivated relationship to the verb of locomotion, is not referring to
running per se any more. Note, however, that the syntax is the ordinary
syntax of the verb.
The upshot, I think, is that nothing Taylor has offered presents a
serious challenge for my view, especially for the view I actually hold
rather than the one he has attributed to me.

7.2. Force dynamics


Semantic Structures takes up and develops Talmy's (1985 [1988]) broad-
based account of force dynamics. In the course of adapting it to the
conceptual semantics formalization, I found it of interest to reconfigure
the basic notions of Talmy's system, while preserving the same essential
generalizations. I assumed that most readers of Semantic Structures would
not be too concerned with a detailed comparison. However, the audience
of the present article may find it interesting. Example (26) summarizes
the fundamental parameters of the two systems.
(26) a. Talmy:
i. Distinction between two opposed force entities:
Agonist and Antagonist
ii. Intrinsic force tendency of Agonist:
toward action or toward rest
iii. Balance of strengths:
Agonist stronger or Antagonist stronger
iv. Resultant of force interaction:
Agonist action or Agonist rest
b. Jackendoff:
i. Distinction between two opposed force entities:
Antagonist (=Agent) and Agonist (=Patient)
ii. Patient action desired by Antagonist11
iii. Success of Antagonist:
+ (success) vs. — (failure) vs. u (indeterminate)

Let's see how these are equivalent. First, compare (26a.iii) to (26b.iii).
If the Agonist is stronger, then the Antagonist fails; if the Antagonist is
stronger, then the Antagonist succeeds; if the relative strengths are inde-
terminate, then the Antagonist's success is indeterminate. So these distinc-
tions carry the same information. Next, let us convert (26a.ii) to the
point of view of the Antagonist. Whatever the Agonist's tendency is, the
Antagonist's is the opposite; thus (26a.ii') carries the same information
as (26a.ii).
(26) a. ii.' Intrinsic force tendency of Antagonist on Agonist:
toward Agonist's rest or toward Agonist's action
So suppose the Agonist is trying to rest; then the Antagonist must be
trying to get the Agonist to move. Now if the Antagonist is stronger, the
result will be that the Agonist moves; if the Antagonist is weaker, the
Agonist will rest. Alternatively, suppose that the Agonist is trying to
move; then the Antagonist must be trying to get the Agonist to rest. If
it happens that the Antagonist is stronger, the Agonist will rest; if the
Antagonist is weaker, the Agonist will move. This whole set of circum-
stances can be simplified: If the Antagonist is stronger, the Agonist does
what the Antagonist wants; if the Antagonist is weaker, the Agonist does
something else. In other words, we can predict the outcome by specifying
what the Antagonist wants and whether or not the Antagonist is success-
ful. Thus (26a) and (26b) make the same distinctions, and in
fact Talmy's fourth degree of freedom (26a.iv) can be dropped out of
the equation.
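The equivalence argument can be restated as a short sketch. The Python below is purely illustrative (the function names and value labels are invented for exposition): it converts Talmy's parameters (Agonist tendency, balance of strengths) into the Antagonist-centered parameters of (26b), and then recovers Talmy's resultant from those alone, which is why (26a.iv) adds no information.

# Illustrative sketch: translating Talmy's parameters (26a) into the
# Antagonist-centered parameters (26b) and recovering the resultant.

def to_antagonist_view(agonist_tendency, stronger):
    """agonist_tendency: 'action' or 'rest'; stronger: 'Agonist', 'Antagonist', or None."""
    # (26a.ii'): the Antagonist wants the opposite of the Agonist's tendency.
    antagonist_wants = "rest" if agonist_tendency == "action" else "action"
    # (26b.iii): the Antagonist succeeds iff it is the stronger of the two.
    success = {"Antagonist": "+", "Agonist": "-", None: "u"}[stronger]
    return antagonist_wants, success

def resultant(antagonist_wants, success):
    """Talmy's (26a.iv), now derivable rather than stipulated."""
    if success == "u":
        return "indeterminate"
    if success == "+":
        return f"Agonist {antagonist_wants}"
    # Antagonist fails: the Agonist does the opposite of what the Antagonist wants.
    return "Agonist action" if antagonist_wants == "rest" else "Agonist rest"

# e.g. Agonist tends toward rest, Antagonist is stronger:
wants, ok = to_antagonist_view("rest", "Antagonist")
print(wants, ok, "->", resultant(wants, ok))   # action + -> Agonist action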
The reason I adopted (26b) as the basic formulation is that it conforms
better to the syntax of force-dynamic verbs. First of all, there is no
syntactic distinction based on whether the Agonist wants to rest or move:
this comes out of the content of the complement of the verb. Second,
the complement might stipulate any of the possibilities in (27).
(27) a. What the Agonist wishes to do (=[26a.ii])
b. What the Antagonist wishes to do
c. What the Antagonist wishes the Agonist to do (=[26b.ii])
d. What the Agonist wishes the Antagonist to do
In fact, the syntax of force-dynamic verbs looks as though the comple-
ment most closely expresses (27c). For example, in John forced Harry
to leave the room, to leave the room is what John wanted Harry to do,
not what Harry wanted to do. Alternatively, to leave the room could be
the result of the force-dynamic event rather than the action desired by
Harry. But in an event like John pressured Harry to leave the room, where
we don't know the result, the complement still expresses what John

wanted Harry to do. So for generality, I take the complement to
express (27c).
Moreover, in force-dynamic verbs like prevent and keep, we have
complements with from, as in John prevented/kept Harry from leaving the
room. Following Gruber (1965), I assume this is the negative from found
in the spatial field in expressions like stay away from the cookie jar
(related to but not the same as the from of Source). Then the form of
the complement correctly predicts that what John wants is for Harry not
to leave the room.
In other words, analysis (26b) has degrees of freedom equivalent to
those in (26a), but it rearranges them in such a way that they conform
to the syntactic expression of force-dynamic verbs with VP complements.
I take this to be an advantage of my analysis over Talmy's that at the
same time preserves his basic insight. (Further changes, involving the
analysis of helping and letting, go beyond what I want to deal with here.)
Now let us see what Deane says about my analysis. First, he objects
that I have couched it in terms of features and functions, whereas Talmy's
theory is "implicitly analog", a "kinesthetic representation". In fact,
Talmy's notation is just more L-notation. It encodes no analogue varia-
tions in direction or quantity of force. It consists of a lot of diacritics
positioned about a few shapes; the diacritics are defined essentially as
(26a). Hence Talmy's representations are as symbolic as mine.
No doubt 3D models and body representations contain force vectors
with analogue variations in direction and quantity (as I have suggested
in my discussions of these representations, cited above). But as far as
language is concerned, Talmy's set of diacritics—or mine—seems to be
sufficient. This is just the sort of thing one would expect in my level of
conceptual structure, which levels out many of the distinctions of direc-
tion and quantity found in the 3D model structure. And it is just äs
Talmy's theory of grammaticalization predicts.
Deane then criticizes me on the grounds that my account does not
motivate the use of the prepositions to and from. In fact, as I have just
shown, my account, not Talmy's, is the one that allows the use of the
prepositions to be semantically motivated. Notice that in all of Deane's
examples that involve both an Agonist and an Antagonist, the comple-
ment indicates the Antagonist's desired outcome. (Examples [28d-g] are
causatives corresponding to Deane's noncausative examples [65]-[66];
the noncausative cases are not strictly relevant to the argument.)
(28) a. Harry talked Sam into going away. (=[66a])
(Sam's tendency is to stay)

b. Harry talked Sam out of going away. (=[66b])


(Sam's tendency is to go away)
c. I eased/tricked/lured him into accepting the status quo. (=[69])
(His tendency is not to accept the status quo)
d. Those events incline Sam toward going away.
(Sam's natural inclination is to stay)
e. Those events incline Sam against going away.
(Sam's natural inclination is to go away)
f. Hard times turned Sam to robbery.
(Sam's natural inclination was to be honest)
g. Maturity turned Sam away from the crimes of his youth.
(Sam's natural inclination was to be criminal)
To be sure, my account does not go beyond saying that to and into
are positive and from and against are negative; Deane is correct to point
out a number of further subtleties. But that does not undermine the basic
integrity of my analysis. There is plenty of room to work in a more
detailed treatment of the prepositions that brings them closer to spatial
prepositions, for example treating them as circumstantial path-functions
(Jackendoff 1983: chapter 10).
Notice, though, that the range of such prepositions is pretty limited:
there are no complements with onto, up, down, forward, between, and so
forth. Rather, as observed above, the direction of force is boiled down
simply to opposition; it has no other direction. So one does not want to
go too far in invoking spatial metaphors, nor in claiming that force-
dynamic analysis directly represents force in space. As often happens,
it's getting late, and I leave the refinements for my good friend, Future
Research.

Received 17 January 1995 Brandeis University

Notes
1. I am grateful to Ludmila Lescheva for a great deal of discussion useful in the prepara-
tion of this paper, and to Paul Deane, Adele Goldberg, and Ron Langacker for
clarifications that influenced its final form. This is not to say that any of them necessar-
ily agrees with me in the end.
This research was supported in part by NSF Grant IRI 92-13849 to Brandeis
University, and in part by Keck Foundation funding of the Brandeis Center for
Complex Systems.
2. Invoking Chomsky this early in the paper will no doubt raise lots of red flags. I do not
want to address the very real sociological reasons for which the very mention of the
name provokes widespread apoplexy—and not only in cognitive linguistics. I do want

to address what I think are his most important contributions to linguistics and the
mental sciences, and to my thinking in particular. I hope readers can overcome what-
ever automatic aversive reactions they might have, long enough to pay attention to the
intellectual content.
3. And, I suspect, from his implicit belief—though not from his expressed belief when he
is pushed against the wall.
4. I differ from Lakoff and Johnson, however, in that I do not even take for granted our
perceptual experience of the external world and even of our own bodies. Perceptual
psychology has shown that these too must be the result of elaborate constructive
processes in the brain. (See references in Jackendoff 1991a.)
5. Deane criticizes me for leaving the issue of "deciding exactly which of the rules should
be collapsed and how" for future work, then adds condescendingly, "—as seems to be
the policy throughout Semantic Structures". Anyone who writes in any field knows
that you are always left with loose ends, in no matter how large a volume. Give me a
break: I was trying to be honest about it.
6. In turn, this relationship is likely the key to an account of distributive prepositions
such as all over and all along. As Deane points out, Semantic Structures treats these as
simply differing from their nondistributive counterparts over and along by a feature,
missing the relation to quantification and hence to the meaning of all. Again, it's one
of those things that had to be left for later; in this case rectification has begun.
7. Deane presents evidence that there is sometimes a linguistic conflation between the
upward direction and the axis pointing away from the speaker, which coincide in the
retinal image. He takes this as evidence that language has access to the 2½D sketch.
However, the 2½D sketch, which encodes relative depth, does not so clearly conflate
these two axes. I would prefer an account in terms of reference frame transformations
in the 3D model, for which I don't know of any appropriate formal theory at the
moment.
8. It is perhaps worth mentioning that Langacker (p.c.) concurs with this characteriza-
tion. To the extent that L-notation is visually iconic, he considers this a useful heuristic
but not a theoretical commitment to their being formally image schematic. On the
other hand, Deane (p.c.) does hold such a commitment, one evidently found also in
Dewell (1994), for instance.
9. Interestingly, German makes do with the prefix um- 'around' for both over and around
in this 'reflexive' sense.
10. Other examples suggest that there are further conditions in over. One condition can be
readily paraphrased by across, as in Deane's example live over/across the border. In
addition, as observed by Vandeloise (1984), verticality can be dropped, and replaced
by visual occlusion, as in Let's put some wallpaper over this ugly paint. The proper
interaction of conditions is beyond the scope of this commentary.
11. I am going to say "desired" here for convenience. The locution must be watered down
appropriately for the inanimate case, to "action on part of Agonist that would take
place if Antagonist prevailed".

References
Bierwisch, Manfred
1986 On the nature of semantic form in natural language. In Klix, F. and
H. Hagendorf (eds.), Human Memory and Cognitive Capabilities:
Mechanisms and Performances. Amsterdam: Elsevier/North-Holland,
765-784.

Chomsky, Noam
1957 Syntactic Structures. The Hague: Mouton.
1981 Lectures on Government and Binding. Dordrecht: Foris.
Csuri, P.
to appear Generalized referential dependencies. Unpublished Ph.D. dissertation,
Brandeis University, Waltham, MA.
Culicover, P., and Ray Jackendoff
1995 Something else for the binding theory. Linguistic Inquiry 26: 249-275.
Deane, Paul
1991 Limits to attention: A cognitive theory of island phenomena. Cognitive
Linguistics 2: 1-64.
this issue On Jackendoff's conceptual semantics.
Dennett, D. C.
1995 Darwin's Dangerous Idea. New York: Simon and Schuster.
Dewell, Robert B.
1994 Over again: Image-schema transformations in semantic analysis. Cognitive
Linguistics 5: 351-380.
Dixon, R. M. W.
1982 Where Have All the Adjectives Gone? Berlin: Walter de Gruyter.
Dowty, D.
1979 Word Meaning and Montague Grammar. Dordrecht: Reidel.
Emonds, Joseph E.
1991 Subcategorization and syntax-based theta-role assignment. Natural
Language and Linguistic Theory 9: 369-429.
Erteschik-Shir, N., and S. Lappin
1979 Dominance and the functional explanation of island phenomena.
Theoretical Linguistics 6: 41-85.
Fauconnier, Gilles
1984 Mental Spaces. Cambridge, MA: MIT Press.
Fauconnier, Gilles, and M. Turner
1994 Conceptual projection and middle spaces. La Jolla, Department of
Cognitive Science, University of California, San Diego.
Fillmore, Charles
1992 The Grammar of Home. Linguistic Society of America Presidential Address.
Fillmore, Charles and Paul Kay
1993 Construction Grammar Coursebook. University of California, Berkeley:
Copy Central.
Fillmore, Charles, Paul Kay, and Catherine O'Connor
1988 Regularity and idiomaticity in grammatical constructions: The case of
let alone. Language 64: 501-538.
Fodor, Jerry A.
1975 The Language of Thought. Cambridge, MA: Harvard University Press.
1983 Modularity of Mind. Cambridge, MA: MIT Press.
1994 Paper given at State University of New York Buffalo Summer Institute on
Cognitive Science.
Goldberg, Adele
1992a The inherent semantics of argument structure: The case of the English
ditransitive construction. Cognitive Linguistics 3: 37-74.
1992b In support of a semantic account of resultatives. (CSLI Technical Report
No. 163.) Stanford: Center for the Study of Language and Information.


1993 Making one's way through the data. In Alsina, A. (ed.), Complex Predicates.
Stanford, CA: CSLI Publications.
this issue Jackendoff and construction-based grammar.
Gruber, J. S.
1965 Studies in Lexical Relations. (Doctoral dissertation, MIT.) Reprinted by
Indiana University Linguistics Club, Bloomington, IN.
[1976] [Reprinted as part of Lexical Structures in Syntax and Semantics.
Amsterdam: North-Holland.]
Hinrichs, E.
1985 A compositional semantics for Aktionsarten and NP reference in English.
Unpublished doctoral dissertation, Ohio State University.
Hochberg, J. E.
1978 Perception. (2nd edition.) Englewood Cliffs: Prentice-Hall.
Jackendoff, Ray
1969 Some rules of semantic interpretation for English. Unpublished doctoral
dissertation, MIT.
1972 Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT
Press.
1975 On belief-contexts. Linguistic Inquiry 6(1): 53-93.
1976 Toward an explanatory semantic representation. Linguistic Inquiry 7(1):
89-150.
1978 Grammar as evidence for conceptual structure. In Halle, M., J. Bresnan,
and G. Miller (eds.), Linguistic Theory and Psychological Reality,
Cambridge, MA: MIT Press, 201-228.
1981a Senso e referenza in una semantica basata sulla psicologia. Quaderni di
Semantica 3: 3-24.
[1984] [English Version: Sense and reference in a psychologically based semantics.
In Bever, T., J. Carroll, and L. Miller (eds.), Talking Minds: The Study of
Language in the Cognitive Sciences, Cambridge, MA: MIT Press, 49-72.]
1981b On Katz's autonomous semantics. Language 57: 425-435.
1983 Semantics and Cognition. Cambridge, MA: MIT Press.
1985 Multiple subcategorization and the theta-criterion: The case of climb.
Natural Language and Linguistic Theory 3: 271-296.
1987 Consciousness and the Computational Mind. Cambridge, MA:
Bradford/MIT Press.
1990 Semantic Structures. Cambridge, MA: MIT Press.
1991a The problem of reality. Noûs 25(4): 411-433. (Also in Jackendoff [1992c])
1991b Parts and boundaries. Cognition 41: 9-45.
[1992] [Reprinted in Levin, B. and S. Pinker (eds.), Lexical and Conceptual
Semantics, Cambridge, MA: Blackwell, 9-45.]
1992a Word meanings and what it takes to learn them: Reflections on the Piaget-
Chomsky debate, in Jackendoff 1992c.
[1994] [Also in Overton, W. and D. Palermo (eds.), The Nature and Ontogenesis of
Meaning, 129-144. Hillsdale, NJ: Erlbaum.]
1992b Mme. Tussaud meets the binding theory. Natural Language and Linguistic
Theory 10(1): 1-31.
1992c Languages of the Mind: Essays on Mental Representation. Cambridge, MA:
MIT Press.
1993 Patterns in the Mind: Language and Human Nature. London: Harvester
Wheatsheaf; New York: Basic Books.

to appear a The proper treatment of measuring out, telicity, and perhaps even quantifi-
cation in English. Natural Language and Linguistic Theory.
to appear b The architecture of the linguistic-spatial interface. In Bloom, P.,
M. Peterson, L. Nadel, and M. Garrett (eds.), Language and Space,
Cambridge, MA: MIT Press.
Jackendoff, Ray and David Aaron
1991 Review article on Lakoff, G. and M. Turner, More Than Cool Reason: A
Field Guide to Poetic Metaphor. Language 67(2): 320-338.
Jackendoff, Ray, J. Maling, and Annie Zaenen
1993 Home is subject to principle A. Linguistic Inquiry 24(1): 173-177.
Katz, Jerrold J.
1972 Semantic Theory. New York: Harper and Row.
Koffka, Kurt
1935 Principles of Gestalt Psychology. New York: Harcourt, Brace, and World.
Kosslyn, S.
1980 Image and Mind. Cambridge, MA: Harvard University Press.
Krifka, M.
1992 Thematic relations as links between nominal reference and temporal consti-
tution. In Sag, I. and A. Szabolcsi (eds.), Lexical Matters. Stanford, CA:
CSLI Publications, 29-54.
Lakoff, George
1970 Irregularity in Syntax. New York: Holt, Rinehart, and Winston.
(Publication of 1965 doctoral dissertation.)
1987 Women, Fire, and Dangerous Things. Chicago: University of Chicago Press.
1990 The invariance hypothesis: Is abstract reasoning based on image-schemas?
Cognitive Linguistics 1: 39-74.
Lakoff, George and Mark Johnson
1980 Metaphors We Live By. Chicago: University of Chicago Press.
Landau, Barbara and Ray Jackendoff
1993 "What" and "where" in spatial language and spatial cognition. Behavioral
and Brain Sciences 16(2): 217-238.
Langacker, Ronald
1987 Foundations of Cognitive Grammar. Vol. 1. Stanford: Stanford University
Press.
1991 Foundations of Cognitive Grammar. Vol. 2. Stanford: Stanford University
Press.
Lerdahl, Fred and Ray Jackendoff
1983 A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
Macnamara, J.
1978 How do we talk about what we see? Unpublished manuscript, McGill
University.
Maling, J.
1993 Of nominative and accusative: The hierarchical assignment of grammatical
case in Finnish. In Holmberg, A. and U. Nikanne (eds.), Case and other
Functional Categories in Finnish Syntax. Berlin: Mouton de Gruyter, 49-74.
Marantz, A.
1992 The way-construction and the semantics of direct arguments in English: A
reply to Jackendoff. In Stowell, T. and E. Wehrli (eds.), Syntax and
Semantics. Vol. 26. Syntax and the Lexicon. New York: Academic Press,
179-188.


Marr, David
1982 Vision. San Francisco, CA: Freeman.
Michotte, A.
1954 La perception de la causalité. (2nd edition.) Louvain: Publications
Universitaires de Louvain. [English translation: New York: Basic Books.]
Michotte, A., G. Thines, and G. Crabbe
1964 Les complements amodaux des structures perspectives. Louvain: Publications
Universitaires de Louvain.
Neisser, U.
1967 Cognitive Psychology. Englewood Cliffs: Prentice-Hall.
Pinker, S.
1989 Learnability and Cognition: The Acquisition of Argument Structure.
Cambridge, MA: Bradford/MIT Press.
Pinker, S. and Paul Bloom
1990 Natural language and natural selection. Behavioral and Brain Sciences 13:
707-726.
Pustejovsky, James
1991 The generative lexicon. Computational Linguistics 17: 409-441.
Ruhl, C.
1989 On Monosemy: A Study in Linguistic Semantics. Albany: SUNY Press.
Talmy, Leonard
1978 The relation of grammar to cognition—A synopsis. In Waltz, D. (ed.),
Theoretical Issues in Natural Language Processing 2. New York: Association
for Computing Machinery, 14-24.
1980 Lexicalization patterns: Semantic structure in lexical forms. In Shopen, T.
et al., (eds.), Language Typology and Syntactic Description, Vol. 3. New
York: Cambridge University Press.
1983 How language structures space. In Pick, H. and L. Acredolo (eds.), Spatial
Orientation: Theory, Research, and Application. New York: Plenum.
1985 Force dynamics in language and thought. In Papers from the Twenty-First
Regional Meeting, Chicago Linguistic Society. Chicago, IL: University of
Chicago Press.
[1988] [Republished in Cognitive Science 12(1): 49-100.]
1995 Fictive motion. In Bloom, P., M. Peterson, L. Nadel, and M. Garrett (eds.),
Language and Space. Cambridge, MA: MIT Press.
Taylor, J.
this issue On Running and Jogging.
Tenny, C.
1987 Grammaticalizing aspect and affectedness. Unpublished Ph.D. dissertation,
Department of Linguistics and Philosophy, MIT.
1992 The aspectual interface hypothesis. In Sag, I. and A. Szabolcsi (eds.), Lexical
Matters, Stanford, CA: CSLI Publications, 1-28.
ter Meulen, Alice
1994 Representing meaning: magic or logic? (Review of Jackendoff 1990)
Semiotica 99: 211-215.
Vandeloise, Claude
1986 L'espace en français. Paris: Éditions du Seuil.
Van Hoek, Karen
1992 Paths through conceptual structure: constraints on pronominal anaphora.
Unpublished doctoral dissertation, University of California at San Diego.

Verkuyl, H.
1972 On the Compositional Nature of the Aspects. Dordrecht: Reidel.
1985 On semantics without logic (Review of Jackendoff 1983). Lingua 68: 59-90.
1989 Aspectual classes and aspectual composition. Linguistics and Philosophy
12(1): 39-94.
Wertheimer, M.
1923 Laws of organization in perceptual forms.
[1938] [Reprinted in Ellis, W. D. (ed.), A Source Book of Gestalt Psychology.
London: Routledge and Kegan Paul, 71-88.]
Wilks, Y.
1992 Review of Jackendoff 1990. Computational Linguistics 18: 95-97.
Yip, M., J. Maling, and Ray Jackendoff
1987 Case in tiers. Language 63(2): 217-250.
Zaenen, Annie, J. Maling, and H. Thrainsson
1985 Case and grammatical functions: The Icelandic passive. Natural Language
and Linguistic Theory 3: 441-484.
