RAY JACKENDOFF
Abstract
Conceptual semantics shares many of its goals with cognitive linguistics, but is also concerned with more "Chomskyan" goals of discovering the extent of a specialized Universal Grammar in the human mind. It is committed to the existence of an autonomous syntax, though one that interacts richly with meaning in a way that is potentially congenial to the findings of cognitive linguistics. A number of the criticisms of conceptual semantics leveled by Goldberg, Deane, and Taylor (this issue) can be defused by closer attention on one hand to the text of my writings and on the other hand to the linguistic data.
My work has been criticized for being too old-fashionedly formal (Deane [this issue], Wilks 1992) and for not being formal enough (Verkuyl 1985, ter Meulen 1994); for being not concerned enough with semantics (Taylor [this issue]) and for being too concerned with semantics (Marantz 1992); for believing in autonomous syntax (Goldberg [this issue]) and for not believing enough in autonomous syntax (Emonds 1991); for believing in innateness (E. Bates, personal communication, several times) and for not believing in strong enough innateness (Fodor 1994). Life is short; you can't do everything, read everything, or make everyone happy with your work. As I. F. Stone once put it, "all you have to contribute is the purification of your own vision, and the addition of that to the sum total of other visions."
1. Ideology
The most important thing I learned from Chomsky2 is that it is of interest
to approach language in terms of the following questions:
(1) What does a person have to know (alternatively, what information does a person need in long-term memory) in order to use language in the way he or she does, in particular being able to create and understand an unlimited number of utterances?
(2) What preexisting abilities does it take in order to acquire this knowledge (or information in long-term memory)?
(3) What parts of these preexisting abilities are not simply a consequence
of having a large general-purpose brain? That is, what parts of the
knowledge required for the acquisition of language are specific to
language?
The answer to question (1) is provided by a theory of grammar
(including lexicon), say a generative grammar or a cognitive grammar; it doesn't matter which for the moment. Everyone in linguistics (though regrettably not in other fields) agrees that the grammar is quite a complicated accumulation of information.
Likewise, everyone agrees that there must be some preexisting ability
in the mind of the child that supports the acquisition of language. This may be as simple as Skinnerian association, some more elaborate connectionist net, or perhaps Piagetian mechanisms of assimilation and accommodation.
The main place where people disagree is in their answer to question (3).
There is a strong disposition to believe that there is nothing special about
language acquisition that cannot follow from more general processes of
learning. Chomsky, of course, has insisted for decades that there is. Many cognitive phenomena, such as the organization of the visual field, music, and motor
control, involve hierarchical part-whole relations. So it's no surprise that
language does too. This doesn't mean language is derived or "built
metaphorically" from any of these other capacities. Rather, like fingers
and toes, they may be distinct specializations of a common evolutionary
ancestor.
So far, I follow Chomsky—though, I hope, with a less severe way of
stating the position (for example, he has a somewhat nihilistic view on
the evolution of language, as pointed out by Pinker and Bloom [1990]
and Dennett [1995]). I also follow Chomsky in believing in the existence
of a syntactic component within Universal Grammar and within particu-
lar grammars, an issue I take up in the next section. Finally, I take from
Chomsky (at least the early Chomsky), as well as from my other most
important teachers, Morris Halle and Edward Klima, an appreciation
for rigorous analysis of sufficient data—including analysis of what does
not happen—and for formalisms whose degrees of freedom are appro-
priate to the phenomena they are intended to account for. I take up this
point in section 4.
I diverge from Chomsky's practice3 in not treating syntax as the princi-
pal generative component of grammar, from which phonology and mean-
ing are interpreted. Rather, I treat phonology, syntax, and conceptual structure as parallel generative systems, each with its own properties, the three kept in registration with one another through sets of "correspondence rules" or "interface rules". This view begins to develop in Jackendoff (1978, 1983), and comes into full clarity in Jackendoff (1987), where it is applied to the architecture of the mind as a whole
and termed "representation-based modularity". (See the diagram in
Deane's paper.)
I also diverge sharply from Chomsky's practice in that I act on the
belief that a significant theory of meaning can be developed along the
lines of inquiry in (l)-(3) above. The resulting theory of meaning is in
many respects congenial to cognitive linguistics. Following the gestalt
psychologists, especially Koffka (1935), Michotte (1954), Michotte,
Thines, and Crabbe (1964), as well as more modern perceptual psychology such as Hochberg (1978) and Neisser (1967), I maintain that a theory of meaning must pertain to the "behavioral reality" of the organism, not to physical reality—to the world as it appears to the organism, or more properly, the world as it is constructed by the organism's mind. This view appears incipiently in Jackendoff (1976) and comes to fruition in Jackendoff (1981a, 1983, 1991a); these last three contain detailed
comparison to "objectivist" philosophy, parallel to the discussion in
Lakoff and Johnson (1980) and Lakoff (1987).4
One of the earliest threads of this theory was the conceptual parallelism
between spatial and nonspatial semantic fields; this parallelism drives
concomitant grammatical and lexical parallelisms, though with less regu-
larity. This idea, derived in large part from Gruber (1965), appeared in
Jackendoff (1969, 1972), and in detail in Jackendoff (1976). The fields
whose parallelism is discussed in Jackendoff (1983) are almost exactly
those presented in Lakoff's (1991) treatment of the invariance hypothesis;
there is also substantial overlap with Talmy's (1978) work. (Unlike
Lakoff, however, I do not consider these parallelisms to be metaphors, for reasons discussed briefly in Jackendoff [1976] and in much more detail in Jackendoff and Aaron [1991] and Jackendoff [1992a].)
This part of my work has led to a focus on spatial language, a
preoccupation found throughout cognitive linguistics. Unlike (most?)
cognitive linguists, I have attempted to connect spatial language to psy-
chological theories of spatial cognition and visual processing, the most
notable of which for my purposes are Marr (1982) and Kosslyn (1980).
The foundations of spatial semantics are laid out in Jackendoff (1976,
1983); the connection with vision comes to the fore in Jackendoff (1987,
to appear b), and Landau and Jackendoff (1993).
Another aspect of my work has been the treatment of categorization,
worked out in detail in Jackendoff (1983). Here I mounted a sustained
attack on necessary and sufficient conditions; I proposed a variety of
mechanisms to account for fuzziness, graded concepts, stereotypes,
family-resemblance phenomena, and basic level categories, making use
not just of Rosch's work but of a broad variety of research in linguistics,
psychology, philosophy, and computer science. The most novel innovation, the notion of conditions interacting as preference rules, was based
on the analysis of visual perception in Wertheimer (1923), and was
imported into linguistic theory via the theory of musical cognition
(Lerdahl and Jackendoff 1983). It appears to be a ubiquitous mental
mechanism, and possibly one that is better modeled by connectionist-
style computation than by Turing-style computation.
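To make the mechanism concrete, here is a minimal sketch in Python (my own illustration, not a formalism taken from any of the works just cited) of conditions interacting as preference rules. The condition names and weights are invented for exposition; the point is only that no single condition is necessary or sufficient, and that category judgments come out graded.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Condition:
    name: str
    weight: float                  # relative strength of this condition
    test: Callable[[dict], bool]   # applied to a bundle of features

def membership(instance: dict, conditions: list[Condition]) -> float:
    """Graded category judgment: the weighted proportion of conditions met.
    1.0 is a stereotypical instance; intermediate values are marginal cases."""
    total = sum(c.weight for c in conditions)
    met = sum(c.weight for c in conditions if c.test(instance))
    return met / total

# Hypothetical conditions for the verb climb (cf. Jackendoff 1985):
# neither is necessary on its own, but each contributes to the judgment.
climb = [
    Condition("upward motion", 0.6, lambda e: e.get("direction") == "up"),
    Condition("clambering manner", 0.4, lambda e: e.get("clambering", False)),
]

print(membership({"direction": "up", "clambering": True}, climb))    # 1.0, central case
print(membership({"direction": "down", "clambering": True}, climb))  # 0.4, marginal case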
There has been a certain amount of productive contact between my
work and cognitive linguistics. In particular, I have been deeply influenced
by Talmy's (1978, 1980, 1983, 1985 [1988]) views on space, aspectuality,
lexicalization patterns, and force dynamics. My work on the parallel
between the description of pictures and of beliefs (Jackendoff 1975, 1983),
was an important source for Fauconnier's (1984) theory of mental
spaces—which in turn fed back into my (1992b) treatment of binding
and that of Csuri (forthcoming). Another piece on binding, Jackendoff
et al. (1993), was inspired by Fillmore's (1992) discussion of home.
Goldberg's work (1992a, 1992b, 1993) on datives, resultatives, and related constructions has been another productive point of contact.
2. Syntax
So why do I persist in hanging on to an autonomous syntax? Much of
my work, all the way back to my dissertation (1969), has been devoted
to showing that various phenomena considered purely syntactic in
Chomskian circles are really reflections of underlying semantic/conceptual
regularities. Why don't I go all the way, as Goldberg (this issue) urges,
and claim that all syntactic phenomena are semantically motivated?
First notice that there are a lot of possible intermediate positions
between the Chomskian view that syntax is the central generative compo-
nent of language and the (stereotypical) cognitive grammar view that
there is no independent syntax at all. One is as extreme as the other. In
trying to honestly find the appropriate middle ground, we may well find
that there's hardly anything left of pure syntax, and that cognitive gram-
mar is nearly right. Or we may not.
To gain some perspective, it's useful to consider phonology. Most of
phonology—the feature content of segments, the constitution of syllables,
the placement of stress, whether inflections are prefixal, suffixal, infixal,
or reduplicative—has nothing whatsoever to do with meaning; these
processes just chug along determining the form of phonological struc-
tures. Here and there, though, aspects of phonology are recruited for
semantic purposes. For example, the use of stress for contrast and empha-
sis, overlaid on canonical stress patterns, is probably universal. Much less systematic is the semantic relevance of segmental content, as in onomatopoeia or very occasional correlations such as gl- for actions involving light sources (gleam, glitter, glow, glare, etc.) and -itter for rapid jerky actions (glitter again, jitter, skitter, flitter). I don't think anyone nowadays
would claim that such unsystematic examples undermine the essential
independence of phonology from semantics.
I am inclined to think the same is true for the relation of syntax and
semantics, though less radically so. Here are some aspects of syntax that
seem pretty much independent of semantics.
First, aspects of phrase structure: linear order is sometimes semantically
perspicuous, for example in the use of first position for topic. But lots
of the basic scut work in linear ordering doesn't seem to show any
semantic significance, for instance the choice between verb-initial and
verb-final VPs. Because this is uniform across the language, it cannot be
used to make semantic distinctions. Yet speakers must know this fact in
order to satisfy the goal of being able to use the language properly. A
language like German, with V2 main clauses and verb-final subordinate
clauses (mostly), makes this aspect of syntax a bit more complex.
Similarly, the relative ordering of determiner, adjective, and noun in the
noun phrase carries no semantic weight in English, because it's not
distinctive; in Romance, on the other hand, where both A-N and N-A
are possible, some semantic distinction can be carried by the choice.
Likewise, the difference between prepositions and postpositions from one
language to another carries no semantic weight. A different sort of
example concerns whether the language requires subjects in tensed clauses
(the so-called null subject parameter): English does, Spanish doesn't.
For a more complex example, languages differ as to whether they allow
ditransitive VPs: English does, French doesn't. The possibility of having
such a position is a syntactic fact; the uses to which this position are put
are semantic. For example, in English, ditransitive VPs are used most
prominently for a variety of beneficiary + theme constructions (so-called to-datives and for-datives), as discussed by Goldberg (1992a) and
many previous authors. But the same syntax is also recruited for
theme + predicate NP, hence the ambiguity of Make me a milkshake!
(Poof! You're a milkshake!). And ditransitive syntax also appears with a
number of verbs whose semantics is at the moment obscure, for instance
I envy you your security, they denied Bill his pay, the book cost Harry $5,
and the lollipop lasted me two hours. At the same time, there are verbs
that fit the semantics of the dative but cannot use it, for example
tell/*explain Bill the answer (the latter seems to be a common error
among non-native speakers).
In addition, other languages permit other (different?) ranges of seman-
tic roles in the ditransitive pattern. Here are some examples from Icelandic
(4) and Korean (5) (thanks to Joan Maling and Soowon Kim,
respectively):
(4) a. Þeir leyndu Olaf sannleikanum.
they concealed [from] Olaf(ACC) the-truth(DAT)
'They concealed the truth from Olaf.'
b. Sjórinn svipti hana manni sínum.
the-sea deprived her(ACC) husband(DAT) her-own
'The sea deprived her of her husband.'
(Zaenen et al. 1985)
(5) a. John-un kkoch-ul hwahwan-ul yekk-ess-ta.
John-TOP flowers-ACC wreath-ACC tie-PAST-IND
'John tied the flowers into a wreath.'
How do you know which way to express the relationship? What's the
difference? In (8a) and (8c) the head is a noun, in (8b) a verb—and the
generalization is that anytime you have a noun or adjective head, what-
ever the semantic relation to a post-head complement, you need a preposi-
tional phrase rather than a direct object. When the modifier comes before the noun, it has to be genitive. So of in (8a) is fulfilling an "everything else" role similar to that of accusative case in VPs—not a natural semantic
class, but a natural syntactic one. Put differently, of in such cases is
performing the same role in syntax that the meaningless theme vowel
does in, say, Latin verb inflection: it is there to fulfill the form.
This is not to deny that the strong correlations between objects and
nouns and between actions and verbs are necessary for the child to get
grammatical acquisition off the ground. It's just that once the grammati-
cal system gets going, it in part cuts loose of its semantic connections: a
noun is now a kind of word that has grammatical gender and gets case-
marked, a preposition is a kind of word that cannot take a subject but
does permit a direct case-marked object, and so on.
This is also not to deny that much that has passed for syntax in the
generative literature undoubtedly is a reflection of semantic regularities.
My early arguments (1969) that quantifier scope is not represented
directly in syntax hold against contemporary LF (logical form) in
Government and Binding theory (Chomsky 1981) as well as they did
against Lakoff's (1970 [1965]) proposals. I am in agreement with Deane
(1991) and Erteschik-Shir and Lappin (1979) that many extraction con-
straints have a semantic basis (see Csuri [to appear] for related argu-
ments). And I agree with Fauconnier (1984) and Van Hoek (1992) that
much of the theory of anaphora does not belong in syntax (see Culicover
and Jackendoff [1995]). As I said at the outset of this section, I am
looking for the proper middle ground. And I should point out that I
have given the very same list of syntactic phenomena to Chomskian
syntacticians who accused me of wanting to do away with syntax.
3. Conceptual structure
A theorist who accepts the goal of accounting for speakers' knowledge, and who also claims that meaning deeply influences choice of syntactic
structure, is thereby obliged to search for a theory of meaning that
predicts the desired effects, where "desired effects" include not only
judgments of grammaticality but such aspects of language use as judgments of inference, relevance to discourse, and relation to the perceived world. Much of Semantics and Cognition (1983) is devoted to an argument that there is no privileged level of "linguistic semantics" at which specifically linguistic effects of meaning can be separated out from more general cognitive effects such as categorization and interpretation of deixis.
Rather, all of these effects are localized in a level called conceptual
structure. My position is stated very clearly in Semantics and Cognition,
section 1.7 and chapter 6, and it recurs throughout the text. (See also
Jackendoff [1981b, 1985].) In particular, chapter 8 emphasizes the notion of preference rule systems as part of lexical decomposition in conceptual structure—and their role in creating family resemblance phenomena.
Yet the deepest criticism leveled at me by both Deane and Taylor (this
issue) is that conceptual structure allegedly contains only the information relevant to syntax, and that it does not include preference rules. (Deane:
"What is at stake here is precisely Jackendoff's refusal to recognize the
existence of family-resemblance structure within the lexicon.") The Posi-
tion they criticize is indeed advocated by others. For instance, Bierwisch
(1986) argues that there must be a level of representation that specifically
encodes those aspects of semantic information relevant to language;
Pinker (1989) works out in detail a theory of such a level. But this
position is not held by me; in fact, Bierwisch criticizes me for not holding
it. (In turn, Jackendoff [1981b] criticizes Katz's [1972] view of "autono-
mous semantics", parallel to Bierwisch in some respects.) So why have
Deane and Taylor arrived at such a misunderstanding of where I stand?
I believe the explanation lies in the relationship of Semantic Structures
(1990) to Semantics and Cognition. A problem not resolved in Semantics
and Cognition was how prototype images (or better, image schemata) can
be incorporated into word meanings, and more generally, how visual
inputs can be connected with conceptual structure so that it is possible
for us to talk about what we see (to use Macnamara's [1978] phrasing
of the problem). It was clear that no decomposition into algebraic primi-
tives could do the trick: artificial intelligence-style features like [HAS-A-
BACK] for chair are obviously too ad hoc. A potential solution arose upon
deeper study of Marr's (1982) theory of vision—the idea that visual
categories can be encoded in a level of 3D model structures, a viewpoint-
independent geometric representation that includes part-whole structure
and axis structure, and which can be parameterized to allow variation in
shape and proportion among exemplars. It struck me that this was the
proper sort of format to encode just the aspects of word meaning that
are uncomfortable for standard feature decomposition—just the kind of work a prototype image is supposed to accomplish. Marr's theory was detailed and motivated on perceptual grounds, not just as a theory about pictures in the head invented by philosophers and linguists. (By the way, the 3D model is far from a "perceptual" structure, as Taylor suggests I intend; I have argued that it is one of the central systems of cognition.)
Fodor (1975), like Taylor, believes that even the simplest aspects of
standard lexical decomposition are untenable. Following this assumption
through to its logical conclusion, he shows that lexical concepts are
unlearnable. He then puts his money where his mouth is, and embraces the (to me incredible) position that all lexical concepts, including exotica such as telephone and carburetor, are innate. If Taylor disagrees with
Fodor's final position, he owes us a rebuttal of Fodor's learnability
argument.
Alternatively, perhaps Taylor thinks complex concepts are built out of fundamental building blocks, but not the ones I (and everyone else) have suggested, including such psychologically plausible units as "self-initiated" and "locomotion" and "human being". In this case he owes us an account of these building blocks and how encyclopedic information
is built out of them.
In defense of a theory with fundamental building blocks, it should be
remembered that, in a representational theory, the only way speakers can perceive commonalities among entities is for the representations of those entities in speakers' minds to have common parts. That's the whole point
of trying to formulate representations that capture linguistic and percep-
tual regularities. If run and jog have this perceived commonality, they
must somewhere in their representations share a common element, which
they share also with sprint, lope, and walk. I find it hard to imagine what
Taylor's encyclopedist view is, if he denies this—though again it appears
to verge on Fodor's (1983) view of cognition as being an impenetrable mystery (and perhaps Chomsky's too). Good heavens, as Jerry would say.
4. Notation
The notation I have adopted for conceptual structure is built up of
conceptual constituents, notated as square brackets, containing complexes
of features and functions. The particular way these are arranged on the
page is not of too much import to me; it happens to be convenient to
me to write them the way I write them. However, in every case where it
is possible, I consider whether the features and functions I have chosen
as primitives do indeed express the linguistically and conceptually signifi-
cant generalizations in the data.
The constant issue of justification has led over the years to changes in
the notation and in the set of postulated primitives. For instance, a major
advance between Jackendoff (1976) and Jackendoff (1983) was the inno-
vation of a path constituent in sentences of motion: instead of (9a) I
proposed (9b).
The idea behind (12) is that, at each 0-dimensional point during the time period T, X occupies a point of the path P, and this correlation determines the part-structure of the overall event. Using this notation, it proves possible to relate motion formally to extension (Talmy's [1994] "fictive motion").
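To state the idea behind (12) a bit more explicitly (this is my own gloss, not the official notation of Semantic Structures), a motion event can be taken as a structure-preserving correspondence between the time period and the path, so that parts of the event correspond to parts of the path:

\[
\mathrm{MOVE}(X, P, T) \iff \exists f\colon T \to P \ (\text{monotonic, onto}) \ \text{such that} \ \forall t \in T,\ \mathrm{loc}(X, t) = f(t)
\]

On this statement, subintervals of T map onto subpaths of P, which is what gives the event its part-structure.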
5. Polysemy
Deane repeatedly takes me to task on my treatment of polysemy. When
confronted with apparently different senses of a lexical item which bear
some intuitive relationship, his preference seems to be that all these senses
should follow from a single stipulated prototype, through general prin-
ciples of extension that create family resemblance structures. (This posi-
tion is similar to Ruhl's [1989], who, however, is willing to posit an
underlying abstract sense rather than sticking to a concrete prototype.)
I agree in principle. However, this preference in many cases comes into
conflict with the need to account for the language user's ability to get all
the details of syntax and semantics right for each sense. Let me lay out
some cases, mostly not new, that illustrate the complexity of the problem.
First, some baseline judgments: gooseberry and strawberry have nothing to do with geese and straw. Yet given the chance to create folk etymologies, people will invent a story that relates them. Overcoming the strong intuition that there must be a relation is not easy. And as lexical semanticists we must be careful not to succumb to the temptation to insist on closer relations than there really are. How can you tell when to give up
and call some putative explanation a "gooseberry etymology"? I don't
know in general; I only know that it is a possible last resort.
Consider the extreme, uncontroversial cases. I think everyone agrees
that the bank in river bank and savings bank represent separate, homony-
mous concepts, not a single polysemous concept. On the other end, I
think everyone agrees that the italicized phrases in (13) derive their
reference by a general process: no one thinks that the mental lexicon lists ham sandwich as potentially referring to a person or Lakoff as referring to a book or John as referring to a car.
(13) a. [One waitress to another:]
The ham sandwich in the corner wants some more coffee.
b. Plato is on the top shelf next to Lakoff.
c. John got a dent in his left fender.
It's the cases between these two extremes that are of interest. One that
I find uncontroversial is the sort of multistep chaining of association
found in cardinal, described in I can't remember what source. It seems
that the earliest meaning is 'principal', as in cardinal points of the compass and cardinal virtues. From there it extended to the high-ranking church
official, then to the traditional color of his robes, then to the bird of that
color. All of these senses are still present in English, but I wouldn't want
to say they constitute a single polysemous word—especially the first and
last member. Frankly, I have sort of the same feeling about Lakoff's
(1987: 104-109) analysis of the Japanese classifier hon: there is a moti-
vated route from candles to Zen contests and to TV programs, but I find
it hard to think of them as constituting a unified category. The reason, I think, is that the basis for connection changes en route, so that one
cannot trace the connection without knowing the intermediate steps. So
what should we say about these cases? They are more closely related
than plain homonyms, but not closely related enough to be said to form
a unified concept. I'll call them "opaquely chained concepts".
Moving again toward the other end of the spectrum, we find a quite
different situation in the related senses of open in The door is open, The door will open, and John will open the door, which are related by highly productive lexical relations, encoded in my notation as in (14):
(14) a. X openA = [Sit BE ([X], [Property OPEN])]
b. X openV = [Sit INCH ([Sit BE ([X], [Property OPEN])])]
c. Y openV X = [Sit CAUSE ([Y], [Sit INCH ([Sit BE ([X], [Property OPEN])])])]
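To display how (14b) and (14c) share structure, here is a toy transcription in Python (purely illustrative; the constructor names simply mirror the bracketed notation, and nothing here is an official implementation). The causative sense embeds the inchoative sense, which in turn embeds the stative one:

# Toy transcription of (14); "Sit" marks a situation constituent.
def BE(theme, prop):     return ("Sit", "BE", theme, prop)
def INCH(sit):           return ("Sit", "INCH", sit)
def CAUSE(agent, sit):   return ("Sit", "CAUSE", agent, sit)

OPEN = ("Property", "OPEN")

def open_adj(x):         return BE(x, OPEN)          # (14a): X is open
def open_v(x):           return INCH(open_adj(x))    # (14b): X comes to be open
def open_vt(y, x):       return CAUSE(y, open_v(x))  # (14c): Y causes X to come to be open

print(open_vt("Y", "X"))
# ('Sit', 'CAUSE', 'Y', ('Sit', 'INCH', ('Sit', 'BE', 'X', ('Property', 'OPEN'))))

The productivity of the relations among the three senses comes out as simple function composition.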
and esmoke other than the presence of asmoke as a character in the action. One reason for this is that the steps of the chain each add idiosyncratic information. For instance, you can't csmoke a herring by setting it on fire and putting it in your mouth and puffing, and you can't esmoke a cigar by putting it in a smoky room.
Because of these multiple branches and idiosyncrasies, it makes more
sense to say there are five senses arranged in two transparent chains
branching outward from asmoke (a-b-c-d and a-e). They are not any
more closely related to each other than they are to, say, smoky and
smoker (the latter meaning either a person who csmokes or a vessel in
which one esmokes things). And in a morphologically richer language
than English they might not be phonologically identical (say outsmoke
for [15b] and ensmoke for [15e]). In other words, even if all these senses
are related, the language user must learn them individually. This treat-
ment is not unlike Lakoff's (1987) approach to over.
A check on the validity of such putative relationships is the extent to
which they occur in other lexical items. Here are the linkages for smoke,
listing some other words that show the same relationships among their
senses:
(16) asmoke → bsmoke (V = "give off N"):
steam, sweat, smell, flower (not very productive)
bsmoke → csmoke (V = "cause to V"):
open, break, roll, freeze (quite productive)
csmoke → dsmoke (V = "V something"):
eat, drink, read, write (rather productive)
asmoke → esmoke (V = "put N into/onto something"):
paint, butter, water, powder, steam (rather productive)
Notice that each relationship affects a different group of verbs. Each
pairwise relationship is semiproductive, but the entire branching structure
of senses for smoke is unique. This means that no general process of
extension from a prototype can predict all the linkages—they're different
for every word.
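The point can be put in quasi-computational terms (a sketch under my own assumptions, not an implementation from the literature): each pairwise linkage in (16) is a semiproductive rule with its own membership list, so the full branching structure of senses for a given word can only be read off by listing which rules that word happens to participate in.

# Each semiproductive linkage from (16) carries its own list of participating words.
linkages = {
    ("a", "b"): ("V = 'give off N'",                 {"smoke", "steam", "sweat", "smell", "flower"}),
    ("b", "c"): ("V = 'cause to V'",                 {"smoke", "open", "break", "roll", "freeze"}),
    ("c", "d"): ("V = 'V something'",                {"smoke", "eat", "drink", "read", "write"}),
    ("a", "e"): ("V = 'put N into/onto something'",  {"smoke", "paint", "butter", "water", "powder"}),
}

def sense_links(word: str) -> list[tuple[str, str]]:
    """The linkages a word participates in; the answer differs word by word."""
    return [pair for pair, (_, words) in linkages.items() if word in words]

print(sense_links("smoke"))   # [('a', 'b'), ('b', 'c'), ('c', 'd'), ('a', 'e')] -- the full tree
print(sense_links("butter"))  # [('a', 'e')] -- one linkage only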
I am more suspicious of claiming such a linking relationship among
senses when it is impossible to find other lexical items that participate in
the same relationship. An example is the two senses of into discussed
by Deane:
(17) a. ainto X = "path terminating in the interior of X"
(run into the room)
b. binto X = "path terminating in violent contact with X"
(run into the wall)
I am happy to treat these senses as related only in that they both express termination (the -to portion) (Jackendoff 1990: 108). In my analysis, the morpheme in in binto is like the goose in gooseberry; it has nothing to do with normal in. Deane asks for more: he wishes to analyze binto as "path
which would have carried an object to the inside of X if the boundary
had not impeded it". Note first that there is something not quite right
about this analysis, since you can crash into a mirror, even though you
could never fit inside it. But what is more troubling is that there are no
other prepositions that have this counterfactual sense. For instance, you
cannot crash under a table by landing violently on top of it—though you
would have ended up under the table if its boundary had not impeded
your motion. Consequently, there is nothing to gain by extracting out
this relation between the two intos as a subregularity. Moreover, there is no general process of extension of a prototype that will get to binto from ainto, or that will get both from a more abstract sense they have in
common.
A somewhat similar though perhaps more controversial case concerns
some of the senses of over discussed by Lakoff (1987).
(18) a. aover X = "location in an upward direction from X"
(The cloud is over the house)
b. bover X = "path passing through location in upward direction
from X"
(The plane flew over the house)
c. cover X = "location at end of path passing through location
in upward direction from X"
(Sausalito is over the bridge)
d. dover = "path of rotating motion in a vertical plane"
(Turn over the page, The pole fell over)
Abstracting away from many details of contact vs. noncontact and cover-
ing vs. noncovering, to some of which we return presently, I want to
focus on the relation between dover, the "reflexive sense", and the rest.
The other English prepositions that have such reflexive senses are around
(turn around), out (spread out), and perhaps in (gather in).9 Now, the
analysis suggested by Lakoff is that in dover, one part of the object moves bover another part. Although this explication may work for turn over the page, it does not generalize to the pole fell over, which shares only the rotating motion—so Lakoff must posit yet another sense. More tellingly,
this relationship applies only to these four prepositions, so one wonders
whether it qualifies as a subregularity. It's better than the case of into,
but just barely. It's sort of like the marginal subregularity that produces
the six past tense forms bought, brought, caught, fought, sought, and taught.
6. Ending
I have tried to use the commentaries in this issue as a vehicle to raise questions about the relationship of conceptual semantics to cognitive linguistics. I have tried to show that in many respects the differences are not as great as the commentators have suggested, often turning only on
the question of what phenomena one chooses to work on. In the respects
where the approaches genuinely diverge, I have tried to show the basis
behind my choices. I hope this can lead to a richer and more productive
dialogue between the two schools of thought, with mutual benefit.
Let's see how these are equivalent. First, compare (26a.iii) to (26b.iii). If the Agonist is stronger, then the Antagonist fails; if the Antagonist is stronger, then the Antagonist succeeds; if the relative strengths are indeterminate, then the Antagonist's success is indeterminate. So these distinctions carry the same information. Next, let us convert (26a.ii) to the point of view of the Antagonist. Whatever the Agonist's tendency is, the Antagonist's is the opposite; thus (26a.ii') carries the same information as (26a.ii).
(26) a. ii.' Intrinsic force tendency of Antagonist on Agonist:
toward Agonist's rest or toward Agonist's action
So suppose the Agonist is trying to rest; then the Antagonist must be
trying to get the Agonist to move. Now if the Antagonist is stronger, the
result will be that the Agonist moves; if the Antagonist is weaker, the
Agonist will rest. Alternatively, suppose that the Agonist is trying to
move; then the Antagonist must be trying to get the Agonist to rest. If
it happens that the Antagonist is stronger, the Agonist will rest; if the
Antagonist is weaker, the Agonist will move. This whole set of circum-
stances can be simplified: if the Antagonist is stronger, the Agonist does what the Antagonist wants; if the Antagonist is weaker, the Agonist does something else. In other words, we can predict the outcome by specifying
what the Antagonist wants and whether or not the Antagonist is success-
ful. In other words, (26a) and (26b) make the same distinctions, and in
fact Talmy's fourth degree of freedom (26a.iv) can be dropped out of
the equation.
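The equivalence can be checked mechanically. The following sketch (my own, for exposition only) enumerates the cases: since the Antagonist's goal is always the opposite of the Agonist's tendency, specifying what the Antagonist wants plus whether the Antagonist prevails fixes the outcome, and the Agonist's tendency adds no further distinction.

from itertools import product

OPPOSITE = {"rest": "move", "move": "rest"}

def outcome(antagonist_goal: str, antagonist_prevails: bool) -> str:
    """What the Agonist ends up doing, in the (26b)-style formulation."""
    return antagonist_goal if antagonist_prevails else OPPOSITE[antagonist_goal]

# Enumerate the (26a)-style cases: the Agonist's tendency and who prevails.
for tendency, prevails in product(["rest", "move"], [True, False]):
    goal = OPPOSITE[tendency]  # the Antagonist always wants the opposite
    print(f"Agonist tends toward {tendency}; Antagonist "
          f"{'prevails' if prevails else 'fails'} -> Agonist will {outcome(goal, prevails)}")
# Every (26a)-style case maps onto a (26b)-style case: the two carry the same information.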
The reason I adopted (26b) as the basic formulation is that it conforms
better to the syntax of force-dynamic verbs. First of all, there is no
syntactic distinction based on whether the Agonist wants to rest or move:
this comes out of the content of the complement of the verb. Second,
the complement might stipulate any of the possibilities in (27).
(27) a. What the Agonist wishes to do (=[26a.ii])
b. What the Antagonist wishes to do
c. What the Antagonist wishes the Agonist to do (=[26b.ii])
d. What the Agonist wishes the Antagonist to do
In fact, the syntax of force-dynamic verbs looks as though the complement most closely expresses (27c). For example, in John forced Harry
to leave the room, to leave the room is what John wanted Harry to do,
not what Harry wanted to do. Alternatively, to leave the room could be
the result of the force-dynamic event rather than the action desired by
Harry. But in an event like John pressured Harry to leave the room, where
we don't know the result, the complement still expresses what John wanted Harry to do.
Notes
1. I am grateful to Ludmila Lescheva for a great deal of discussion useful in the prepara-
tion of this paper, and to Paul Deane, Adele Goldberg, and Ron Langacker for
clarifications that influenced its final form. This is not to say that any of them necessar-
ily agrees with me in the end.
This research was supported in part by NSF Grant IRI 92-13849 to Brandeis
University, and in part by Keck Foundation funding of the Brandeis Center for
Complex Systems.
2. Invoking Chomsky this early in the paper will no doubt raise lots of red flags. I do not
want to address the very real sociological reasons for which the very mention of the
name provokes widespread apoplexy—and not only in cognitive linguistics. I do want
to address what I think are his most important contributions to linguistics and the
mental sciences, and to my thinking in particular. I hope readers can overcome what-
ever automatic aversive reactions they might have, long enough to pay attention to the
intellectual content.
3. And, I suspect, from his implicit belief—though not from his expressed belief when he
is pushed against the wall.
4. I differ from Lakoff and Johnson, however, in that I do not even take for granted our
perceptual experience of the external world and even of our own bodies. Perceptual
psychology has shown that these too must be the result of elaborate constructive
processes in the brain. (See references in Jackendoff 1991 a.)
5. Deane criticizes me for leaving the issue of "deciding exactly which of the rules should
be collapsed and how" for future work, then adds condescendingly, "—äs seems to be
the policy throughout Semantic Structures". Anyone who writes in any field knows
that you are always left with loose ends, in no matter how large a volume. Give me a
break: I was trying to be honest about it.
6. In turn, this relationship is likely the key to an account of distributive prepositions
such as all over and all along. As Deane points out, Semantic Structures treats these as simply differing from their nondistributive counterparts over and along by a feature, missing the relation to quantification and hence to the meaning of all. Again, it's one of those things that had to be left for later; in this case rectification has begun.
7. Deane presents evidence that there is sometimes a linguistic conflation between the
upward direction and the axis pointing away from the speaker, which coincide in the retinal image. He takes this as evidence that language has access to the 2½D sketch. However, the 2½D sketch, which encodes relative depth, does not so clearly conflate
these two axes. I would prefer an account in terms of reference frame transformations
in the 3D model, for which I don't know of any appropriate formal theory at the
moment.
8. It is perhaps worth mentioning that Langacker (p.c.) concurs with this characteriza-
tion. To the extent that L-notation is visually iconic, he considers this a useful heuristic
but not a theoretical commitment to their being formally image schematic. On the
other hand, Deane (p.c.) does hold such a commitment, one evidently found also in
Dewell (1994), for instance.
9. Interestingly, German makes do with the prefix um- 'around' for both over and around
in this 'reflexive' sense.
10. Other examples suggest that there are further conditions in over. One condition can be
readily paraphrased by across, as in Deane's example live over/across the border. In addition, as observed by Vandeloise (1984), verticality can be dropped, and replaced by visual occlusion, as in Let's put some wallpaper over this ugly paint. The proper
interaction of conditions is beyond the scope of this commentary.
11. I am going to say "desired" here for convenience. The locution must be watered down
appropriately for the inanimate case, to "action on part of Agonist that would take
place if Antagonist prevailed".
References
Bierwisch, Manfred
1986 On the nature of semantic form in natural language. In Klix, F. and
H. Hagendorf, (eds.), Human Memory and Cognitive Capabilities:
Mechanisms and Performances. Amsterdam: Elsevier/North-Holland,
765-784.
Goldberg, Adele
1993 Making one's way through the data. In Alsina, A. (ed.), Complex Predicates. Stanford, CA: CSLI Publications.
this issue Jackendoff and construction-based grammar.
Gruber, J. S.
1965 Studies in Lexical Relations. (Doctoral dissertation, MIT.) Reprinted by
Indiana University Linguistics Club, Bloomington, IN.
[1976] [Reprinted as part of Lexical Structures in Syntax and Semantics.
Amsterdam: North-Holland.]
Hinrichs, E.
1985 A compositional semantics for Aktionsarten and NP reference in English.
Unpublished doctoral dissertation, Ohio State University.
Hochberg, J. E.
1978 Perception. (2nd edition.) Englewood Cliffs, NJ: Prentice-Hall.
Jackendoff, Ray
1969 Some rules of semantic interpretation for English. Unpublished doctoral
dissertation, MIT.
1972 Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT
Press.
1975 On belief-contexts. Linguistic Inquiry 6(1): 53-93.
1976 Toward an explanatory semantic representation. Linguistic Inquiry 7(1):
89-150.
1978 Grammar as evidence for conceptual structure. In Halle, M., J. Bresnan, and G. Miller (eds.), Linguistic Theory and Psychological Reality. Cambridge, MA: MIT Press, 201-228.
1981a Senso e referenza in una semantica basata sulla psicologia. Quaderni di
Semantica 3: 3-24.
[1984] [English version: Sense and reference in a psychologically based semantics.
In Bever, T., J. Carroll, and L. Miller (eds.), Talking Minds: The Study of
Language in the Cognitive Sciences, Cambridge, MA: MIT Press, 49-72.]
1981b On Katz's autonomous semantics. Language 57: 425-435.
1983 Semantics and Cognition. Cambridge, MA: MIT Press.
1985 Multiple subcategorization and the theta-criterion: The case of climb.
Natural Language and Linguistic Theory 3: 271-296.
1987 Consciousness and the Computational Mind. Cambridge, MA: Bradford/MIT Press.
1990 Semantic Structures. Cambridge, MA: MIT Press.
1991a The problem of reality. Noûs 25(4): 411-433. (Also in Jackendoff [1992c].)
1991b Parts and boundaries. Cognition 41: 9-45.
[1992] [Reprinted in Levin, B. and S. Pinker (eds.), Lexical and Conceptual
Semantics, Cambridge, MA: Blackwell, 9-45.]
1992a Word meanings and what it takes to learn them: Reflections on the Piaget-
Chomsky debate. In Jackendoff 1992c.
[1994] [Also in Overton, W. and D. Palermo (eds.), The Nature and Ontogenesis of
Meaning, 129-144. Hillsdale, NJ: Erlbaum.]
1992b Mme. Tussaud meets the binding theory. Natural Language and Linguistic
Theory 10(1): 1-31.
1992c Languages of the Mind: Essays on Mental Representation. Cambridge, MA:
MIT Press.
1993 Patterns in the Mind: Language and Human Nature. London: Harvester
Wheatsheaf; New York: Basic Books.
Marr, David
1982 Vision. San Francisco, CA: Freeman.
Michotte, A.
1954 La perception de la causalité. (2nd edition.) Louvain: Publications Universitaires de Louvain. [English translation: New York: Basic Books.]
Michotte, A., G. Thines, and G. Crabbe
1964 Les compléments amodaux des structures perceptives. Louvain: Publications
Universitaires de Louvain.
Neisser, U.
1967 Cognitive Psychology. Englewood Cliffs, NJ: Prentice-Hall.
Pinker, S.
1989 Learnability and Cognition: The Acquisition of Argument Structure.
Cambridge, MA: Bradford/MIT Press.
Pinker, S. and Paul Bloom
1990 Natural language and natural selection. Behavioral and Brain Sciences 13:
707-726.
Pustejovsky, James
1991 The generative lexicon. Computational Linguistics 17: 409-441.
Ruhl, C.
1989 On Monosemy: A Study in Linguistic Semantics. Albany: SUNY Press.
Talmy, Leonard
1978 The relation of grammar to cognition—A synopsis. In Waltz, D. (ed.),
Theoretical Issues in Natural Language Processing 2. New York: Association
for Computing Machinery, 14-24.
1980 Lexicalization patterns: Semantic structure in lexical forms. In Shopen, T.
et al., (eds.), Language Typology and Syntactic Description, Vol. 3. New
York: Cambridge University Press.
1983 How language structures space. In Pick, H. and L. Acredolo (eds.), Spatial
Orientation: Theory, Research, and Application. New York: Plenum.
1985 Force dynamics in language and thought. In Papers from the Twenty-First
Regional Meeting, Chicago Linguistic Society. Chicago, IL: University of
Chicago Press.
[1988] [Republished in Cognitive Science 12(1): 49-100.]
1995 Fictive motion. In Bloom, P., M. Peterson, L. Nadel, and M. Garrett (eds.),
Language and Space. Cambridge, MA: MIT Press.
Taylor, J.
(this issue) On Running and Jogging.
Tenny, C.
1987 Grammaticalizing aspect and affectedness. Unpublished Ph.D. dissertation,
Department of Linguistics and Philosophy, MIT.
1992 The aspectual interface hypothesis. In Sag, I. and A. Szabolcsi (eds.), Lexical
Matters, Stanford, CA: CSLI Publications, 1-28.
ter Meulen, Alice
1994 Representing meaning: magic or logic? (Review of Jackendoff 1990)
Semiotica 99: 211-215.
Vandeloise, Claude
1986 L'espace en français. Paris: Éditions du Seuil.
Van Hoek, Karen
1992 Paths through conceptual structure: constraints on pronominal anaphora.
Unpublished doctoral dissertation, University of California at San Diego.