
THE CONCEPT OF LOGICAL CONSEQUENCE

JOHN ETCHEMENDY


THE DAVID HUME SERIES


PHILOSOPHY AND COGNITIVE SCIENCE REISSUES

CSLI PUBLICATIONS


Copyright 1999
CSLI Publications
Center for the Study of Language and Information
Leland Stanford Junior University
Printed in the United States
03 02 01 00 99

1 2 3 4 5

Library of Congress Cataloging-in-Publication Data


Etchemendy, John, 1952-
The concept of logical consequence / John Etchemendy.
p. cm.
Originally published: Cambridge, Mass.: Harvard University Press, 1990.
Includes bibliographical references and index.
ISBN 1-57586-194-1 (pbk. : alk. paper)

1. Logic, Symbolic and mathematical. I. Title.

[BC135.E83 1999]
160-dc21  99-12538
CIP
∞ The acid-free paper used in this book meets the minimum requirements of the
American National Standard for Information Sciences - Permanence of Paper for
Printed Library Materials, ANSI Z39.48-1984.

The David Hume Series of Philosophy and Cognitive Science Reissues consists of
previously published works that are important and useful to scholars and students
working in the area of cognitive science. The aim of the series is to keep these
indispensable works in print in affordable paperback editions.
In addition to this series, CSLI Publications also publishes lecture notes, monographs,
working papers, and conference proceedings. Our aim is to make new results, ideas,
and approaches available as quickly as possible. Please visit our web site at
http://csli-publications.stanford.edu/

for comments on this and other titles, as well as for changes and corrections by the
author and publisher.

For Nancy and Max

Acknowledgments

I owe many thanks to many people. For their help and encouragement,
without which I may never have finished the book, and their
criticism, without which I would certainly have finished too soon, I
would like to thank Ian Hacking, Calvin Normore, Ned Block, Greg
O'Hair, Richard Cartwright, Leora Weitzman, and, in particular,
John Perry, Genoveva Marti, and Paddy Blanchette. For their patience,
I thank my family, and especially my wife, Nancy. And for all
of the above and more, I thank my friend and colleague Jon Barwise.
Finally, I am indebted to the Mrs. Giles Whiting Foundation and to
the Center for the Study of Language and Information for support
while working on various stages of this book.

Contents

 1  Introduction                          1
 2  Representational Semantics           12
 3  Tarski on Logical Truth              27
 4  Interpretational Semantics           51
 5  Interpreting Quantifiers             65
 6  Modality and Consequence             80
 7  The Reduction Principle              95
 8  Substantive Generalizations         107
 9  The Myth of the Logical Constant    125
10  Logic from the Metatheory           136
11  Completeness and Soundness          144
12  Conclusion                          156
    Notes                               161
    Bibliography                        171
    Index                               173

1
Introduction

The highest compliment that can be paid the author of a piece of
conceptual analysis comes not when his suggested definition survives
whatever criticism may be leveled against it, or when the analysis is
acclaimed unassailable. The highest compliment comes when the
suggested definition is no longer seen as the result of conceptual
analysis, when the need for analysis is forgotten and the definition is
treated as common knowledge. Tarski's account of the concepts of
logical truth and logical consequence has earned him this compliment.
Anyone whose study of logic has gone beyond the most rudimentary
stages is familiar with the standard, model-theoretic definitions of the
logical properties. According to these definitions, a sentence is
logically true if it is true in all models; an argument is logically valid, its
conclusion a consequence of its premises, if the conclusion is true in
every model in which all the premises are true. These definitions,
along with the additional machinery needed to understand them, are
set forth in every introductory textbook in mathematical logic.1 In
these texts we are taught how to delineate a class of models for a simple
language and how to provide a recursive definition of truth in a
model; in short, how to construct a simple model-theoretic semantics.
Once this semantic theory is in place, the model-theoretic definitions
of the logical properties can be applied.
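To see how these definitions work in the simplest possible setting, here is a minimal sketch for a toy propositional language; the language, the encoding of sentences, and all of the names below are my own illustration rather than anything drawn from the text, and a "model" is taken to be nothing more than an assignment of truth values to the atomic sentences.

# Illustrative sketch only: a toy propositional language whose "models" are
# just truth-value assignments to its atomic sentences.
from itertools import product

ATOMS = ["A", "B"]                # hypothetical atomic sentences
MODELS = [dict(zip(ATOMS, vals))  # one model per assignment of truth values
          for vals in product([True, False], repeat=len(ATOMS))]

def true_in(sentence, model):
    """Sentences are nested tuples: ("atom", name), ("not", S), ("or", S1, S2)."""
    tag = sentence[0]
    if tag == "atom":
        return model[sentence[1]]
    if tag == "not":
        return not true_in(sentence[1], model)
    if tag == "or":
        return true_in(sentence[1], model) or true_in(sentence[2], model)

def logically_true(sentence):
    # logically true: true in all models
    return all(true_in(sentence, m) for m in MODELS)

def consequence(premises, conclusion):
    # valid: conclusion true in every model in which all the premises are true
    return all(true_in(conclusion, m) for m in MODELS
               if all(true_in(p, m) for p in premises))

A, B = ("atom", "A"), ("atom", "B")
print(logically_true(("or", A, ("not", A))))        # "A or not A": True
print(consequence([("or", A, B), ("not", A)], B))   # B follows from {A or B, not A}: True

The definitions themselves do nothing more than quantify over whatever class of models has been delineated; it is the delineation of that class, together with the recursive definition of truth in a model, that carries all the weight.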
This method of defining logical truth and logical validity is generally
traced to Tarski's 1936 article, "On the Concept of Logical
Consequence."2 In this article Tarski sets out to give a precise and general
account of what he calls the intuitive consequence relation and the
corresponding property of logical truth. The definitions that result are
meant to be applicable to any language whose truth predicate can be
defined, and to remain, as Tarski puts it, "close in essentials to the
common, everyday concepts."
Tarski devotes most of his attention in this brief, twelve-page article
to shortcomings of other attempts to define the consequence relation,
in particular attempts to characterize it syntactically, by means of
formal systems of deduction. His own, semantic account, sketched in a
mere four pages, is devoted in part to the exposition of some ancillary
notions treated at length in his earlier monograph on truth. The main
thrust of the article is not to discuss details of the semantic account of
consequence, or even to give a simple example of its application, but
rather to urge that "in considerations of a general theoretical nature
the proper concept of consequence must be placed in the foreground"
(1956, p. 413).
Tarski begins his article by emphasizing the importance of the intu
itive notion of consequence to the discipline of logic. He dryly notes
that the introduction of this concept into the field was not "a matter of
arbitrary decision on the part of this or that investigator" (1956,
p. 409). The point is that when we give a precise account of this notion,
we are not arbitrarily defining a new concept whose properties we then
set out to study, as we are when we introduce, say, the concept of a
group, or that of a real closed field. It is for this reason that Tarski
takes as his goal an account of consequence that remains faithful to the
ordinary, intuitive concept from which we borrow the name. It is for
this reason that the task becomes, in large part, one of conceptual
analysis.
Tarski's account of the logical properties is widely regarded as
successful in this respect, as capturing, in mathematically tractable form,
the proper concepts of logical truth and logical consequence. We can
see this not only from explicit acknowledgments of its success by many
philosophers and logicians, but also from the treatment given it by
those not interested in conceptual analysis as such. Perhaps the most
striking indication is the different status afforded syntactic characteri
zations of consequence, formal systems of deduction.
It has long been acknowledged that the purely syntactic approach
does not yield a general analysis of the ordinary notion of conse
quence, and in principle cannot. The reason for this is simple. It is
obvious, for starters, that the intuitive notion of consequence cannot
be captured by any single deductive system. For one thing, such a
system will be tied to a specific set of rules and a specific language,
while the ordinary notion is not so restricted. Thus, by "consequence"
we clearly do not mean derivability in this or that deductive scheme.
But neither do we mean derivability in some deductive system or
other, for any sentence is derivable from any other in some such system.
So at best we might mean by "consequence" derivability in some sound
deductive system. But the notion of soundness brings us straight back
to the intuitive notion of consequence: a deductive system is sound if it
allows us to prove only genuinely valid arguments, those whose con
clusions follow logically from their premises.
We recognize that a syntactic definition does not capture the or
dinary notion of consequence, and we recognize this even though we
may be convinced, for one reason or another, that a given deductive
system is adequate for a given language, that is, even if we believe that
all valid arguments, and only valid arguments, are provable within the
system. This recognition is at a conceptual level, but its main impact is
at the extensional. The upshot is that systems of deduction require
external proofs of their extensional adequacy (or inadequacy, as the
case may be). To be sure, with careful selection of our rules of proof, it
is fairly easy to guarantee that only valid arguments are provable in a
given system. But our assurance that all valid arguments are provable
in the system, if such an assurance is to be had, must come from
somewhere other than the deductive system itself. We need outside
evidence that our system is complete, evidence we would not require
if the system straightforwardly captured, in mathematically tractable
form, the ordinary concept of consequence.
To appreciate how different our attitude is toward the model-theoretic
account of consequence, consider the significance we read
into Gödel's completeness theorem. It is now common to state this
theorem in the following form, where S is any sentence in a first-order
language and K is an arbitrary set of such sentences:

If K ⊨ S then K ⊢ S.

Here, the relation indicated by ⊨ is the model-theoretically defined
consequence relation, while ⊢ indicates a syntactic or proof-theoretically
defined consequence relation. This theorem, plus its converse,
the soundness theorem,

If K ⊢ S then K ⊨ S,
shows that the model-theoretic and proof-theoretic definitions of con
sequence coincide, that they apply to the same pairs (K, S) in the
first-order language. But we think of these results as having an intu
itive significance that goes beyond the mere coincidence of two alter
native characterizations of the consequence relation. Specifically, we
think of them as demonstrating the extensional adequacy of the de
ductive system in question. They are thought to show that the system is
sound, that it will not allow the derivation of conclusions that are not
genuine consequences of their premises, and that it is complete, that it
allows the derivation of all the consequences of any given set of sen
tences in the language.
What is revealing is that the significance we read into these results is
asymmetric, even though their form alone would not seem to warrant
it. After all, for any given language there will be a wealth of theorems
displaying the same general pattern:

If K ⊢₁ S then K ⊢₂ S,
If K ⊢₂ S then K ⊢₁ S.

But if, for example, both ⊢₁ and ⊢₂ are syntactically defined consequence
relations, perhaps involving variant proof regimes, we would
hardly take these results as showing the adequacy, the soundness and
completeness, of one regime rather than the other. In such a case we
would take the theorems as showing nothing more than the
coextensiveness of the two characterizations. To think they demonstrate,
say, the extensional adequacy of ⊢₂ would obviously presuppose
additional theorems showing the completeness and soundness of ⊢₁.
In this case, the pair of results would be viewed as entirely symmetric.
The felt asymmetry in our original two theorems stems from our
assumption that the model-theoretic definition of consequence, unlike
syntactic definitions, involves a more or less direct analysis of the
consequence relation, and so its extensional adequacy, its complete
ness and soundness, is guaranteed on an intuitive or conceptual
level, not by means of additional theorems. If it were not for this
assumption, we would feel equal need for external evidence that the
model-theoretic characterization of consequence is extensionally cor
rect, that it applies to all valid arguments, and only valid arguments, of
the language in question.
How do we know that our semantic definition of consequence is
extensionally correct? How do we know it does not declare some
logically valid arguments invalid, or declare some invalid arguments
logically valid? Many readers will find this question quite odd. But it is
not odd in the same way as the question "How do we know that all
structures satisfying the group axioms are really groups?" This second
question is simply confused: the notion of a group is arbitrarily defined
to mean those structures satisfying our characterization. But as
Tarski points out, the situation is quite different with the concept of
logical consequence. Here the correctness of our model-theoretic de
finition is not determined by arbitrary fiat; on the contrary, whether
the definition is right or wrong will depend on how closely it cor
responds to the pretheoretic notion it is meant to characterize. That
the first question now strikes us as odd just indicates how deeply
ingrained is our assumption that the standard, semantic definition
captures, or comes close to capturing, the genuine notion of conse
quence.
The situation here might be illuminated by analogy with some basic
results in recursion theory. Recursion theory, like logic proper, was
originally driven by an interest in a rather imprecise and intuitive
notion. Here the notion was that of an effectively computable func
tion, a function whose values could in principle be calculated by
algorithmic means, that is, using fixed instructions requiring no insight
or creativity. During the 1930s, many mathematically precise charac
terizations of the class of computable functions were proposed, by
Church, Gödel, Turing, and others, and various important results
concerning the precisely defined classes were proved. Among them
was the striking result that, although the precise characterizations
proceeded in widely divergent ways, they were nonetheless
coextensive; they carved out exactly the same class of functions. This result
was taken as evidence that this class of functions, however specified,
formed a natural and important collection. But did it also show that
the specified class was exactly the class of intuitively computable func
tions? The answer, of course, is no. For if none of the precise charac
terizations individually captured the intuitive notion of computability,
the question of whether they coincide exactly with this concept hardly
followed from their convergence. The coincidence of the various de
finitions provided some indirect evidence, as did the fact that no
obviously algorithmic function could be found that fell outside the
defined class. But these do not amount to a mathematical demonstra
tion. Because of this, logicians take great care to distinguish the various
mathematical results in recursion theory from the claim that all intu
itively computable functions fall into the precisely delineated class.
This claim is usually called Church's thesis, and although it is almost
universally accepted, it is not considered amenable to mathematical
proof.
This situation is parallel to the one that confronted early, formal
logicians. Much of their work was driven by an interest in the intuitive
notions of logical truth and logical consequence, but the only precise
access to these notions was through specific, proof-theoretic charac
terizations, specific deductive systems. These syntactic characteri
zations, however, clearly did not capture the intuitive notion; they
were not straightforward analyses. Because of this, the claim that a
particular proof regime, say for some first-order language, coincides
with the language's genuine consequence relation, seemed at best to
admit of indirect evidence. The coincidence of various different sys
tems of proof provided some support, as did our ability to construct
formal derivations of many specific instances of valid reasoning. But as
Hilbert once put it, evidence accrued only through experiment, not
through mathematical proof.3 To emphasize the parallel with recur
sion theory, we might call this claim (the claim that all and only
logically valid arguments of a given language are provable within a
given deductive system) Hilbert's thesis.
Now, what ever happened to this latter thesis? Why has Church's
thesis been given such a prominent position in logical pedagogy, while
its counterpart has not? Both involve the relationship between a math
ematically precise definition and one of the central, albeit intuitive,
notions of our discipline. The difference is that in the latter case, the
thesis has been replaced by theorems: the soundness and completeness
theorems are thought to provide a mathematical proof of Hilbert's
thesis for first-order languages, a proof that the syntactic characteri
zations of consequence do in fact coincide with the genuine conse
quence relation for these languages. And of course it is such a proof,
on the assumption that the model-theoretic definition captures the
genuine concept of consequence. It is such a proof, on the assumption
that Tarski's analysis is right.
It is precisely this assumption that I question in this book. Briefly
put, my claim is that Tarski's analysis is wrong, that his account of
logical truth and logical consequence does not capture, or even come
close to capturing, any pretheoretic conception of the logical proper
ties. The thrust of my argument is primarily at the conceptual level,
but again the main impact is at the extensional. Applying the model-theoretic
account of consequence, I claim, is no more reliable a tech
nique for ferreting out the genuinely valid arguments of a language
than is applying a purely syntactic definition. Neither technique is
guaranteed to yield an extensionally correct specification of the lan
guage's consequence relation. Needless to say, this conclusion requires
that we reassess the intuitive significance of Gödel's completeness theo
rem, as well as the import of the failure of analogous results when we
move, for example, to second-order logic.
The intuitive concept of consequence, the notion of one sentence
following logically from others, is without doubt the most central
concept in logic. It is what has driven the study of logic for more than
two thousand years. On the other hand, the remarkable achievements
in logic during the past century have been the direct result of the
mathematization of the field. The infusion of mathematically precise
definitions and techniques has turned a field dominated by homely
admonitions into one capable of supporting significant and illumina
ting theorems. My aim in this book is to attack a common misun
derstanding of one widely used mathematical technique, not to
advocate a return to homely admonitions, or even to suggest that we
abandon the particular technique. The fact that neither the model-theoretic
nor the proof-theoretic account of consequence alone cap
tures the genuine notion does not mean they are useless for studying
this very same concept. Direct analysis is just one way to gain access to
an important, intuitive concept; lessons from elsewhere in mathemat
ics should convince us of that.
Some History
Though my concern in this book is not historical, a few preliminary
words should be said about the complicated heritage of the model-theoretic
definitions of the logical properties. As I mentioned, these
definitions are generally credited to Tarski's 1936 article, and for the
purposes of this book, there is no need to question this attribution.
What is clearly right about it is that Tarski's article contains the only
serious attempt to state, in its most general form, the analysis underly
ing the standard definitions, and to put forward a detailed philo
sophical justification for that analysis. It is, so to speak, the philosophi
cal locus of the model-theoretic definitions.
From a historical point of view, though, attributing the definitions to
Tarski alone oversimplifies the situation a great deal.4 For one thing,
most of the main features of the analysis were anticipated, in various
different ways, by earlier authors, including Bolzano (1837), Padoa
(1901), Bernays (1922), Hilbert and Ackermann (1928), and Gdel
(1929). Of all of these, Bolzanos discussion is by far the most exten
sive; in Chapter 3, I will briefly describe his account and motivate
certain features of Tarskis analysis by comparing it with Bolzanos.
Padoa, unlike Bolzano, does not offer an analysis of logical truth and
logical consequence, but gives a general statement of the familiar,
model-theoretic technique for establishing a sentence's logical inde
pendence from a given set of axioms, a technique that presupposes
one direction of the definition of consequence. Bernays, Hilbert and
Ackermann, and Gödel all present, with varying degrees of clarity, a
model-theoretic definition of logical truth, though none of them tries
to justify it, or offers the corresponding definition of logical conse
quence.
When Tarski proposed his analysis in 1936, he was fully aware of
these predecessors, with the notable exception of Bolzano. In his
article, Tarski emphasizes that his treatment of the logical properties
makes "no very high claim to complete originality," and that "the ideas
involved . . . will certainly seem to be something well known" (1956,
p. 414). Still, the article is not just a codification of commonly accepted
ideas and techniques. For one thing, as Tarski points out, the
definitions he gives presuppose "methods which have been developed [only]
in recent years." Specifically, they involve techniques for defining the
Tarski's attempt to present and motivate the definitions in a com
pletely general setting. It is easy to underestimate the importance of
this contribution. But clearly, the ordinary notions of logical truth and
logical consequence are not restricted to a specific language or small
collection of languages, and so our definition of a single language's
consequence relation, or of its set of logical truths, must flow from
some more general analysis of these concepts. Finally, unlike his im
mediate predecessors, Tarski extends his account to the notion of
logical consequence as well as logical truth.
For the purposes of this book, I simply assume that the model-theoretic
definitions originated with Tarski's analysis. The historical
question of who should receive primary credit for the definitions is a
complicated one, both for the reasons sketched here and for another
important reason that will emerge in Chapter 5. It turns out that
certain paradigmatic instances of the model-theoretic definitions in
volve a subtle but significant departure from Tarski's analysis, one that
has gone completely unnoticed. But to explain that departure at this
point would be premature.
The Plan of This Book
This book consists of a single, extended argument. The conclusion of
the argument is that the standard, semantic account of logical conse
quence is mistaken. What I mean by this is, first of all, that when we
apply the account to arbitrary languages (even perfectly familiar,
well-behaved ones) it will regularly and predictably define a relation
at variance with the genuine consequence relation for the language in
question. The definition will both undergenerate and overgenerate: it will
declare certain arguments invalid that are actually valid, and declare
others valid that in fact are not.
This is not to say that every application of Tarski's account is
extensionally incorrect. Indeed, I will eventually argue that with suitably
weak languages (and with certain qualifications that I explain later) the
definition does get the extension right. But even in these cases we must
seek external guarantees of that fact. This is the second point, and
though a bit more subtle, it is at least as important as the first. The
point is that the semantic account shares with syntactic accounts the
following limitation: there is no way to tell from the definition alone or
from characteristics of the language whether the extension of the
account is correct. Clearly, no amount of pondering a syntactic system
of deduction can assure us of its extensional adequacy; for that, we
must turn to indirect evidence, whether in the form of theorems or,
failing these, evidence of a more experimental sort. I claim that
exactly the same holds true of any application of the model-theoretic
account of consequence.
As I said, this book consists of one, rather long argument. Most of
the argument deals with various intuitive or conceptual considerations
bearing on the adequacy of Tarskis account. The reason for this
emphasis is simple. I think the basic problem with Tarskis account is in
some sense obvious, once certain confusions and misunderstandings
are cleared away. But there are several of these confusions, and each of
them lends a certain plausibility to the analysis. Together, they give
rise to a remarkably persuasive illusion, an illusion that the account (as
Tarski puts it) captures the essential features of the ordinary concept
of consequence.
Of course, if this were really the case, if the account simply trans
lated our intuitive concept into mathematically tractable form, we
would have an ironclad guarantee of its extensional adequacy when
applied to arbitrary languages. The situation would then be analogous
to, say, our inductive definition of N, the set of natural numbers.
According to this definition, N is the smallest set that contains 0 and is
closed under the successor operation.5 Now, it is perfectly clear that
this definition is not identical to the intuitive notion it supplants. Thus,
it employs a variety of set-theoretic concepts that are not, by any
stretch of the imagination, part of our ordinary understanding of the
natural numbers. Conversely, certain things that are arguably central
to our intuitive concept (say, the concrete process of counting) are at
best dimly reflected in the inductive definition. But the definition
obviously captures the essential feature of the intuitive notion, and so
its extensional adequacy is apparent from the definition itself. We do
not, so to speak, have to try it out to see that it really works.
Most people react to the model-theoretic account of consequence in
the same way they react to the inductive definition of N. Neither is
given extensive justification since neither seems to need it. I claim that
this reaction is, in the former case, mistaken. But it is not, unfor
tunately, a simple mistake, or, for that matter, a single one. For this
reason, much of this book is devoted to explaining the variety of
confusions and misunderstandings that have made Tarski's analysis
seem so convincing. Until these are finally laid to rest, purely extensional
evidence against Tarski's account, evidence that I think we have
long had, will continue to be explained away.

I try to treat these misunderstandings one by one, in what I hope is
an orderly, comprehensible way. Unfortunately, treating them one at
a time (the only way I see to do it) has certain drawbacks. For one
thing, not everyone will share a given misunderstanding, and so an
individual reader may find certain parts of the book obvious, while
another might find those same points illuminating and others not. For
example, the first few chapters are addressed to a confusion extremely
common among those who enter logic through philosophy or linguis
tics, but almost nonexistent among those who enter through main
stream mathematics. Here, I can only ask the reader's patience. If I
appear, at points, to be addressing the wrong issue, and perhaps
ignoring entirely some key insight that justifies the account, I hope the
reader will nonetheless persevere.
This gives rise to a second problem: namely, that different parts of
the book are really addressed to somewhat different audiences. Since
these audiences will have different technical backgrounds (not to men
tion different interests and concerns), I have tried not to assume much
common ground, at least in covering the key points of my argument.
The model-theoretic account of consequence has had a tremendous
influence on all logic-related disciplines, from philosophy and linguis
tics to mathematics and computer science. Thus, I have tried to make
the book understandable to anyone who has had a first course in
mathematical logic. I hope it does not seem tedious to those who have
had more.
My main criticism of Tarski's account is contained in Chapters 7
through 10. There, I explain two things. First, I explain what I take to
be the central defect in the account, the reason it will, in general, be
extensionally incorrect. Second, I describe what I believe is the main
source of the account's remarkable persuasiveness. The chapters lead
ing up to this are devoted to untangling some of the more straightfor
ward confusions that surround the analysis, and to giving a clear
explanation of Tarski's original definition and of its relation to the
model-theoretic treatment with which we are now familiar.
In order to understand Tarski's account it is essential to distinguish
it from what I call representational semantics. Representational semantics
is a perfectly legitimate approach to semantics, but (as will become
clear) it bears no relation whatsoever to Tarski's account of the logical
properties. Unfortunately, Tarski's analysis is frequently conflated
with representational semantics. For this reason I will begin, in Chapter 2,
by discussing this alternative approach to semantics, so that it can
be usefully contrasted with Tarski's account rather than vaguely confused
with it. Chapters 3 through 5 are devoted to a careful exposition
of Tarski's original definitions and their relation to the standard,
model-theoretic account. Then, in Chapter 6, I consider and reject
Tarski's own positive arguments in support of his analysis.
In Chapter 11, I try to reconcile the lessons learned in Chapters 7
through 10 with widespread intuitions about completeness and
soundness theorems. There, I modify an argument of Kreisel's in
order to see how, and in what precise sense, we can verify the extensional adequacy of certain applications of the model-theoretic defini
tions.
One final point before beginning. Through large stretches of this
book I focus, for simplicity, on the notion of logical truth. Logical
truth, since it is a property of single sentences, is often far easier to
discuss than logical consequence, which is a relation between a collec
tion of sentences (say, premises of an argument) and another sentence
(the conclusion). For example, it is much easier first to look at the
details of Tarski's account as they bear on the concept of logical truth,
and then to explain briefly the more general account of consequence,
than it is to tackle the consequence relation head on.
This greatly facilitates the exposition, but it could also be misleading.
We must not lose sight of the fact that the concept of consequence is far
more important than that of logical truth, both intuitively and techni
cally. On their own, logical truths are of very little interestrecall that
these are sentences we often describe as trivial, devoid of information,
true by virtue of meaning, and so forth. Where the notion of logical
truth gains its importance is as the limiting case of the consequence
relation: these are sentences that follow logically from any set of sen
tences whatsoever. The crucial notion, ultimately, is that of one sen
tence following logically from others. Logic is not the study of a body
of trivial truths; it is the study of the relation that makes deductive
reasoning possible.

2
Representational Semantics

To understand Tarski's account of the logical properties, we need to
distinguish clearly between it and representational semantics. But to do
that, we need a fairly clear idea of what the latter approach to seman
tics is all about. A good place to begin is with a simple puzzle suggested
by Donald Davidson. In a well-known article in which he defends his
own approach to semantics, Davidson draws a broad distinction be
tween theories that characterize or define a relativized concept of
truth and his own call for a theory of absolute truth (1973, p. 79).
Davidson points out that as we ordinarily understand it, truth is a
property of sentences, a property whose holding or failing to hold is
expressed by a monadic predicate. In this respect, truth sets itself apart
from many other concepts that we consider peculiarly semantic. Thus,
denotation is a relation between a singular term and an object denoted,
satisfaction a relation between an open sentence and the things it holds
true of, and so forth. But truth, perhaps the preeminent semantic
concept, does not relate a sentence to something else; it simply applies
or fails to apply, so to speak, absolutely.1
Davidson goes on to note that at least on a superficial level, much
contemporary work in semantics seems to belie this simple point.
Much effort is devoted to the investigation of what Davidson sees as
irreducibly relational notions, notions like truth in a model, truth in an
interpretation, valuation or possible world. These technical concepts,
which Davidson subsumes under the generic term "truth in a model,"
hold or fail to hold between sentences and objects of some other sort:
generically, models. Because of this, Davidson argues, such theories
of relative truth do not have as consequences the so-called T-sentences
distinctive of the theory of absolute truth. The T-sentence

"Snow is white" is true if and only if snow is white

does not, as Davidson puts it, "fall out of" a theory that simply tells us
which models "Snow is white" is true in. And for this reason, theories of
relative truth "do not necessarily have the same sort of interest as a
theory [of absolute truth]" (1973, p. 79). A theory that yields T-sentences
provides, first and foremost, an explication of absolute
truth, that is, of truth as we ordinarily understand it; theories of relative
truth must, at least on the surface, be seen as providing explications of
something else.
I am not concerned here with the merits or demerits of competing
semantic programs, and in particular I will not spend time considering
Davidson's own approach. But it is worthwhile taking seriously
Davidson's simple, initial point: truth is, after all, a property; truth in a
model, a relation. What bearing can a characterization of such a rela
tional concept have on our ordinary monadic concept of truth? If
there is no close tie between the two, as Davidson occasionally implies,
then why is the relation of truth in a model given a name that sounds
so misleading?2
We can look at Davidson's puzzle this way. A theory of relative truth
provides us with a characterization of "x is true in y." Yet it is common
to think of such theories as telling us something about truth, as having
at least intuitive or informal consequences involving the ordinary mo
nadic predicate "x is true." Davidson, of course, is particularly inter
ested in the so-called T-sentences, but the same point might be made
about any claims involving absolute truth. That point is this. Before a
theory of relative truth can be judged to have consequences, formal or
otherwise, involving the standard monadic concept, we must give some
explanation of exactly how the defined "x is true in y" is related to the
already understood "x is true." Somehow, we must explain how we are
to move from our theory about the relation to claims involving the
property. If we can give no such explanation, then the simple, prima facie
evidence is that our theory of relative truth has no bearing on the
concept of truth as we ordinarily understand it. But that, of course, is
absurd.
Truth as Specification
We often find it advantageous to explain a monadic concept in terms
of a relational one. So, for example, we may find the explication of "x is
a brother" far more tractable if we first set out to analyze "x is a brother
of y." The former then reduces to an existential generalization of the
latter: brotherhood is just brother-of-someone-hood. There are similar cases
in which we gain access to the monadic concept through a universal
generalization of the relational; thus with comparatives and super
latives, say, "taller than" and "tallest." But clearly the monadic concept of
truth, the concept we ordinarily employ, is no generalization of any of
the various relational concepts. A sentence can be true in some model,
yet not be true; a sentence can be true, yet not be true in all models.
If the monadic concept of truth is not a generalization, universal or
existential, of the concept of truth in a model, then the natural alterna
tive is to think of the former as a specification of the latter. In other
words, perhaps the monadic concept emerges from the relational by
fixing on a specific instance of the nonsentential parameter, the y in
"x is true in y." Being true simpliciter would then be viewed as equivalent
to being true in some particular model, and getting from a theory of
relational truth to a theory of absolute truth would be a matter of
indicating which specific model was the right model. Our conceptual
analogy might then run: "x is true in y" stands to "x is true" as "x is a
brother of y" stands to "x is Fred's brother."
In broad outline, this is clearly the intended relation between theo
ries of relative truth and the ordinary, monadic concept of truth. In a
sense it is the relational concept that is a generalization of the monadic
concept; what justifies the appearance of the word "true" in theories of
relative truth is that the relation studied comes from abstracting or
unfixing an implicitly fixed parameter embedded in the ordinary
notion of truth. Theories of relative truth try to characterize "x is true
in y," while theories of absolute truth aim to characterize, so to speak,
"x is true in Fred."
Of course, this still does not tell us who or what Fred is. We have not
determined what sort of hidden parameter our models are meant to
fill, or what makes one model the right one, the model that binds the
ordinary concept of truth to the more general concept of truth in. I will
devote several chapters of this book to exploring one possible answer
to this question, the answer presupposed by the model-theoretic defi
nitions of the logical properties. But there is another very natural
answer, one assumed in what I have called representational semantics.
Briefly, this answer is that Fred is the accurate model, the one that
represents the world as it really is.
Truth in a Row
Consider the simplest and most familiar theory of relative truth, a
theory we are taught during the first few days of any inaugural course
in logic. This is the theory of truth in a row, the theory that enables us
to construct truth tables.

To fill out a truth table for a simple sentence of English, we have to
acquire two principal skills. In the first place, we must master the
proper technique for constructing the reference column of the truth
table, a column headed by a horizontal list of the atomic components of
the sentence in question. This technique generally involves some sim
ple, extendable pattern of writing the words true and false in
horizontal rows beneath our list of atomic sentences, a pattern guaran
teed to capture all the required permutations for a given number of
such components. Thus, depending on the atomic sentences con
tained in the target sentence S, each of the following would serve as
proper reference columns:
Snow is white
TRUE
FALSE

Snow is white    Roses are red
TRUE             TRUE
TRUE             FALSE
FALSE            TRUE
FALSE            FALSE

Snow is white    Roses are red    Violets are blue
TRUE             TRUE             TRUE
TRUE             TRUE             FALSE
TRUE             FALSE            TRUE
TRUE             FALSE            FALSE
FALSE            TRUE             TRUE
FALSE            TRUE             FALSE
FALSE            FALSE            TRUE
FALSE            FALSE            FALSE

Our reference column (everything to the left of the double lines)
provides us with the rows that our target sentence is to be true or false
in. The ultimate goal is to write the words true or false in each
row below S; true if S is true in that row, false if S is not true in
that row. But to do that, of course, no standard pattern of the sort used
in constructing the reference column will suffice, will ensure that we
enter the correct value in each row. Rather, we need a radically differ
ent technique, a technique that involves the repeated application of
certain recursive tables. The following are two sample recursive tables;
the "not" table:

p        not p
TRUE     FALSE
FALSE    TRUE

and the "or" table:

p        q        p or q
TRUE     TRUE     TRUE
TRUE     FALSE    TRUE
FALSE    TRUE     TRUE
FALSE    FALSE    FALSE

These recursive tables are meant to tell us when a complex sentence
is to be considered true in a row, on the assumption that we have
already determined whether its immediate constituents are true in
that row. Equipped with the values of the constituents, we need only
match them to the appropriate row of the appropriate recursive table
and read to the right. Often the recursive tables will also have been
applied in order to determine the values of the relevant constituents,
and, in turn, of their relevant constituents. Indeed, there is no upper
bound on the number of times a recursive table may have to be applied
within a single row before the final value of the target sentence is
reached.
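As a rough illustration of the two skills just described (using my own encoding of sentences rather than the book's notation), the reference-column pattern and the recursive tables might be rendered as follows, with the "not" and "or" tables treated as simple lookup tables:

# Sketch only: generate the reference rows and apply the recursive tables
# row by row to a target sentence.
from itertools import product

NOT_TABLE = {True: False, False: True}
OR_TABLE = {(True, True): True, (True, False): True,
            (False, True): True, (False, False): False}

def reference_rows(atomics):
    """Every permutation of TRUE and FALSE for the listed atomic sentences."""
    for values in product([True, False], repeat=len(atomics)):
        yield dict(zip(atomics, values))

def value_in_row(sentence, row):
    """Atomic values come from the row; complex sentences use the recursive tables."""
    if sentence[0] == "atom":
        return row[sentence[1]]
    if sentence[0] == "not":
        return NOT_TABLE[value_in_row(sentence[1], row)]
    if sentence[0] == "or":
        return OR_TABLE[(value_in_row(sentence[1], row),
                         value_in_row(sentence[2], row))]

target = ("or", ("atom", "Snow is white"), ("not", ("atom", "Roses are red")))
for row in reference_rows(["Snow is white", "Roses are red"]):
    print(row, "TRUE" if value_in_row(target, row) else "FALSE")

Run on this target sentence, the sketch prints TRUE for every row except the one in which "Snow is white" is false and "Roses are red" is true, matching the table that follows.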
Once we are adept at these techniques we can easily produce tables
in which our target sentence is assigned a definite value in each row.
Thus, taking the target sentence to be "Snow is white or roses are not
red" (and abbreviating our reference column somewhat), we get the
following simple table:
SW      RR   ||   Snow is white or roses are not red
T       T    ||   TRUE
T       F    ||   TRUE
F       T    ||   FALSE
F       F    ||   TRUE

Now, consider exactly what this table tells us. First of all, it clearly
does not tell us the actual truth value of our target sentence, that is, its
monadic value. But this was to be expected, since our theory is at
most a theory of relative truth.3 It does, however, tell us exactly which
rows our sentence is true in; specifically, it tells us that the sentence is
true in every row save the third. But what bearing does this informa
tion have on the genuine, monadic truth value of our sentence?
At the close of the last section we noted that truth simpliciter was
meant to be a specific instance of relative truth. Translating to
present terminology, the truth of a sentence should boil down to its
truth in some specific row. And since we know that the current sentence
is actually true, we can rule out the third row without further ado; that
row is surely not Fred. On the contrary, as any student of introduc
tory logic could quickly tell us, our target sentence is true simpliciter
because it holds true in the first row of the present table. Here, at least,
it is the first row that binds relative truth to truth.
But what makes the first row the right row? This may seem like a silly
question; after all, "Snow is white" and "Roses are red" are both true
(that is, genuinely true) and the first row is the only row in which these
sentences both come out true. But notice that in offering this reply, we
have simply put off solving Davidson's puzzle. There is no question
that "Snow is white" is true in the first row of this table; for that, we need
not even apply our recursive techniques. Yet it is equally clear, even on
the level of atomic sentences, that being true in a row is quite different
from being absolutely true; evidence for that will be found in any of
the remaining rows of our table.
Language and the World
Davidson's puzzle reappears at the very bottom level of our theory of
truth in a row, with the atomic sentences that acquire their values in the
reference columns of our tables. If truth is to be truth in some specific
row, then clearly the first row of our sample table must be the right
one. But it is equally clear that this observation does not provide any
account of the link between our theory of relative truth and the ordi
nary, monadic concept from which we pirate the name. To provide
such an account we must explain how the first row, so to speak, comes to
be the right row. Furthermore, our explanation cannot simply reduce
to the plea that if we picked any other row, various sentences would be
true in the right row and yet not be true simpliciter. Such a response
would leave our theory of relative truth entirely suspended in air.
If we could not pinpoint some implicit parameter in our ordinary
notion of truth, some parameter whose potential effect on the abso
lute truth values of our sentences is mimicked by the effect of changes
from row to row in the theory of relative truth, then Davidson would
be completely justified in claiming that the defined "x is true in y" is
irreducibly relational. And consequently he would be justified in
claiming that, for this reason, our theories of relative truth cannot be
thought to illuminate the notion of truth as we ordinarily understand
it. But this conclusion would obviously be wrong. It is perfectly clear
that truth tables tell us something about truth, about ordinary monadic
truth, and that the relation of truth in a row was not just conjured up
by some logician or semanticist with no concern at all for its tie to the
ordinary concept.
But Davidson's puzzle is not unsolvable. The problem is not finding
an appropriate parameter in our ordinary notion of truth, but rather
choosing between two obvious alternatives. Consider the move from
the first row of our sample truth table to the third. Here the relevant
change in our reference column is the value assigned to the atomic
sentence "Snow is white." The effect of this move is that the resulting
value of our target sentence turns from true to false. Now the question
is simply what change would have a similar effect on the absolute
truth value of "Snow is white," and a similar effect on the absolute
value of our target sentence.
There are only two parameters to which the sentence "Snow is white"
owes its truth: broadly speaking, the language and the world. It is due to
the language that the sentence means what it means, that it makes the
claim it does. But it is due to the world that snow is white. Appropriate
changes on either side would have made our atomic sentence false.
Thus, had the language been somewhat different, this sentence would
have been false in spite of the whiteness of snow, say, if "white" had
meant "hot." On the other hand, had the world been different, this
sentence might have been false in spite of its meaning, say, had snow
been red.
We can interpret the move from row to row in our truth table in
either of these two ways. In the first place, we can view our theory of
truth in a row as explicating the relation "x is true in L" for a limited,
though nontrivial range of languages L. From this perspective, we
would assume that any extralinguistic fact that might influence the
truth value of sentences (say, the color of snow or roses) is held
fixed; our concern is not with changes in the world. Viewed this way,
the first row of our sample table is right simply because English, the
implicitly specified parameter in "x is true," happens to be one of the
languages that expresses true propositions by both "Snow is white" and
"Roses are red." Thus, the third row would have been right had
we been speaking a language exactly like English save that "white"
meant "hot."

If we adopt the alternative perspective, then the first row is still
right, but for entirely different reasons. Here we view our theory as,
throughout, a theory of truth for English, or for some fragment
thereof. Our aim is to explicate the relation "x is true in W," where W
ranges over various intuitively possible configurations of the world,
the world our language describes. Thus, the first row of our table is
right just because snow really is white and roses are indeed red. From
this perspective, the move to the third row involves no change in
meaning; that row would have been right simply had snow not been
the color it is.
We commonly think of truth tables as capable of supporting certain
counterfactual claims about the (absolute) truth values of their tar
get sentences. We imagine these claims to be supported because our
theory assigns values to these sentences even in rows that are not
right, rows in which the atomic sentences are not assigned their
actual values. So, for example, the third row of our sample table
supports a claim of the form:
The sentence "Snow is white or roses are not red" would have
been false had . . .
Obviously, the appropriate completions of this counterfactual will
vary depending on which parameter we view as changing in the move
from row to row, that is, depending on what we take to be the relation
between truth in a row and the monadic truth predicate appearing
in the claim. In effect, our theory will support those completions that
we consider elucidations of "had the third row been the right row."
Thus, if we view our parameter to be the language, we might offer the
completed counterfactual:
The sentence "Snow is white or roses are not red" would have
been false had "white" meant "hot."
While if we view the parameter to be the world, we would likely
produce:
The sentence "Snow is white or roses are not red" would have
been false had snow not been white.
As these sample counterfactuals show, the significance we read into
our truth tables depends critically on which perspective we assume, on
the nature of the parameter that corresponds to the rows our sentences
are true in. Of course, since both points of view are possible here, we
might justify either of the above counterfactuals by referring to the
third row of our sample truth table. Or, to simplify matters, we might
even merge both of our claims into a single counterfactual:

The sentence "Snow is white or roses are not red" would have
been false had "Snow is white" been false.
But the fact that we can do this does not mean the resulting claim is
somehow justified by the abstract theory, quite independent of any
account we might give of the relation between "x is true in y" and "x is
true." Or, to put it another way, the fact that our theory of truth in a
row seems doubly illuminating because it admits of either perspective
should not lull us into thinking that it retains its illumination indepen
dent of these perspectives. Rather, as Davidson's puzzle nicely points
out, the purely abstract characterization of relative truth, of "x is true
in y," supports no claim whatsoever about absolute truth, about truth as
we ordinarily understand it.
A Representational Semantics
When we view a particular theory of relative truth as explicating "x is
true in W," we see it as providing an account of how the world wields its
influence on the truth values of sentences within a fixed language. If
characterizing this influence is the aim of our relativized theory of
truth, then I will say we are engaged in representational semantics. The
reason I use this somewhat unusual term is simple. Our theory pro
vides an account of a relation, "x is true in y," and what the theory takes
to satisfy the y position are, for all intents, just ordinary objects of
some sort or other: chunks of the actual world. Thus, in our theory of
truth in a row, the y term was filled by rows, rows that were fixed by
the reference column of our truth table. Other representational theo
ries might define a relation between sentences and abstract, set-theoretic
objects, maybe functions of some sort. But obviously these in
no case actually are the possible configurations of the world that they
are meant to represent. Rows of a truth table are just blotches of ink,
and functions are set-theoretic constructs; the world, thankfully, is
neither of these.
The point is a simple one, but all too easily overlooked. When we
viewed our theory of truth in a row as explicating "x is true in W," the
fact that the target sentence came out false in the third row of the table
was taken to indicate that the sentence would have been false in a world
in which roses were red but snow not white. But the third row itself, the
ink marks on paper, is not a world in which roses are red but snow not
white. It is just a handy surrogate, used for purposes of our theory.
From this representational standpoint, our truth table gives us valu
able information about truth, but certainly not about how truth would
be affected by changes in row. Rather, it tells us how truth would be
affected by changes in the world, by changes that are represented or
depicted by changes in row.
The techniques used in constructing truth tables are not generally
thought to constitute a full-fledged semantic theory for any language
or language fragment. More than anything else, this is due to certain
traditions of fairly recent vintage concerning the accepted format of
such theories. Still, it may seem perverse to view our theory of truth in
a row as a representational semantics, insofar as it may seem perverse
to view it as a semantics at all. But this can easily be remedied.
Suppose we are interested in the fragment of English containing the
atomic sentences "Snow is white," "Roses are red," and "Violets are blue,"
plus whatever complex sentences can be formed from these using a
sign for negation, "not," and a sign for disjunction, "or." I will assume
that we have a precise syntactic theory for our language, one that
enables us to form the negation of any sentence and the disjunction of
any two.4 A standard representational semantics for this simple lan
guage might proceed in the following way. First we define a class of
models that will represent all possible configurations of the world
relevant to the truth values of our sentences. Thanks to the simplicity
of our language, this purpose can be served by the class of functions
that assign a truth value, either true or false, to each of our three atomic
sentences. Thus, our class of models consists of eight functions, one
that assigns true to each sentence (representing worlds in which snow is
white, roses are red, and violets blue), one that assigns false to each
(representing worlds in which snow is not white, roses not red, and
violets not blue), and so forth.
Our next step is to provide a recursive definition of "S is true in f" for
arbitrary sentences S and models f. Since we will take this relation as an
indirect characterization of "x is true in W," our aim will be to ensure
that any given sentence of our language is true in exactly those models
which represent worlds that would indeed have made the sentence
true. So if a model depicts a world in which snow is not white, our
definition should guarantee that "Snow is white" comes out false in that
model. Here we assume, of course, that the sentence "Snow is white"
means what it actually means; the sentence is ours, even though the
world depicted by the model is not.
The definition proceeds in the obvious way, by recursion on the set
of sentences in our language:
If S is an atomic sentence, then it is true in a model f just in case f
assigns it the value true.

If S is the negation of S', then it is true in a model f just in case S' is
not true in f.

If S is the disjunction of S' and S'', then it is true in a model f just in
case either S' is true in f or S'' is true in f.
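A small sketch in code may help fix ideas; the encoding of sentences and the names below are my own, but the class of eight models and the three clauses follow the definition just given.

# Sketch of the toy representational semantics: a model f is a function (here
# a dict) from the three atomic sentences to truth values, and "S is true in f"
# is defined by recursion on the structure of S.
from itertools import product

ATOMICS = ["Snow is white", "Roses are red", "Violets are blue"]

# The class of eight models, one for each assignment of truth values.
MODELS = [dict(zip(ATOMICS, values))
          for values in product([True, False], repeat=len(ATOMICS))]

def true_in(S, f):
    """S is ("atomic", name), ("neg", S1), or ("disj", S1, S2)."""
    if S[0] == "atomic":          # base clause
        return f[S[1]]
    if S[0] == "neg":             # negation clause
        return not true_in(S[1], f)
    if S[0] == "disj":            # disjunction clause
        return true_in(S[1], f) or true_in(S[2], f)

# "Snow is white or roses are not red" comes out true in every model except
# those that assign false to "Snow is white" and true to "Roses are red".
S = ("disj", ("atomic", "Snow is white"), ("neg", ("atomic", "Roses are red")))
print(sum(true_in(S, f) for f in MODELS), "of", len(MODELS), "models")  # 6 of 8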
For the most part, what we have done here just involves a recasting of
our theory of truth in a row. But there are two changes worth
mentioning. In the earlier theory, we constructed reference columns
for each sentence encountered, the number of rows being determined
by the atomic components of the target sentence. In the new theory,
our models take over part of the burden shouldered by the reference
columns, since they provide the objects our sentences are true or false
in. Indeed, they do so with somewhat more aplomb, allowing us to use
the same models for any sentence in our fragment. Thus, we have
managed, in the new theory, to introduce a standard collection of
objects, each of which fully determines the apportionment of truth
values throughout the entire language.5
Now, although it could easily escape notice, the reference columns
of our earlier theory actually did a bit more than our models. The
reference columns both delineated the needed rows and simultaneously
specified the values of our atomic sentences in those rows. In contrast,
whether an atomic sentence comes out true in a given model is deter
mined not by the model itself but by the base clause of our recursive
definition, the clause beginning "if S is an atomic sentence . . ." The
fact that we took models to be functions that yield the values true and
false is entirely a mnemonic convenience in the new theory; any two
objects would have worked as well, for example, the numbers zero
and one. Indeed, if we had used zero and one, the substantial contribu
tion made by the base clause of our definition would have been high
lighted: without the base clause, we would not know whether a model
that assigns zero to Snow is white represents a world in which snow is
white, or one in which it is not. To provide similar freedom in the
reference columns of our truth tables, say, the freedom to use '+' and
'−' rather than 'true' and 'false', we would have to supplement our
recursive tables with base tables to complete the definition of truth in a
row. Such tables would look something like this:
'Snow is white'          Snow is white
       +                     TRUE
       −                     FALSE
Thus, our new semantic theory, unlike the earlier truth tables,
explicitly distinguishes the definition of 'x is true in y' from the
delineation of the class of objects that sentences of the language are to be
true in.
Representational Guidelines
The basic motivation underlying a representational semantics, an
indirect characterization of 'x is true in W', is fairly clear. The approach
provides a natural framework in which to couch a theory of meaning,
or at any rate a theory of those aspects of meaning relevant to the truth
values of sentences, both the values they actually have and the values
they would have, were the world differently arranged. Needless to say,
the simple representational semantics of the last section can at best be
considered a partial theory of meaning for the relevant fragment,
since it offers no detailed account of the semantic functioning of the
three atomic sentences. In giving the semantics, we simply assumed
that 'Snow is white' somehow comes to mean what it does, and for this
reason is true in exactly those worlds in which snow is white. A more
detailed semantics would presumably say something on this score as
well.
Of course, the fact that the motivation is clear does not mean the task
of devising a representational semantics for any interesting language is
either easy or philosophically unproblematic. But these difficulties are
not, at present, our concern. For Tarski's analysis of the logical
properties does not involve giving a characterization of 'x is true in W';
in effect, it involves a characterization of 'x is true in L', for a specified
range of languages L. As we will see, Tarski's is a remarkably different
goal from that presupposed by the representational approach to
semantics, in spite of the fact that one and the same account of 'x is true
in f' may occasionally admit of both construals. Failing to recognize
this difference, many philosophers have assumed that Tarski, in
defining the logical properties, had in mind something akin to
representational semantics, a characterization of 'x is true in W', for all
possible worlds W. For example, we find David Kaplan extolling the
insight of Tarski's reduction of possible worlds to models, a reduction
Kaplan claims to be implicit in the analysis of the logical properties
developed in Tarski's article.6 But this, as we will see, is just a
confusion, one of several that lend undeserved credence to Tarski's
analysis.
Let me conclude this chapter by emphasizing the guidelines that will
seem natural if our aim in constructing a model-theoretic semantics is
to give a characterization of 'x is true in W'. First, there is the obvious
though rather vague criterion we use in judging the adequacy of our
class of models. In a representational semantics the class of models
should contain representatives of all and only intuitively possible con
figurations of the world. This was accomplished in the semantics of the
last section by employing a rather crude but effective system of representation. Our collection of models imposed, so to speak, a complete
partition on the class of possible worlds, a partition whose boundaries
were determined by the color of snow, roses, and violets in those
worlds. Had we excluded any one of our eight functions, the remain
ing class of models would have been inadequate in this respect, leaving
no representative for certain perfectly conceivable worlds. On the
other hand, had our atomic sentences been 'Snow is white', 'Snow is
red', and 'Snow is blue', then we would have been justified in limiting
the class of models to those functions that assign false to at least two of
our atomic sentences. The remainder would not represent genuine
possibilities.
Once we have specified the class of models, our definition of truth in
a model is guided by straightforward semantic intuitions, intuitions
about the influence of the world on the truth values of sentences in our
language. Our criterion here is simple: a sentence is to be true in
a model if and only if it would have been true had the model been accu
ratethat is, had the world actually been as depicted by that model.
Obviously, the possibility of success on this score is not independent of
the objects we have chosen to include in our class of models. In particu
lar, it is this ultimate goal that determines the amount of detail we need
to incorporate into our models, how crude a system of representation
we can get by with. So, for example, with our sample fragment we
could not have used functions that assigned truth values only to Snow
is white and Roses are red. Although these models would indeed
have given us a complete partition of possible worlds, the partition
would not have been fine-grained enough to allow us to carry out our
semantic task: the accuracy of any of these models would have been
consistent with either the truth or falsehood of 'Violets are blue'. And
of course with more complicated languages, say, languages containing
quantifiers, our technique of constructing representations will have to
allow for a considerably more detailed depiction of the world.
Now, the final points to notice about representational semantics
concern the sentences that turn up true in all models. It is an immedi
ate and trivial consequence of the two criteria I have just described that
sentences which are true in all models should be exactly those that are
necessarily true. If a sentence is not necessarily true, yet comes out true
in all models, then we have either omitted representations for some
possible configurations of the world, namely those that would have
made the sentence false, or our definition of truth in a model has gone
astray, having declared the sentence true in at least one model that
depicts a world in which it would actually have been false. Just so, a
sentence that is necessarily true can only come out false in a model if we
have gotten its semantics wrong or if the model fails to depict a genu
ine possibility.
Clearly, all and only necessary truths will come out true in all models
of an adequate representational semantics. And so if logical truths are
thought to be necessarily true, these will of course be among those true
in every model. Similarly, if one sentence comes out true in every
model in which a second sentence is true, then the truth of the first
must be a necessary consequence of the second. That is, it must be
impossible for the first to be false while the second is true, at least if our
semantics really satisfies the representational guidelines.
Equally trivial is the observation that analytic truths, sentences that
are true solely by virtue of the fixed semantic characteristics of the
language, will come out true in all models. If a sentence is not true in all
models, then its truth is clearly dependent on contingent features of
the world, and so cannot be chalked up to meaning alone. Thus,
insofar as logical truths are analytic, true in virtue of meaning, these
must again be among the sentences that are true in every model of an
adequate semantics, one that satisfies the stated criteria.7
These are all immediate consequences of the simple representa
tional guidelines sketched above. But in spite of these consequences, it
would clearly be wrong to view representational semantics as giving us
an adequate analysis of the notion of logical truth. For one thing, if
there are necessary truths that are not logically true, say, mathematical
claims, then these will also come out true in all models of a representa
tional semantics. But more important, even if we are prepared to
identify necessary truth and logical truth (an identification most
people would balk at) it is still clear that representational semantics
affords no net increase in the precision or mathematical tractability of
this notion. Any obscurity attaching to the bare concept of necessary
truth will reemerge when we try to decide whether our semantics
really satisfies the representational guidelines, in particular, when we
ask whether our models represent all and only genuinely possible
configurations of the world.
The value of representational semantics does not lie in an analysis of
the notions of logical truth and logical consequence, or in the analysis
of necessary or analytic truth. Rather, what this approach gives us is a
perspicuous framework for characterizing the semantic rules that gov
ern our use of the language under investigation. It should be seen as a
method of approaching the empirical study of language, rather than
an attempt to analyze any of the concepts employed in that task.
Certainly, all necessary truths of a language, of whatever ilk, should
come out true in every model of a representational semantics. If they
do not, this just shows that our semantics for the language is somehow
defective, perhaps that we are wrong about the meanings of certain
expressions. But this is only a test of the adequacy of the semantics, not
a sign that we also have an analysis of necessary truth. The latter notion
is simply presupposed by this approach to semantics. This is not an
objectionable presupposition, by any means, so long as our goal is to
illuminate the semantic rules of the language and not the notion of
necessary truth.
I have sketched some simple and general criteria that guide the
construction of a representational semantics, a theory of 'x is true in
W', for variable W. As I explain in Chapter 4, Tarski's analysis of the
logical properties gives rise to an alternative approach to semantics,
one whose aim is to characterize the relation 'x is true in L', for some
range of languages L. The intuitive importance of such a theory, and
the general guidelines appropriate to it, are not nearly so apparent as
those of representational semantics. To get a clear idea of these guide
lines, and to see how they differ from those I have just sketched, we
need to take a close look at Tarski's account of logical truth and logical
consequence.

3
Tarski on Logical Truth

My remark that Tarski's account involves the notion of 'x is true in L'
for variable L would seem odd to anyone familiar with his original
analysis but unfamiliar with modern presentations of it. There is no
mention in Tarski's article of any range of languages, or of any
notion of relative truth, of 'truth in'. The remark is appropriate only,
so to speak, in hindsight, as the natural way of viewing the model-theoretic
definitions that emerge from Tarski's account. In Chapter 4,
I explain how making a few minor (though somewhat confusing)
changes in Tarski's original account yields a recognizable model-theoretic
semantics. But to see exactly how the resulting semantics
differs from a representational semantics, it is important to start from
the beginning, with a clear understanding of Tarski's original definitions
and their underlying motivation.
I approach Tarski's account of logical truth and logical consequence
indirectly, by considering first a simpler account developed by Bolzano
nearly a century earlier.1 The two accounts are remarkably similar;
indeed, Tarski initially entertains what is, for all intents, precisely the
same definition as Bolzano's, but modifies it for reasons I will eventually
explain. But in spite of the striking similarity in the two accounts,
Tarski was unaware of Bolzano's work until several years after the
initial publication of his article. The key difference between the two
accounts is simply that Bolzano employs substitution where Tarski uses
the more technical, and for the purposes more adequate, notion of
satisfaction.
Bolzano on Logical Truth


We normally think of logical truth as a single property that holds or
fails to hold of sentences within a language. Both Bolzano and Tarski
adopt a slightly different approach, in effect treating logical truth as a
relation that holds between sentences and sets of atomic expressions in
the language, or alternatively, as a collection of properties that can be
obtained from this relation by fixing its second argument.2 On either
Bolzano's or Tarski's account, there will be sentences that are logically
true with respect to one set of atomic expressions, but not logically true
with respect to another. The logical truth of such sentences depends,
as Bolzano puts it, on which expressions we take to be variable and
which we take to be fixed. To use Tarski's phrase, it depends on which
expressions we treat as logical constants.
According to Bolzano, what is distinctive about logical truths is that
they remain true when we exchange some subset of their component
expressions for any other expressions of similar type.3 Bolzano notes,
for example, that the sentence
If Caius was a man then Caius was mortal
remains true regardless of the subject term we put in the two positions
currently occupied by 'Caius'. On the other hand, the sentence that
results from inserting the term 'omniscient' in the position occupied by
'mortal' is false. Thus, Bolzano concludes, this sentence is logically true
when we allow only the first sort of exchange, though it is not logically
true when we also allow substitutions for the expression 'mortal'. We
cannot say the sentence is or is not logically true simpliciter, since this
will depend, as Bolzano sees it, on which sorts of substitutions we
permit.
Following Bolzano, I shall call the terms we allow to vary variable
terms and those we keep fixed fixed terms. Assuming that all grammati
cally correct sentences are either true or false, we can take expressions
to be of similar type just in case they are members of the same
grammatical category. We can then describe Bolzanos account of
logical truth as follows. A sentence S is logically true with respect to a set ℱ
of fixed terms just in case S is true and every sentence S' that results from
making permissible substitutions for expressions in S is also true. A
substitution of a for b in S is permissible if a and b are expressions of the
same grammatical category, if all of the occurrences of b are uniformly
replaced by a, and if expression b contains no member of ℱ, the set of
fixed terms.
Consider an example. The following sentence is true:
Snow is white or snow is not white.
Also true is the sentence that results from substituting 'grass' for
'snow',
Grass is white or grass is not white,
and the sentence that results (ignoring the awkward placement of
'not') from the uniform replacement of 'is white' by 'is green':
Snow is green or snow is not green.
Even simultaneous substitution of 'grass' and 'is green' produces the
true sentence
Grass is green or grass is not green.
It seems reasonable to assume that the truth of this sentence survives
any grammatically appropriate substitution for the expressions 'snow'
and 'is white'.4 In which case, the sentence 'Snow is white or snow is not
white' is logically true with respect to any set ℱ that contains the terms
'or' and 'not'.
According to Bolzano's account, though, this sentence is not logically
true with respect to every selection of fixed terms. So for instance
if ℱ contains just the three expressions 'not', 'snow', and 'is white', that
is, if the expression 'or' is considered a variable term, then the sentence
can easily be turned into a false one. Thus, the false sentence
Snow is white and snow is not white
results from the substitution of the expression 'and' for 'or', a
substitution permitted on this selection of ℱ. Similarly, if we take as our only
fixed terms 'or' and 'is white', we can presumably get the false sentence
Grass is white or grass is necessarily white
by making grammatically appropriate substitutions for the two
remaining variable terms. On the other hand, 'Snow is white or snow is
not white' does seem to be logically true with respect to the set containing
'snow', 'is white', and 'or'. Regardless of what we put in for 'not', the
resulting sentence will, by all appearances, be true.
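These examples can be checked mechanically. What follows is a rough Python sketch of the substitutional test for this one sentence form; the lists of names and predicates, the stipulated facts, and the helper function true are assumptions made only for the illustration.

from itertools import product

# Stipulated facts and word lists for a tiny fragment of English.
FACTS = {("snow", "is white"), ("grass", "is green"), ("roses", "are red")}
NAMES = ["snow", "grass", "roses"]
PREDICATES = ["is white", "is green", "are red"]
CONNECTIVES = {"or": lambda p, q: p or q, "and": lambda p, q: p and q}

def true(name, pred, conn):
    """Truth of the sentence 'NAME PRED CONN NAME is not PRED'."""
    atom = (name, pred) in FACTS
    return CONNECTIVES[conn](atom, not atom)

# Fixed terms 'or' and 'not': every permissible substitution for the name and
# the predicate yields a true sentence, so 'Snow is white or snow is not white'
# passes the test with respect to this choice of fixed terms.
assert all(true(n, p, "or") for n, p in product(NAMES, PREDICATES))

# Fixed terms 'not', 'snow', 'is white': now 'or' is a variable term, and the
# substitution of 'and' yields the false sentence 'Snow is white and snow is
# not white', so the sentence fails the test for this selection.
assert not all(true("snow", "is white", c) for c in CONNECTIVES)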
The result of Bolzano's substitutional test for logical truth depends
crucially on the set of terms we decide to hold fixed. Bolzano was well
aware of, and indeed welcomed, this dependence, chalking it up to the
fact that different terms have different logics. Thus, the sentence
If Tom knew Carolyn to be a dean then Tom believed Carolyn
to be a dean
is logically true when we hold fixed the three expressions 'if-then',
'knew', and 'believed'; substituting at will for 'Tom', 'Carolyn', and 'to
be a dean' never yields a false sentence. On the other hand, when we
consider 'knew' to be a variable term, we get substitution instances like
If Tom wanted Carolyn to be a dean then Tom believed
Carolyn to be a dean.
One of these instances will no doubt be false, if not this particular
instance (Tom may be prone to wishful thinking) then one that results
from further substitutions for the other variable terms. We might take
this to indicate that our sentence is a truth of, say, epistemic logic, but
not a truth of, say, mere doxastic logic.
For any language there will be as many versions of logical truth, as
many logics, as there are subsets of the atomic expressions of the
language. This is just to view Bolzano's account as providing, instead
of a relation between sentences and sets of expressions, the collection
of properties that can be obtained from that relation by holding constant
one of its arguments, the set ℱ of fixed terms. If we settle on the
empty set, if we hold no expressions fixed, then in general no sentence
will qualify as logically true. At the other end of the scale, allowing all
atomic expressions into ℱ, we find that logical truth merely reduces to
truth. Thus, the sentence 'Snow is white' is logically true if we fix both
'snow' and 'is white'. This, simply because it is true; if all of a sentence's
component expressions are in ℱ, there are no permissible substitution
instances to worry about.

The Violation of Persistence


On all of these points, Tarski's conception of logical truth coincides
with Bolzano's. Tarski argues, though, that the substitutional test
described above should not be considered a sufficient condition for logical
truth, but only a necessary condition. As I have characterized Bolzano's
definition, it has an obvious drawback: logical truth depends not only
on our selection of ℱ but on the expressive resources of the language
as well.5 This is where Tarski and Bolzano part company.
Suppose we were applying Bolzano's definition to a very simple
language, one containing two names, say, 'George Washington' and
'Abe Lincoln'; two predicates, 'was president' and 'had a beard'; and
some truth functional operators, say, 'or' and 'not'. Now, when we
consider the two names to be our only variable terms, the sentence
'Abe Lincoln was president' passes Bolzano's test for logical truth,
though the sentence 'Abe Lincoln had a beard' does not. Both of these
are in fact true sentences. But in the first case, when we substitute the
only other available name we get a true sentence, 'George Washington
was president', while in the second case, the same substitution produces
a false one, 'George Washington had a beard'.
Of course, the difference here is just a quirk of our language. The
world has plenty of people who have never been president. If our
meager language had a name for just one of them, say 'Ben Franklin',
the sentence 'Abe Lincoln was president' would suffer the same fate as
'Abe Lincoln had a beard': neither would be logically true on the
imagined selection of fixed terms.
This example shows that Bolzano's substitutional test is liable to give
results that depend on purely accidental features of the language.
With our current choice of ℱ, the sentence 'Abe Lincoln was president'
has only two substitution instances, one that results from the trivial
substitution of 'Abe Lincoln' for itself, the other resulting from the
substitution of 'George Washington' for 'Abe Lincoln'. But this seems
artificially restrictive in light of the fact that, had we simply increased
our list of names, the test would obviously have produced opposite
results. Thus it happens that 'Ben Franklin was president' does not
result from making a permissible substitution in 'Abe Lincoln was
president', 'Ben Franklin' not being an expression of the language. But
'Ben Franklin' could have been introduced into an existing category,
could have been given an appropriate interpretation, and thereby
would have provided us with a false substitution instance of the sentence
at issue. In that case 'Abe Lincoln was president' would not have
come out logically true.
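The dependence on the stock of names can be displayed in a few lines. In the sketch below (Python, with the relevant facts simply stipulated), the hypothetical helper passes_test returns the verdict of the substitutional test on 'Abe Lincoln was president' for a given list of available names.

FACTS = {("Abe Lincoln", "was president"), ("Abe Lincoln", "had a beard"),
         ("George Washington", "was president")}

def passes_test(names):
    """Is 'Abe Lincoln was president' logically true when the names listed in
    `names` are the only variable terms?  (Every substitution instance must be true.)"""
    return all((n, "was president") in FACTS for n in names)

# With only the two original names, every substitution instance is true ...
assert passes_test(["Abe Lincoln", "George Washington"])

# ... but adding the otherwise irrelevant name 'Ben Franklin' reverses the verdict.
assert not passes_test(["Abe Lincoln", "George Washington", "Ben Franklin"])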
We should characterize this problem more precisely. What under
lies our intuition here is perhaps best isolated by considering contrac
tions rather than expansions of the language, by considering the con
verse of the problematic case we have encountered. It seems clear that
on our ordinary conception, logical truth has at least the following
property: if a sentence S is not a logical truth of a given language, then
neither should it become a logical truth simply by virtue of the deletion
of expressions not occurring in S. After all, nothing directly relevant to
this sentence, to its meaningfulness or its truth, has been changed. If
'Abe Lincoln was president' is not logically true, it should not become
so merely through the deletion of an otherwise irrelevant name, 'Ben
Franklin', from the language.
If the property of not being logically true should persist through
contractions of the language, the property of being logically true should
persist through expansions. This desideratum, which I will call the
requirement of persistence, presumably remains binding regardless of
how we specify our set ℱ of fixed terms. That is, the property of being
logically true with respect to a given ℱ should persist through simple expansions
of the language.
As we have seen, Bolzano's definition of logical truth fails to meet
the requirement of persistence. Tarski's account aims to avoid this
defect by appealing to the notion of the satisfaction of a sentential function,
where Bolzano relies on the considerably simpler though less powerful
notions of truth and substitution.
Sentential Functions
We can think of a sentence as the limiting case of a sentential function,
where this latter notion permits variables of appropriate type to take
the place of ordinary expressions.6 So, for example, if 'x' is a variable of
appropriate type, the linguistic object 'x was president' will be called a
sentential function; it is exactly like the sentence 'Abe Lincoln was
president' save that a variable has been inserted in the position here
occupied by the name 'Abe Lincoln'. Sentential functions may contain
more than one variable, indeed more than one type of variable; thus 'x
g' might be the sentential function that results from allowing 'g' to take
the place of 'was president' in 'x was president'. I will say that sentences
are just sentential functions that contain no variables.7
The notion of a variable should not be confused with that of a variable
term. A variable term is an ordinary expression of the language, one
that differs from a fixed term only for the immediate purposes of our
test for logical truth. Thus, in the last section we chose ℱ to include 'was
president' and to exclude 'Abe Lincoln'; the former was thereby
dubbed a fixed term, the latter a variable term. But neither is a variable.
Hence, regardless of our selection of ℱ, 'Abe Lincoln was president' is
a sentence, that is, a sentential function that contains no variables.
To simplify the transition from Bolzano's definition of logical truth
to Tarski's more complicated account, it will help to introduce the
notion of a sentential function into the former. We can think of
Bolzano's test for logical truth proceeding in the following way. First
we introduce a stock of variables for each grammatical category. Next
we replace each variable term in sentence S with a variable of
appropriate type, ensuring that multiple occurrences of a term receive the
same variable, and distinct terms, distinct variables.
The result of this operation is a sentential function S' containing
only expressions that occur in the chosen set of fixed terms. We now
consider the collection of substitution instances of S', that is, the
collection of sentences that result from S' by placing expressions
drawn from appropriate categories back in the variable positions. If
every member of this set is true, then S is judged logically true with
respect to the current selection of fixed terms; if one or more is false,
then S is not logically true with respect to that selection.
According to the present account, the violation of persistence
observed in the last section arises from the limited stock of names
available to insert for 'x' in the sentential function 'x was president'. Tarski's
account of logical truth allows us to go beyond the actually available
substitution instances of this sentential function. The key concept is, of
course, satisfaction. Using it, Tarski bestows some measure of persistence
on logical truth.
From Substitution to Satisfaction
It is impossible to give a general definition of satisfaction applicable
to all languages; this for various reasons, not the least of which are the
so-called semantic paradoxes. But in simple cases the concept is pretty
intuitive. So, for instance, satisfaction is the relation that holds between
Abe Lincoln, the person, and the sentential function 'x was president',
but that fails to hold between Ben Franklin, the person, and this same
sentential function, in the first case because Lincoln was president, in
the second because Franklin was not.
Let us try to capture this intuitive description in a somewhat more
formal setting. For the moment we will confine our attention to sentential
functions which, like 'x was president', contain a single variable
standing in a position ordinarily occupied by a name. It will be conve
nient to assume that our metalanguage contains the object language
and hence, in particular, that any sentential function of the object
language is also a sentential function of the metalanguage.
Let '. . . x . . .' be a schematic placeholder for an arbitrary sentential
function of the sort described, that is, a sentential function containing
(perhaps multiple) occurrences of a single name variable. We
will use ''. . . x . . .'' as a schematic placeholder for a name (in the
metalanguage) of that same sentential function, 'n' as a placeholder
for any name, and '. . . n . . .' as a placeholder for the sentence that
results from replacing all occurrences of 'x' in the sentential function
'. . . x . . .' with the name that replaces 'n'. Using these notational
conventions, we can offer a schema, analogous to Tarski's T-schema,
that partially captures the concept of satisfaction:8
(1)    n satisfies '. . . x . . .' if and only if . . . n . . .

This schema, and the various constraints placed on its instantiation,
are stated in the metametalanguage. But like Tarski's celebrated T-schema
for characterizing the notion of truth, all instances are sentences
of the metalanguage. Thus, we find among the instances
(1.1)    Abe Lincoln satisfies 'x was president' if and only if Abe
         Lincoln was president
and
(1.2)    Ben Franklin satisfies 'x was president' if and only if Ben
         Franklin was president.

These instances sustain our intuitive remark that satisfaction is a relation
that holds between Lincoln and 'x was president' because Lincoln
was president, while it fails to hold between Franklin and 'x was president'
since Franklin was not president.
Like Tarski's T-schema, (1) is important not because its instances
provide a definition of satisfaction, but because they provide a fairly
precise measure of the success of any attempted definition. Schema (1)
gives us a clear idea of what a relation, so to speak, must look like before
it deserves to be called satisfaction. We will return to this topic in a later
section; for now, let us remark on the obvious bearing of our schema
on substitution.
On the assumption that our metalanguage contains the object lan
guage, any object language name will be a permissible replacement
for 'n'. Furthermore, the sentence that results from inserting this
name into the sentential function, that is, the sentence that replaces
'. . . n . . .' in our schema, will also be a sentence of the object
language. Let us introduce ''. . . n . . .'' as a placeholder for a name
of this sentence. We can now offer a second schema:
(2)    n satisfies '. . . x . . .' if and only if '. . . n . . .' is true in L.

This schema is a direct consequence of (1) and Tarski's T-schema.9
The only additional constraint we must place on the instantiation of (2)
is that 'n' be replaced by a name actually appearing in the vocabulary
of the object language L. For otherwise '. . . n . . .' would not be a
sentence of L, and hence never true in L.
Consider again the language that caused problems for Bolzano's
account. Since 'Abe Lincoln' is a name in this language, we are allowed
the following instantiation of (2):
(2.1)    Abe Lincoln satisfies 'x was president' if and only if 'Abe
         Lincoln was president' is true in L.

However, since 'Ben Franklin' is not a name occurring in L, but only a
name in our metalanguage, the restriction placed on schema (2)
prevents us from taking the further step to
(2.2)    Ben Franklin satisfies 'x was president' if and only if 'Ben
         Franklin was president' is true in L.

When Bolzano's test for logical truth turned in positive results for
'Abe Lincoln was president' (holding fixed 'was president'), we
lamented the fact that there was a simple expansion of the language that
would provide a false substitution instance for the function 'x was
president'. Our two schemata allow us to clarify this hazy intuition.
Franklin was never president, and so by (1.2) he does not satisfy the
function 'x was president'. This latter fact, along with the presence of
schema (2), supports the counterfactual claim that 'Ben Franklin was
president' would have been false had 'Ben Franklin' been an object
language name with the same meaning it enjoys in the metalanguage.
For then we could have carried out the forbidden instantiation of (2)
to (2.2).
Of course, this all suggests a simple way to circumvent miscarriages
of the substitutional test, a way to meet the requirement of persistence
while still retaining the spirit of Bolzano's account. The idea is to rule
out the logical truth of 'Abe Lincoln was president' simply by virtue of
the fact that there is some perhaps unnamed object that fails to satisfy
'x was president'. Then no expansion of L which merely includes a
name of this object can affect the logical status of our original sentence.
That, in short, is Tarski's strategy for getting around the shortcomings
of the original, substitutional definition. But to make good on this idea,
we first have to generalize the notion of satisfaction in two ways, one
simple and one not so simple.
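The strategy can be pictured with a small sketch in which satisfaction is modeled extensionally: objects, not names, are checked against the sentential function, so the verdict no longer depends on which names the language happens to contain. The domain and the extension of 'was president' are stipulated for the illustration.

# The domain of objects, whether or not the language has names for them.
DOMAIN = ["Lincoln", "Washington", "Franklin"]
PRESIDENTS = {"Lincoln", "Washington"}

def satisfies_x_was_president(obj):
    """Does the object satisfy the sentential function 'x was president'?"""
    return obj in PRESIDENTS

# 'Abe Lincoln was president' is not logically true (names the only variable
# terms), because some object (Franklin, who happens to have no name in L)
# fails to satisfy 'x was president'.  Adding a name for him later cannot
# change this verdict.
assert not all(satisfies_x_was_president(o) for o in DOMAIN)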
Multiple Variables
The simple generalization is aimed at handling sentential functions
with more than one variable. Thus, when we want to test 'Abe Lincoln
was president or George Washington was not president' for logical
truth (with names the only variable terms), we first convert this to the
sentential function 'x was president or y was not president'. Any
permissible substitution will here result in a true sentence, since both
available names name presidents. But it also happens that any single
object we choose will either satisfy 'x was president' or satisfy 'y was not
president'. Yet there are obvious expansions of the language that
would give us false substitution instances of this function, witness 'Ben
Franklin was president or Thomas Jefferson was not president'. What
we need is an account of the satisfaction relation that captures this
intuition, one that allows us to say that Franklin and Jefferson, as a pair
and in that order, fail to satisfy 'x was president or y was not president'.
We will say that sentential functions are satisfied by sequences, where
a sequence is any function that assigns an object to each of the variables
introduced for the purpose of testing logical truth.10 Thus, no sequence
that assigns Ben Franklin to 'x' and Thomas Jefferson to 'y' will
satisfy 'x was president or y was not president'; on the other hand,
sequences that assign a president to 'x' or a nonpresident to 'y' will
indeed satisfy this sentential function.
In the spirit of our earlier discussion, we can think of sequences as
providing a technique for simultaneously entertaining a collection of
possible expressions for substitution into our sentential function,
one for each variable. Rather than consider a general schema, which
would be premature at this point, we can see this by employing a
sample instantiation:
(.1)    Sequence f satisfies 'x was president or y was not president' if
        and only if f('x') was president or f('y') was not president.

In (.1) 'f' names a sequence and 'f('x')' and 'f('y')' are complex
names of objects, the objects that result from applying sequence f to,
respectively, variables 'x' and 'y'.11 If language L also contained the
names 'f('x')' and 'f('y')' (though of course it does not) then we
would have in addition:
(.2)    Sequence f satisfies 'x was president or y was not president' if
        and only if 'f('x') was president or f('y') was not president' is
        true in L.

In this eventuality the sentence mentioned in the second half of (.2),
'f('x') was president or f('y') was not president', would be a permissible
substitution instance of the sentential function 'x was president or y was
not president'. Further, if f assigns Franklin to 'x' and Jefferson to 'y',
the substitution instance would be false. Which is to say, Bolzano's
substitutional test would have produced negative results had the object
language contained a few more aptly chosen names, perhaps 'Ben
Franklin' and 'Thomas Jefferson', perhaps 'f('x')' and 'f('y')'.
This technique works for sentential functions with arbitrarily many
variables standing in place of names. Consequently, we can now use
the notion of satisfaction to define logical truth with respect to certain
choices of ℱ. Suppose ℱ contains all the atomic expressions of a language
except perhaps one or more names. In other words, let us
assume that any atomic expression which is not a name is a fixed term.
Let S' be any sentential function that results from the sentence S after
we replace all variable terms with variables, ensuring of course that the
same variable is used for all occurrences of a given variable term, and
that distinct variable terms receive distinct variables. Then we can say
that S is logically true with respect to ℱ just in case S' is satisfied by all
sequences.
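A sequence can be modeled as a simple assignment of objects to variables, and the definition then says that logical truth (with names the only variable terms) is satisfaction by every such assignment. A minimal Python sketch, with a stipulated domain and extension:

from itertools import product

DOMAIN = ["Lincoln", "Washington", "Franklin", "Jefferson"]
PRESIDENTS = {"Lincoln", "Washington", "Jefferson"}

def satisfies(seq):
    """Does the sequence satisfy 'x was president or y was not president'?"""
    return seq["x"] in PRESIDENTS or seq["y"] not in PRESIDENTS

# Every sequence built only from the objects actually named in the language
# satisfies the function ...
named = ["Lincoln", "Washington"]
assert all(satisfies({"x": a, "y": b}) for a, b in product(named, named))

# ... but the sequence assigning Franklin to 'x' and Jefferson to 'y' does not,
# so the sentence is not logically true with respect to this choice of fixed terms.
assert not satisfies({"x": "Franklin", "y": "Jefferson"})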
This definition meets the requirement of persistence in the follow
ing way: If a sentence is logically true (with names the only variable
terms), then it will remain logically true even if the language is ex-
panded to include additional names. For regardless of what object the
name names, that individual has already been found to satisfy the
sentential function in question. In this sense, satisfaction puts at our
disposal all possible names that might be incorporated into the lan
guage.
On Generalizing Satisfaction
Accounting for logical truth in terms of satisfaction avoids certain
problems in the substitutional approach, but it encounters some new
ones as well. So far the account is not nearly so general as Bolzano's,
which allowed atomic expressions of any grammatical category to
be considered variable terms. Thus, if we choose ℱ to contain 'Abe
Lincoln' but not to contain 'was president', Bolzano's test for the logical
truth of 'Abe Lincoln was president' simply involves substituting various
predicate expressions into the sentential function 'Abe Lincoln g'.
This operation is no more problematic than inserting names in the
sentential function 'x was president'.
We have generalized the notion of satisfaction to the point where we
can handle sentential functions with multiple variables, so long as the
variables stand proxy for names. Now the not so simple generalization
mentioned earlier must be faced: explaining the satisfaction of
sentential functions that contain variables of arbitrary grammatical
type.

First a word of motivation. It should be clear that the same intuitions
that led us to forsake substitution for satisfaction are at stake even
when the variable terms selected include expressions other than
names. For example, suppose our language allows only the expressions
'was president' and 'had a beard' to be inserted into the sentential
functions 'Abe Lincoln g' and 'George Washington g'. The first of these
has all true substitution instances, whereas the second has a true instance,
'George Washington was president', and a false instance,
'George Washington had a beard'. Consequently, the sentence 'Abe
Lincoln was president' is logically true with respect to the fixed terms
'Abe Lincoln' and 'George Washington', while the sentence 'George
Washington was president' is not. Here again something seems amiss,
something precisely parallel to the problem solved earlier by invoking
satisfaction. There are obvious possible expansions of the object
language that contain false substitution instances of 'Abe Lincoln g',
for instance any language with the predicate 'wore a powdered wig'
interpreted, of course, as it is in English.
Bolzano's substitutional account violates the requirement of persistence
regardless of what we take to be variable terms: contractions or
expansions of the language can always affect the substitution class of a
particular type of variable, whether it stands in place of a name, a
predicate, a sentential connective, or something else. Our simple ac
count of satisfaction allows us to avoid this problem so long as the
variable terms are all names; the move from substitution to satisfaction
thereby bestows some measure of persistence on logical truth. But
since the danger of artificially restricting our substitution class is en
tirely general, not limited to names, we need to offer a generalized
account of satisfaction to handle arbitrary choices of ℱ.
So much for motivation; now for the problems. First recall schema
(1), which seemed to capture the intuitive characteristic of satisfaction
that makes it a natural extension of substitution:
(1)    n satisfies '. . . x . . .' if and only if . . . n . . .

Recall that we require the sentential function to have a single variable
in name position, and that 'n' must be replaced with a name (which at
least occurs in the metalanguage) of some object or other. A parallel
schema for sentential functions with a single predicate variable might
look like this:
(3)    p satisfies '. . . g . . .' if and only if . . . p . . .
Our first sign of trouble comes when we try to instantiate (3). In
instantiating (1) we inserted a name for both occurrences of 'n'; thus,
with (3) we might try replacing 'p' with a predicate:
(3.1)    Was president satisfies 'Abe Lincoln g' if and only if Abe
         Lincoln was president.

But unlike instantiations of (1), (3.1) is not even a grammatical sentence.
This may be easy to overlook, especially if we confuse it with the
perfectly grammatical sentence
(3.2)    'Was president' satisfies 'Abe Lincoln g' if and only if Abe
         Lincoln was president.

But (3.2), although grammatically correct, is not what we are after.
Satisfaction is explicitly intended not to be a relation between a linguistic
entity (here a predicate) and a sentential function.12 Alternatively
we might try
(3.3)    Having been president satisfies 'Abe Lincoln g' if and only if
         Abe Lincoln having been president.

Or perhaps
(3.4)    The set of former presidents satisfies 'Abe Lincoln g' if and
         only if Abe Lincoln the set of former presidents.
Both of these instantiations start out fine, but quickly degenerate
into nonsense. The first begins with the name of a property, the
property Abe Lincoln has just in case he was once president. The
second begins with the name of a set, the set that contains all the
individuals, including Abe Lincoln, who once were president. But
neither of these names can comfortably occupy the predicate position
in which it later finds itself.
When dealing with sentential functions containing variables other
than those standing in place of names, we obviously need a more
complex schema than (1). This is clear from the purely grammatical
troubles spawned by (3). But the real problem is not simply finding the
right phrasing for a schema, phrasing that produces a collection of
tolerably grammatical sentences of the metalanguage. Rather, the
problem lies in knowing what exactly we are looking for.
Semantic Presuppositions of Persistence
Our ultimate aim is for satisfaction to take the place of substitution in
our definition of logical truth, to take its place even when the expres
sions substituted are not names. But satisfaction is a relation, and all
relations hold or fail to hold between objects of one sort or other. In
our search for a schema with grammatically proper instances, this was
reflected in the fact that the term 'satisfies' must be sandwiched between
two names, which of course is not the case in (3.1). In (3.3) and
(3.4), on the other hand, we have taken heed that satisfaction is a
relation between objects, that 'satisfies' must be flanked by names. But
it then becomes obscure precisely why such a relation would be
thought of as a simple replacement for substitution; the obvious demon
stration of its relevance, inserting a name of the first object into the
sentential function that constitutes the second object, again produces
only an ungrammatical string of signs.
One thing this exercise demonstrates is that satisfaction is not as
innocent an extension of substitution as it might at first seem. Let me
explain. Bolzano's test for logical truth requires a division of expres
sions into grammatical categories, basically into groupings whose
members can be freely exchanged within sentences without risking
ungrammaticality. Such exchanges often produce sentences that dif
fer in truth value. This shows that although there may be some similar
ity running through each category, something that accounts for the
endurance of grammaticality through such exchanges, there are also
differences. In particular, there are semantic differences, differences in
the way members of the same category contribute to the truth value of sentences
in which they occur. Thus, we know that the grammatically similar ex-
pressions 'Abe Lincoln' and 'George Washington' do something different
when they appear in the sentences 'Abe Lincoln had a beard' and
'George Washington had a beard', simply because the first is true and
the second false. Just so we know that the grammatically similar 'was
president' and 'had a beard' must contribute differently to truth values
of sentences in which they occur, as must the grammatically similar 'or'
and 'and'. In each case the evidence is simple and incontrovertible: the
presence of pairs of sentences diverging only in the occurrence of
these expressions and, of course, in truth value.
When we maintain the purely substitutional approach there is no
need to provide any account of how the various expressions we classify
as grammatically similar contribute to the truth value of their contain
ing sentences; we are officially interested only in the end result of that
contribution, in the truth or falsity of the sentence. In this way the
substitutional definition allows us to keep our semantic theory to a
minimum.13 However, as soon as we try to extend the substitutional
approach using satisfaction, we are forced to hazard at least a simple
theory about the semantic functioning of expressions within a given
grammatical category, a theory of how they each contribute, and differ
in their contribution, to the truth values of sentences in which they
occur.
This change in perspective was easily disguised in the case of names,
thanks in large part to the apparent simplicity of schema (1). In the last
section we described satisfaction as a simple technique for taking into
account possible expansions of the list of expressions which, in our
object language, fall into the category of names. But we then assumed,
without making our assumption explicit, that the range of possible
names (for example, 'Ben Franklin') that could be incorporated into
the object language was determined by the range of objects (for example,
Ben Franklin) that could be picked out or denoted by a name.
But this assumption requires that we see the category of names as
held together by more than just the grammatical similarity of its mem
bers. We could certainly introduce an expression, say 'Nix', that we
allow to occur in all and only positions that also admit 'Abe Lincoln',
but whose contribution to the truth value of a sentence cannot be
explained by appeal to the fact that an object named by 'Nix' satisfies a
given sentential function. Perhaps every sentence containing 'Nix',
including 'Nix was president or Nix was not president', is simply false,
with complete disregard for what else might be going on in the sentence.
No purely grammatical grounds for rejecting this possible
expansion of the object language spring to mind, none, at any rate,
that do not also threaten the inclusion of 'Ben Franklin'. But the
possibility of such a bizarre name does not seem to call into question
the logical truth of 'Abe Lincoln was president or Abe Lincoln was not
president'. 'Nix' would be a name only in grammar.
When we used satisfaction to extend Bolzano's account, we assumed
that our grammatical category was also a semantic category, that the
expansion of the category was constrained not only by the requirement
of grammatical interchangeability, but also by the requirement that
each member of the category display some common semantic feature.
It seems clear that the names 'Abe Lincoln' and 'George Washington'
both pick out (or name, or denote, or refer to) individuals. Furthermore,
the fact that these expressions pick out different individuals can
alone account for any divergence in truth value among sentences in
which they occur, at least in the simple languages we have considered
so far. It was this that made it so natural to turn from names to objects,
to individuals that could have been named by expressions in the lan
guage. It seemed obvious that for each such individual our substitution
classnow taken to be a semantic categorycould have been appro
priately extended. On the other hand, the possible expansion of our
substitution class to include an expression that behaves like 'Nix' is
ruled out by the move to satisfaction. This hardly seems an objection
able bias.
Well-Behaved Expansion and Satisfaction Domains
Let us now return to the problem of generalizing the notion of satisfac
tion to arbitrary sentential functions. Satisfaction must still be a rela
tion between objects of some sort and sentential functions (which are
also, of course, a type of object). The difficulty we encountered with
schema (3) arises because we are now dealing with expressions not
naturally thought of as names, whose contribution to the truth value of
a sentence is not easily reduced to the simple naming of an individ
ual. Consequently, it is not obvious how to extend satisfaction to the
new breed of sentential function. In particular, it is not obvious what
sort of object, if any, might stand in the satisfaction relation to these
sentential functions.
Let us call the class of individuals, things that could have been picked
out by names, the name domain of the satisfaction relation. Intuitively,
this is the collection of objects that stand in the satisfaction relation to
some sentential function displaying a single name variable. Our prob
lem is now to specify the predicate domain of the satisfaction relation, the
class of objects that can satisfy sentential functions which contain a
single predicate variable. But more important, if our account of logical
truth is to achieve a generality that approaches that of Bolzanos, we
need a fairly clear idea of what should guide us in choosing a satisfac-
tion domain regardless of the type of variable appearing in the sentential
function.
The most recent considerations suggest the required perspective on
satisfaction. Our aim is to provide a notion of logical truth that persists
through expansions of the language. But the considerations of the last
section make it clear that our concern with persistence does not extend
to all conceivable expansions of the language, to all conceivable altera
tions in Bolzanos substitution classes. After all, any grammatical cate
gory could be expanded to include a semantically ill-behaved expres
sion like 'Nix'. Thus, for any sentence that is logically true and contains
at least one variable term, there will be a possible expansion of the
language in which it fails Bolzano's test, in which it is not logically true
with respect to the same choice of constant terms.
But such possibilities are not the sort that led us to impose the
requirement of persistence. Rather, we were concerned with perfectly
well-behaved expansions of the language, with the introduction of
expressions whose semantic behavior seemed no more different from
that of present members of the substitution class than the behavior of
the present members differed from one to another. The possibility of
adding a name like 'Ben Franklin' is quite another thing from the
possibility of adding a name like 'Nix'.
'Abe Lincoln' and 'George Washington' both stand in a particular
relation to two members of the name domain of the satisfaction rela
tion, Lincoln and Washington; these individuals are named by the ex
pressions. The remainder of the domain comprises all individuals that,
intuitively, could have stood in that same relation to other expressions,
and hence to expressions that contribute to the truth value of a sen
tence in a fashion similar to 'Abe Lincoln' and 'George Washington'.
Thus, if a given sentential function is satisfied by all of these individ
uals, no semantically well-behaved expansion of the category of names
will provide a false substitution instance of that sentential function.
The other expressions of our language ('was president', 'or', and so
on) do not name objects; only names, so to speak, name.14 But it
seems equally clear that the category of, say, predicates admits of seman
tically well-behaved expansions, just like the category of names. Obvi
ously the inclusion of 'wore a powdered wig' should be permitted,
while the inclusion of 'nixes', the predicate analogue of 'Nix', should
not. The only question is whether the notion of satisfaction offers a
technique for clarifying this intuition, for distinguishing appropriate
from inappropriate expansions of the category of predicates.
With a simple language like the one we have defined, it clearly does.
In fact there are several ways to circumscribe the new domain. Perhaps
the most intuitive is to take the predicate domain of the satisfaction
relation to contain properties, for example, having been president,
having had a beard, having worn a powdered wig, and so forth. This does not
commit us to the claim that predicates name properties, only to the
claim that expansions of the category of predicates are constrained by
the availability of properties.15 The underlying idea is that the pre
dicates in our language contribute to the truth value of sentences by
asserting, of some object, that it possesses a particular property. Thus,
for any given property, we could appropriately expand the category of
predicates to include one which asserts possession of that property, just as
the category of names can be expanded, for any given individual, to
include one which names that individual.
We can now see exactly why schema (3) caused so many problems
not encountered with schema (1). When we are dealing with sentential
functions containing a single predicate variable, we will need a schema
of the following sort:
(3')

P satisfies . . . g . . . if and only i f . . . p . . .

One of the conditions governing the instantiation of (3') will have to


run as follows: p must be replaced by an expression that asserts
possession of the property named by the expression that replaces P. Thus,
the following would be a proper instantiation of (3')(3.5)

Having been president satisfies Abe Lincoln g if and only if


Abe Lincoln was president.

Now consider the following alteration of schema (1):


(T)

N satisfies . . . x . . . if and only i f . . . to . . .

This restatement is precisely parallel to (3'), and would require a


similar condition to govern instantiations of W and to. However,
since names do not assert possession of properties, but rather name
individuals, the condition would now run: to must be replaced by an
expression that names the individual named by the expression that
replaces N. But of course every name names the individual named
by itself So our restriction can be built directly into the schema by
changing N to towhich of course yields (1)and demanding that
both occurrences of to simply be replaced by a single name.
We are prevented from making a similar simplification of (3') since
no predicate asserts possession of a property named by itself, and
likewise, no name asserts possession of a property named by itself.
This merely because predicates do not name, and names do not assert
possession of, properties. But this does not indicate that satisfaction is
any less natural an extension of substitution in the case of predicates
than it is in the case of names. It indicates only that names and pre-

44

Tarski on Logical Truth

dicates contribute differently, in both the object language and the


metalanguage, to the truth values of sentences in which they occur.
But in both cases the move to satisfaction represents an attempt to
isolate that contribution and to extrapolate the way in which further
expressions of a similar type might function.
I remarked that with our simple language there are several ways to
delineate the predicate domain of the satisfaction relation.16 I sug
gested populating this domain with properties, since the resulting
account of the semantic functioning of predicates seems intuitively
appealing. Intuition aside, this move is not generally adopted. Rather,
it is standard to take this domain to consist of sets of individuals drawn
from the name domain. We can then think of predicates as asserting,
of some individual, that it is a member of a given set. Once again we
need not claim that predicates name sets, only that expansion of the
category of predicates is constrained by set availability.
The present language gives us no reason to prefer one of these
options over the other; there are no sentences in which the contribu
tion of predicates is not equally well explained either as the assertion of
set membership or as the assertion of property possession. But this is
not to say that the two are equivalent. There may on the one hand be
sets that do not correspond to any property, or on the other hand
multiple properties shared by all (and only) members of a single set. So
in either case the possible expansions of the category of predicates will
be differently circumscribed.
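On the set-based option the predicate domain can be modeled directly. A brief Python sketch, with a stipulated name domain; the helper satisfies_abe_lincoln_g is of course an invention of the sketch:

from itertools import combinations

# A stipulated name domain; on the standard option the predicate domain is the
# collection of sets of individuals drawn from it.
NAME_DOMAIN = ["Lincoln", "Washington", "Franklin"]
PREDICATE_DOMAIN = [set(c) for r in range(len(NAME_DOMAIN) + 1)
                    for c in combinations(NAME_DOMAIN, r)]

def satisfies_abe_lincoln_g(s):
    """Does the set s satisfy the sentential function 'Abe Lincoln g'?  On this
    account a predicate asserts membership in a set, so the function is
    satisfied by exactly those sets containing Lincoln."""
    return "Lincoln" in s

# The function is not satisfied by every member of the predicate domain (the
# empty set fails, for one), so 'Abe Lincoln was president' is not logically
# true when predicates are treated as variable terms.
assert not all(satisfies_abe_lincoln_g(s) for s in PREDICATE_DOMAIN)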
There are two remaining categories whose satisfaction domains we
must specify: the category containing 'or' (which I will call a sentential
connective) and the category containing 'not' (which I will call a sentential
operator). For simplicity, we can take sentential functions with a single
connective variable to be satisfied by binary truth functions, and those
with a single operator variable to be satisfied by unary truth functions.
Again, there is no need to say that sentential connectives and operators
name truth functions, only that there is a fixed relation that holds
between each of them and some member of the appropriate satisfac
tion domain. I will say connectives and operators express truth func
tions. Thus, taking 'c' to be a connective variable, the sentential function
'Abe Lincoln was president c George Washington had a beard' is
satisfied by the truth function expressed by 'or', though by neither the
truth function expressed by 'and' nor the truth function expressed by
'nor'.
Clearly, our choice of domains here severely restricts the possible
expansions of these two categories. According to the present account,
there are only sixteen possible sentential connectives and four possible
sentential operators. The category of operators could not, for example, be expanded to include 'necessarily' as it is ordinarily understood.


Although this term may be grammatically similar to 'not', its contribution to the truth value of sentences in which it occurs cannot be reduced to the expression of a unary truth function. Our decision thus treats 'necessarily' with the same disdain earlier afforded 'Nix'; both
are, from the present perspective, semantically ill-behaved.
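The counts just cited can be checked directly. The following sketch (mine, not the book's; Python is used purely for illustration) enumerates the possible unary and binary truth functions as tables of outputs: there are 2^2 = 4 of the former and 2^4 = 16 of the latter.

```python
from itertools import product

# A unary truth function is a table of outputs for the inputs True, False;
# a binary truth function is a table of outputs for the four input pairs.
unary = [dict(zip([True, False], outs))
         for outs in product([True, False], repeat=2)]
binary = [dict(zip(list(product([True, False], repeat=2)), outs))
          for outs in product([True, False], repeat=4)]

print(len(unary))    # 4  possible sentential operators
print(len(binary))   # 16 possible sentential connectives

# The truth functions expressed by 'not' and 'or' are among them:
negation = {True: False, False: True}
disjunction = {(True, True): True, (True, False): True,
               (False, True): True, (False, False): False}
assert negation in unary and disjunction in binary
```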
We can give schemata parallel to (1') and (3') that characterize satisfaction for sentential functions with single connective or operator variables. Thus, for the former we will have:

(4)  B satisfies '. . . c . . .' if and only if . . . b . . .

Here we require that the expression replacing 'b' must express the binary truth function named by the expression that replaces 'B'. Finally, for sentential functions with single operator variables, we will use the following schema:

(5)  U satisfies '. . . o . . .' if and only if . . . u . . . ,

requiring that the expression replacing 'u' must express the unary truth function named by the expression replacing 'U'.
A Persistent Account of Logical Truth
Our technique for extending the notion of satisfaction to sentential
functions with an arbitrary number of variables is again to employ
sequences. But now our sentential functions may also contain variables
of arbitrary type. Say that a sequence is any function that assigns to each variable an object from the appropriate satisfaction domain. Let S(x, g, c, o) be a schematic placeholder for any sentential function all of whose name variables are among x1, . . . , xk; all of whose predicate variables are among g1, . . . , gk; all of whose connective variables are among c1, . . . , ck; and all of whose operator variables are among o1, . . . , ok. Let 'S(x, g, c, o)' stand for a name of that sentential function, and finally, let S(x/n, g/p, c/b, o/u) be the result of uniformly replacing variable xi with expression ni, gi with pi, ci with bi, and oi with ui (for 0 < i ≤ k), wherever they occur in that sentential function. For any given sequence f, we require that ni name the individual f(xi), that pi assert possession of the property f(gi), that bi express the binary truth function f(ci), and that ui express the unary truth function f(oi).
We then have:
(6)  Sequence f satisfies 'S(x, g, c, o)' if and only if S(x/n, g/p, c/b, o/u).
If a given sequence f assigns Ben Franklin to 'x1', the property of having worn a powdered wig to 'g1', and the truth function expressed by 'and' to 'c1', then the following are sample instantiations of (6):

(6.1)  Sequence f satisfies 'x1 g1' if and only if Ben Franklin wore a powdered wig.

(6.2)  Sequence f satisfies 'x1 was president c1 Abe Lincoln g1' if and only if Ben Franklin was president and Abe Lincoln wore a powdered wig.

When, for a given sequence f, we also have available object language expressions n1, . . . , nk; p1, . . . , pk; b1, . . . , bk; and u1, . . . , uk, which meet the above conditions on naming, assertion, and expression, we can offer the following analogue of schema (2):

(7)  Sequence f satisfies 'S(x, g, c, o)' if and only if 'S(x/n, g/p, c/b, o/u)' is true in L.

Thus if f is a sequence that assigns George Washington to 'x1' and the property of having been president to 'g1', we get:

(7.1)  Sequence f satisfies 'x1 g1' if and only if 'George Washington was president' is true in L.

Together (6) and (7), like (1) and (2) before them, demonstrate the
connection between satisfaction and substitution. Satisfaction is just an extension, though not as simple an extension as it first appeared, of
substitution. It allows us to extend our various substitution classes to
include expressions from any semantically well-behaved expansion of
the language. An expansion is well-behaved just in case any new mem
ber of a given category of expressions stands in the specified relation to
an object in the appropriate satisfaction domain. In the case of our
present language we allow new names if they name individuals, new
predicates if they assert possession of properties, new connectives and
operators if they express appropriate truth functions. Our upcoming
definition of logical truth will thus meet the requirement of persis
tence, with the implicit qualification we have all along been assuming: logical
truth will be persistent through semantically well-behaved expansions
of the language.
Before applying the generalized notion of satisfaction to the defini
tion of logical truth, it should again be emphasized that we have not
given a definition of satisfaction, either of the general notion, which
resists definition in principle, or even of satisfaction for sentential
functions of our current object language. Instances of schema (6) can
be taken only as adequacy conditions that constrain the formal definition of satisfaction for the present language. A formal definition of satisfaction for arbitrary sentential functions of the language would
proceed by a simple recursion on the set of sentential functions.
Once we have access to a definition of satisfaction for arbitrary
sentential functions of a particular language, we can give the following
definition of logical truth. Let S' be any sentential function that results
from uniformly replacing all atomic expressions in S, other than members of ℱ, with variables of appropriate type. Then we will say that S is logically true with respect to ℱ just in case S' is satisfied by all sequences. This is Tarski's definition of logical truth.
Logical Consequence
How should we define logical consequence? One route that might
seem attractive is a simple reduction of this notion to that of logical
truth. Certainly, if a sentence S is a logical consequence of a set of
sentences K = {K1, . . . , Kn}, then the single conditional sentence whose antecedent is the conjunction of the members of K, and whose consequent is S, must be logically true. That is, S will be a logical consequence of K if and only if the sentence
If K1 and . . . and Kn, then S
is logically true. Given our definition of logical truth, it might seem
natural to rally this observation into an account of the consequence
relation.
There are three problems with this idea, not overwhelming, but still
significant. First, we would have to assume that the language we are
dealing with contains the expressions 'and' and 'if . . . then', or the equivalent, and this assumption would restrict the applicability of the account.17 Second, we would have to assume that these expressions are always included in ℱ, and this again restricts the generality of the
suggested definition.18 Finally, and most important, the reduction will
work only if K is finite, or, alternatively, if the language allows infinitely
long sentences. For otherwise we could never form the antecedent of
our conditional sentence.
For these reasons, neither Bolzano nor Tarski tries to reduce the
notion of logical consequence to that of logical truth. But their defini
tions of consequence are, not surprisingly, quite similar, Tarski's being a simple emendation of Bolzano's. We can describe them quite suc-
cinctly.
Say that an inference or argument of a language L is any ordered pair
(K, S) in which S is a sentence of L, and K a set of sentences of L. An
expression will be said to occur in an argument (K, S) if it occurs either in S or in some member of K; we will call the argument truth preserving just in case either S is true or some member of K is false. So, for
instance, any argument whose conclusion (that is, S) is the English
sentence Abe Lincoln was president will be truth preserving, as will
any argument with the sentence George Washington had a beard
among its premises (that is, in K). This simply because the first sen
tence is true and the second false.
According to Bolzano, an argument (K, S) is logically valid with respect to a selection of fixed terms just in case it is truth preserving and every argument (K', S') that results from making one or more permissible substitutions for expressions occurring in (K, S) is also truth preserving. A permissible substitution is defined in the obvious way: all members of ℱ must be left untouched, and the replacement of variable expressions must be uniform throughout the argument. A sentence S is a logical consequence, with respect to ℱ, of a set of sentences K just in case the argument (K, S) is logically valid with respect to ℱ.
As Bolzano defines it, the logical consequence relation, like the
property of logical truth, depends crucially on our selection of fixed
terms. Just as every true sentence can be rendered logically true by including all its atomic expressions in ℱ, so too every truth-preserving argument becomes logically valid when we fix all of its component terms. Obviously any argument that concludes with the true sentence 'Abe Lincoln was president' will be logically valid when ℱ contains each expression appearing in the argument; such arguments will in fact be logically valid on any selection of ℱ that includes both 'Abe Lincoln' and 'was president'. Since all permitted substitution instances share the same conclusion, the continued truth of that sentence ensures the truth preservation of those instances. On the other hand, if ℱ is empty, if we hold no terms fixed, then in general the only valid arguments will be those in which the same sentence appears both as premise and conclusion, that is, where S is a member of K.
According to the present account, the logical consequence relation is
not persistent. Whether S is a logical consequence of K does not
depend only on our choice of fixed terms; it can also be affected by the
size of the substitution classes for the variable terms. In particular, the
relation will not persist through semantically well-behaved expansions
of the language, although our choice of ℱ remains constant. Thus, when we fix the atomic predicates in our previous language, the sentence 'Abe Lincoln was president' is a logical consequence of any set containing the sentence 'Abe Lincoln had a beard'; this due to our omission of names for bearded nonpresidents. Had we merely included the expression 'Robert E. Lee', interpreted, as in English, to denote the Confederate gentleman, Lincoln's presidency would not have turned up a consequence of his having a beard.
Tarski's definition employs satisfaction, thereby ensuring that the consequence relation will persist through semantically well-behaved expansions of the language. Let us take an inferential function (or, better, an argument form) to be any ordered pair whose first member is a set of sentential functions and whose second member is a single sentential function. Thus, an argument is just an argument form in which no (free) variables occur. We will say that an argument form (K', S') is satisfaction preserving on sequence f just in case f either satisfies S' or does not satisfy some member of K'.
Suppose now that (K', S') is an argument form that results from
uniformly replacing all atomic expressions in argument (K, S), other
than members of ℱ, with variables of appropriate type. Then we will say that (K, S) is logically valid with respect to ℱ just in case (K', S') is satisfaction preserving on all sequences. Finally, sentence S is a logical consequence, with respect to ℱ, of set K if the corresponding argument (K, S) is logically valid with respect to ℱ. This is Tarski's definition of logical consequence.
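To see the definition in action, here is a small illustrative sketch (mine, not the book's), using sets in place of properties, as in the standard choice of predicate domain mentioned earlier, and a deliberately tiny stock of individuals. The premise–conclusion pattern mirrors the Robert E. Lee example above: once a bearded nonpresident is available in the name domain, the argument form from 'x1 g1' to 'x1 g2' is no longer satisfaction preserving on every sequence.

```python
from itertools import product

# Toy satisfaction domains (illustrative assumptions only): a few
# individuals for the name domain, two sets for the predicate domain.
individuals = ['Lincoln', 'Washington', 'Franklin', 'Lee']
presidents  = {'Lincoln', 'Washington'}
bearded     = {'Lincoln', 'Lee'}
predicate_domain = [presidents, bearded]

# A sequence assigns an individual to the name variable 'x1' and
# sets to the predicate variables 'g1' and 'g2'.
sequences = [{'x1': n, 'g1': p, 'g2': q}
             for n, p, q in product(individuals, predicate_domain, predicate_domain)]

def satisfies(seq, atom):
    name_var, pred_var = atom     # an atomic sentential function such as ('x1', 'g1')
    return seq[name_var] in seq[pred_var]

def logically_valid(premises, conclusion):
    # Tarski's test: the argument form must be satisfaction preserving
    # on every sequence.
    return all(satisfies(seq, conclusion)
               or not all(satisfies(seq, p) for p in premises)
               for seq in sequences)

# From 'x1 g1' (gloss: had a beard) to 'x1 g2' (gloss: was president):
print(logically_valid([('x1', 'g1')], ('x1', 'g2')))   # False: Lee is bearded, not president
print(logically_valid([('x1', 'g2')], ('x1', 'g2')))   # True: the trivial case
```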
By replacing truth preservation with satisfaction preservation, we
avoid the violation of persistence noted two paragraphs back. Once we
have specified the class of well-behaved expansions of the language (that is, once we have chosen satisfaction domains and defined the satisfaction relation for arbitrary sentential functions), we are assured that any argument judged logically valid will remain so throughout those expanded versions of the language. In this sense Tarski's defini-
tion of logical consequence, like that of logical truth, successfully meets
the demand for persistence.
Recapitulation
Tarski's goal is to provide an analysis of the notions of logical truth and logical validity, to provide definitions that are, as he puts it, 'close in essentials to the common concepts'. To this end, he develops an
account that refines the substitutional definitions first proposed by
Bolzano. He notes that the substitutional tests must be demoted from
the status of necessary and sufficient conditions to mere necessary
conditions; to achieve persistence, the limitations encountered with
actual substitution classes must be overcome.
The idea behind Tarski's solution is simple. If a given sentential
function is satisfied by all sequences, then naturally all its permissible
substitution instances will be true.19 But of course the converse of this
does not always hold: a sentential function may survive the substitutional test, though not be satisfied by certain sequences. Thus, no sentence (or argument) can pass Tarski's more stringent test without passing Bolzano's as well, and where the tests produce different re-
sults, the problem will invariably lie in the limited resources available
in one or more of the original substitution classes. So any divergence
marks a potential failure of persistence for Bolzano's account.
Now Tarski's solution, though simple in conception, may not be so
simple in execution. The new complexity is an immediate consequence
of the concern over persistence: the goal of achieving a persistent
account of the logical properties makes no sense except in the context
of a theory of (or assumptions about) how existing members of a
category contribute, and how potential members could contribute, to
the truth values of sentences in which they occur. The required ac
count of satisfaction must provide such a theory, both to give precise
(and plausible) sense to the demand for persistence, and of course to
give us resources with which to meet that demand.
In arriving at a definition of satisfaction for a sufficiently broad class
of sentential functions, we attribute to each expression classed as a
variable term a specific semantic function: naming an individual, asserting possession of a property, expressing a truth function, and so forth. By populating a satisfaction domain with the appropriate type of object (individuals, properties, truth functions), we take a stand
on how the existing category might be expanded: we condone new
members so long as their semantic contribution, their contribution to
the truth value of sentences, can be charted in a fashion similar to that
of the present members. Thus, any expression that names an individ
ual is treated as a potential member of the category containing 'Abe Lincoln', any expression that asserts possession of a property may belong to the category containing 'was president', and any expression that expresses either a unary or a binary truth function is admitted into the category containing either 'not' or 'or'. We will eventually see how
certain other semantic categories, specifically quantifiers, can be han
dled within this same framework.
Once we have an account of satisfaction, Tarski's definitions run as follows: S is logically true if and only if S' is satisfied by all sequences (where S' results from S by replacing all atomic expressions, except those in ℱ, by variables). Similarly, S is a logical consequence of K if and only if every sequence either satisfies S' or fails to satisfy some member of K'. Note that, given this latter definition, logical truth can be seen as
a reduced form of logical consequence: S will be logically true just in
case it is a consequence of the empty set, or, alternatively, if it is a
consequence of any set of sentences whatsoever.

4
Interpretational Semantics

In Chapter 1, I remarked that the standard, model-theoretic definition of consequence is an outgrowth of Tarski's account. I will begin this chapter by explaining how, upon making some minor adjustments, the direct application of Tarski's account gives way to a recognizable model-theoretic semantics, that is, to the characterization of a relation, 'x is true in y', holding between sentences and
models. However, the conception of model-theoretic semantics that
emerges is strikingly different from that presupposed in the represen
tational approach sketched in Chapter 2. For reasons that will become
obvious, I adopt the term interpretational semantics for the Tarskian
conception of model-theoretic semantics.
Interpretational and representational semantics occasionally inter
sect. That is, we sometimes find that one and the same model-theoretic
semantics can be viewed from either the interpretational or the repre
sentational perspective. I will discuss a couple of points of intersection:
one is the simple semantics devised for the language of Chapter 2, the
other a slightly more intricate semantics for the language of Chapter 3.
But in spite of the occasional intersection, interpretational and repre
sentational semantics are radically different approaches to semantics,
approaches whose adequacy must be judged by completely different
standards. In Chapter 2, I sketched the standards applied to a repre
sentational semantics; as we will see, the counterparts for interpreta
tional semantics are simply the criteria already discussed for delineat
ing satisfaction domains and for defining satisfaction. For in an
interpretational semantics our class of models is determined by the
chosen satisfaction domains; our definition of truth in a model is a
simple variant of satisfaction.

Distinguished Sentential Functions


How do we get to model-theoretic semantics from Tarski's account of
the logical properties? The steps are by and large just minor modifica
tions of the definitions described in the last chapter. Unfortunately the
end result of these modifications is a certain blurring of the careful
distinction Tarski draws between sentences and sentential functions,
between the ordinary expressions of the language and the variables we
introduced for defining the logical properties.
In considering the following changes, it will be convenient to assume
we are interested in logical truth and logical consequence only with
respect to a particular selection ℱ of fixed terms. When speaking of our sample language from Chapter 3, I assume that ℱ is the set containing the atomic expressions 'or' and 'not'. In this way we avoid repeated mention of ℱ and can simply speak of the fixed terms and the variable terms of the language. But it is important to keep in mind the (henceforth implicit) relativization to our choice of ℱ. In particular, it is crucial to remember that variable terms are not variables. Variable terms are ordinary atomic expressions of the language, differing from fixed terms only in their omission from ℱ.
Recall that in testing a sentence S for logical truth, we first convert S to a sentential function S'. S' results from uniformly replacing variable terms with variables of appropriate type. It does not matter which variables we choose in constructing S' so long as distinct variable terms receive distinct variables, and multiple occurrences of a single variable term are converted to multiple occurrences of a single variable. So, for example, the sentence 'Abe Lincoln was president' corresponds quite indiscriminately to the sentential functions 'x1 g1', 'x1 g2', 'x2 g2', and so
forth.
The first modification we will make ensures that a specific sentential
function S* corresponds to each sentence S in the language. To do this
we need only assign a specific variable, once and for all, to each variable
term (different variables, of course, to different terms). Thus, we might take 'x1' to be the variable corresponding to 'Abe Lincoln', 'x2' to be the variable corresponding to 'George Washington', 'g1' the variable corresponding to 'was president', and 'g2' the variable corresponding to 'had a beard'. We need not choose special variables for 'or' and 'not', the remaining atomic expressions, since at present they are both members of ℱ.
We can now take the distinguished sentential function S* corresponding to S to be the result of replacing each variable term in S with its assigned variable. So, for example, 'x1 g2 or x2 g1' is the distinguished sentential function corresponding to 'Abe Lincoln had a beard or George Washington was president'.
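As a concrete gloss (an illustrative sketch, not the book's machinery), the fixed assignment of variables to variable terms can be pictured as a small table, and the passage from S to S* as a uniform textual substitution:

```python
# Hypothetical fixed assignment of variables to the variable terms.
assigned_variable = {'Abe Lincoln': 'x1',
                     'George Washington': 'x2',
                     'was president': 'g1',
                     'had a beard': 'g2'}

def distinguished(sentence):
    # Replace each variable term by its assigned variable, uniformly.
    for term, variable in assigned_variable.items():
        sentence = sentence.replace(term, variable)
    return sentence

print(distinguished('Abe Lincoln had a beard or George Washington was president'))
# -> x1 g2 or x2 g1
```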

This change allows us to simplify our sequences considerably. Currently a sequence assigns objects from appropriate satisfaction do-
mains to many variables that never appear in the new, distinguished
sentential functions. But it can hardly make any difference what is
assigned to, say, predicate variable 'g3' when we know that any distinguished sentential function S* contains occurrences of at most 'g1' and 'g2'. So without modifying our account of satisfaction, we can simply take the domain of a sequence to be limited to the chosen variables, that is, to the variables assigned to specific variable terms in the language. For the present language, such a limited sequence f* will be any function that assigns members of the name domain to 'x1' and 'x2', and members of the predicate domain to 'g1' and 'g2'. Clearly these simpler
sequences suffice for our current needs.
Tarski's test for logical truth can now be characterized in the following way: we convert a sentence S to the distinguished sentential function S* that results from replacing each variable term with its assigned variable. We then run through our new, pruned down sequences to see whether they all satisfy S*. If so, S is logically true; if not, not. The test for logical validity proceeds similarly. First we replace an argument (K, S) with its distinguished argument form (K*, S*), that is, the result of replacing all variable expressions occurring in (K, S) with their chosen variables. We then check to see that (K*, S*) is satisfaction preserving on all limited sequences f*. If so, S is a logical consequence
of K; if not, not.
D-Sequences and D-Satisfaction
We are now halfway to model-theoretic semantics. The remaining
change is equally slight, though potentially more confusing. Since we
have set up a one-to-one correspondence between variable terms and
variables, and between sentences and sentential functions, there is a
way to achieve the same results as our present tests without bothering to
detour through variables and sentential functions. The new method
will yield a recognizable model-theoretic semantics for our language.
First we must introduce a new type of sequence, one whose domain
consists of the variable terms of the language rather than the chosen
variables. Let us say that a direct or d-sequence is any function that assigns
to each variable term an object from the appropriate satisfaction do
main. Thus, a d-sequence will assign Ben Franklin directly to the ex
pression 'Abe Lincoln', whereas a limited sequence assigns Franklin to 'x1', the variable chosen to correspond to 'Abe Lincoln'.
For any d-sequence f, let f* be the corresponding limited sequence, that is, the function that assigns the same object to a chosen variable (for example, 'x1', 'g1') as f assigns to the corresponding variable term ('Abe Lincoln', 'was president'). We can now introduce a
relation, parallel to satisfaction, which holds between d-sequences and
sentences. Specifically, say that a d-sequence f d-satisfies sentence S if and only if the corresponding limited sequence f* satisfies the distin-
guished sentential function S*.
Although d-satisfaction is defined in terms of satisfaction, it is im
portant not to confuse the two notions. For one thing, if we briefly
reflect on schema (6) of the last chapter, it will be clear that sentences,
sentential functions with no variables, are only trivially satisfied or not
satisfied by sequences. A true sentence is satisfied by all sequences,
while no sequence satisfies a false sentence. Thus, for any limited
sequence f* we have the following instantiation of (6):

(6.2)  Sequence f* satisfies 'Abe Lincoln was president' if and only if Abe Lincoln was president.

Since Lincoln was president, every sequence satisfies 'Abe Lincoln was president'; had he not been, no sequence would.
Now suppose that f is a d-sequence that assigns Franklin to 'Abe Lincoln' and the property of having worn a powdered wig to 'was president'. If f* is the corresponding limited sequence, we will have the following instantiation of (6):

Sequence f* satisfies 'x1 g1' if and only if Ben Franklin wore a powdered wig.

Since 'x1 g1' is the distinguished sentential function corresponding to 'Abe Lincoln was president', our definition of d-satisfaction gives us:

D-sequence f d-satisfies 'Abe Lincoln was president' if and only if sequence f* satisfies 'x1 g1'.

And from these we get:

(6.2D)  D-sequence f d-satisfies 'Abe Lincoln was president' if and only if Ben Franklin wore a powdered wig.
The comparison of (6.2) and (6.2D) points up the difference be
tween satisfaction and d-satisfaction. In (6.2) the makeup of sequence
f* is quite immaterial, since the sentential function 'Abe Lincoln was president' has no variables; it is simply a true sentence. But it is clear
from the derivation of (6.2D) that d-sequences do not trivially d-satisfy
true sentences, nor will they trivially fail to d-satisfy false sentences.
In effect, d-satisfaction tells us whether a sentence would have been
true had its variable terms been interpreted in accord with the assign
ments of the d-sequence. In fact 'Abe Lincoln was president' is a true sentence. But had 'Abe Lincoln' named Ben Franklin and had 'was president' meant wore a powdered wig, then this sentence would have been true just in case Ben Franklin wore a powdered wig. Since Franklin did not, as a matter of fact, wear powdered wigs, this sentence would have been false on the interpretation suggested by d-sequence f.
We have now, of course, arrived at model-theoretic semantics,
though our ungainly terminology could stand some revision. But be
fore making the final, terminological change, let us note how Tarski's
definitions of logical truth and logical consequence survive the altera
tions already in place. It is a trivial consequence of the former defini
tion and our present account of d-satisfaction that a sentence is logi
cally true just in case it is d-satisfied by every d-sequence. Just so, an
argument is logically valid, its conclusion a logical consequence of
its premises, if and only if it is d-satisfaction preserving on all
d-sequences. There is now no need to move to sentential functions or
argument forms to apply Tarski's definitions.
Our final terminological change will be this: replace 'd-sequence' with 'model', and the phrase 'is d-satisfied by' with 'is true in'. Thus,
a sentence will be logically true if and only if true in all models, and an
argument logically valid just in case it is truth preserving in all models.1
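A model, so understood, can be pictured as follows (an illustrative sketch only, again with sets standing in for properties): the assignments go directly to the variable terms, and truth in the model is settled by whether the individual assigned to the name falls in the set assigned to the predicate.

```python
# One d-sequence, i.e. one model, for the little language (illustrative).
wig_wearers = {'Franklin'}
presidents  = {'Lincoln', 'Washington'}

model = {'Abe Lincoln': 'Franklin',           # name domain: individuals
         'George Washington': 'Washington',
         'was president': wig_wearers,        # predicate domain: sets
         'had a beard': presidents}

def true_in(model, sentence):
    name, predicate = sentence                # e.g. ('Abe Lincoln', 'was president')
    return model[name] in model[predicate]

# 'Abe Lincoln was president' is false in this model, since the individual
# assigned to 'Abe Lincoln' (Franklin) is not in the set assigned to
# 'was president' (the wig wearers).
print(true_in(model, ('Abe Lincoln', 'was president')))   # False
```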
Semantically Well-Behaved Reinterpretation
Consider for a moment the nature of d-sequences, or of models, as we
are presently calling them. In Chapter 3, we saw how the technique of
satisfaction is meant to extend Bolzano's substitutional tests for logical
truth and logical validity, our stock of sequences allowing considera
tion of all semantically well-behaved expansions of the various substi
tution classes. The technique of d-satisfaction, truth in models, em
bodies precisely the same extension of the substitutional account,
though the style of the tests is slightly modified. In particular, no
syntactic manipulations of the sentences or arguments being tested, no
exchanges of variables for variable terms, are now required.
We can think of the new technique in various ways. For example, we
can obviously consider it a simple abbreviation, somewhat confusing
perhaps, of Tarski's original method. As such, we must imagine the
variable terms of the language doing double duty: on the one hand,
they act as ordinary expressions of the language, taking part in genu
ine sentences whose logical properties we hope to reveal. But when it
comes time to test for logical truth and logical validity, the variable
terms also act as variables, their replacement by actual variables now
rendered superfluous thanks to the slight technical modifications de
scribed in the last section. If expressions like 'Abe Lincoln' are just considered odd-looking variables, then d-sequences are simply sequences and d-satisfaction simply satisfaction.
There is another view of the model-theoretic technique that is con
siderably more natural, and equally faithful to the basic idea of
Tarski's definitions. As I suggested above, we can think of a d-sequence as providing a possible reinterpretation of the variable terms in
our language, of the atomic expressions not currently being held fixed.
From this perspective, our model theory provides a characterization of
x is true in L for a limited range of languages L. Thus, the class of
d-sequences, or models, does not encompass all conceivable reinterpre
tations of the variable terms, but instead encompasses all semantically
well-behaved reinterpretations. No model suggests that 'Abe Lincoln' might have contributed to the truth value of sentences in a manner akin to 'Nix'. Rather, since 'Abe Lincoln' presently contributes by
naming an individual, the permissible reinterpretations of this expres
sion are taken to be constrained by the availability of nameable
individuals, by the name domain of the satisfaction relation. Similarly,
permissible reinterpretations of predicates are limited by the predicate
domain. And of course had 'or' and 'not' been left out of the set ℱ of
fixed terms, their range of interpretation would be constrained by the
connective and operator domains, respectively.
Among these interpretations we will find what is called the intended
interpretation. For our language, the intended interpretation is the
model that assigns Abe Lincoln to 'Abe Lincoln', having been president to 'was president', and so forth. This is simply the trivial reinter-
pretation of the variable terms, the interpretation in which all expres
sions of the language mean what they actually mean. If our class of
models omitted this assignment, we could not be sure that a logically
true sentence was not actually false, that is, false when the variable
expressions are interpreted in the normal way. Similarly, if the in
tended interpretation were not included in the test, we would have no
general assurance that logically valid arguments in fact preserve truth.
Obviously it makes no real difference whether we see models as
interpretations of our language, or whether we simply view variable
terms as variables of convenience, with models cast as ordinary
assignments to these not-so-ordinary variables. The difference is
purely heuristic. Either way, I will call the present conception of
model-theoretic semantics the Tarskian or interpretational view. Accord
ing to it, our models are meant to range over all semantically well-behaved interpretations of some subset of the expressions in the lan-
guage. Let us now contrast the interpretational perspective with the
representational view described in Chapter 2.

Samples of the Contrasting Views


The best way to emphasize the contrast between the interpretational
and representational views is to consider specific examples. In Chapter
2, I sketched a simple representational semantics for a language con
taining five atomic expressions: three were sentences ('Snow is white', 'Roses are red', and 'Violets are blue'), one a connective ('or'), and one an operator ('not'). Obviously we could devise a more finely grained
grammatical analysis of this simple language, but it will be an instruc
tive exercise to devise a Tarskian semantics while retaining the coarser
parsing.
Let us begin the old way, introducing variables for each type of
atomic expression. For sentences we will use 'p1', 'p2', . . . ; for connectives 'c1', 'c2', . . . ; and for operators 'o1', 'o2', . . . . The next step is to specify satisfaction domains for the various types of variables. As we saw in Chapter 3, this requires that we hazard a simple theory of
how existing members of a category contribute, and differ in their
contribution, to the truth values of sentences in which they occur. We
can again take connectives and operators to express appropriate truth
functions, and construct the respective satisfaction domains accord
ingly. Thus, it remains to settle on the sentence domain of the satisfaction
relation.
As before, we will opt for the simplest plausible satisfaction domain.
In the present language, we can explain the semantic contribution of
any embedded sentence to its embedding sentence in one of two ways:
either the component sentence says something true or it says something
false. Thus, we can take the sentence domain to consist of the two truth
values, true and false. Again, there is no reason to say sentences name
truth values, any more than predicates name properties. I will simply
say they have truth values; 'Snow is white' has the value true because it says something true, specifically, that snow is white.
Sequences will, of course, be functions that assign truth values to
sentence variables, binary truth functions to connective variables, and
unary truth functions to operator variables. The analogue of schema (6)
for the present language will then run as follows:
(8)  Sequence f satisfies 'S(p, c, o)' if and only if S(p/s, c/b, o/u).

Recall that 'S(p, c, o)' is to be replaced by the name of an arbitrary sentential function of the language, and 'S(p/s, c/b, o/u)' is to be
replaced by a sentence that results from inserting appropriate expres
sions of the metalanguage for variables of the sentential function. The
appropriateness of the replacement expressions will now be determined by the following instantiation conditions: sentence si must have the truth value f(pi); connective bi must express the binary truth function f(ci); and operator ui must express the unary truth function f(oi). Thus, if f assigns false to 'p1' and the truth function expressed by 'and' to 'c1', we would have the following instantiation of (8):
(8.1)  Sequence f satisfies 'Snow is white c1 p1' if and only if snow is white and George Washington had a beard.

Here again, the relationship between satisfaction and the substitution of possible expressions is obvious: 'George Washington had a beard' is not a sentence of the present language, nor is 'and' a connective of the language. But had they been, the sentence 'Snow is white and George Washington had a beard' would have been a false substitution instance of the sentential function 'Snow is white c1 p1'.
Naturally, in constructing (8.1) we could have chosen sentences of
the metalanguage other than 'George Washington had a beard', so long as the chosen sentence said something false, that is, actually had the truth value false. The situation is similar with 'and': any metalanguage connective that expresses the same truth function would have met the requisite instantiation condition; thus, 'moreover' might have been used in lieu of 'and'.
Let us now convert to d-sequences and d-satisfaction. Suppose we
are again interested only in logical truth and logical validity with
respect to the expressions 'or' and 'not'. In other words, our three
atomic sentences will be considered the only variable terms. For any
sentence S we will let S* be the sentential function that results from
uniformly replacing 'Snow is white' with 'p1', 'Roses are red' with 'p2', and 'Violets are blue' with 'p3'. A d-sequence (model) will be any
function whose domain is the set of atomic sentences and which assigns
a truth value, a member of the sentence domain of the satisfaction
relation, to each of those sentences. D-satisfaction (truth in a model)
will of course be defined as before: d-sequence f d-satisfies sentence S if and only if the corresponding limited sequence f* satisfies the distin-
guished sentential function S*.
Notice that there are precisely eight d-sequences, or models. Fur
ther, these models happen to be exactly the same eight functions we
introduced for our representational semantics in Chapter 2: there is a
model that assigns true to all three atomic sentences, one that assigns
false to all three, and various models that assign the remaining combi
nations of values.
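The eight models, and the test of truth in all of them, can be spelled out directly (a minimal sketch under the parsing used here, with 'or' and 'not' held fixed; again, not the book's own formalism):

```python
from itertools import product

atoms = ['Snow is white', 'Roses are red', 'Violets are blue']

# A model assigns a truth value to each atomic sentence: 2**3 = 8 models.
models = [dict(zip(atoms, values)) for values in product([True, False], repeat=3)]
print(len(models))   # 8

def true_in(model, sentence):
    operator, *parts = sentence
    if operator == 'atom':
        return model[parts[0]]
    if operator == 'not':
        return not true_in(model, parts[0])
    if operator == 'or':
        return true_in(model, parts[0]) or true_in(model, parts[1])

white = ('atom', 'Snow is white')
# 'Snow is white or not snow is white' is true in all eight models,
# and so counts as logically true on the Tarskian test.
print(all(true_in(m, ('or', white, ('not', white))) for m in models))   # True
```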
But of course, according to the Tarskian view these models are not
meant to represent possible configurations of the world, as the repre
sentational view would have it; rather, they are meant to canvass semantically well-behaved reinterpretations of the atomic sentences of the language. Thus, at present the sentence 'Snow is white' says some-
thing true. Yet it could have been provided with another interpre
tation, an interpretation in which it said something false, say, that
Washington had a beard. The d-sequences that assign false to this sen
tence are meant to take account of these possible reinterpretations.
Complex sentences which are d-satisfied by every d-sequence (that is, sentences which are true in all models) are just those whose truth
would survive any semantically well-behaved reinterpretation of the
atomic sentences of the language.
We have finally arrived at the alternate view of truth tables described
in Chapter 2. And we can emphasize the difference between our
interpretational and representational semantics much as we did the
difference between the two perspectives taken on our theory of truth
in a row. According to both views, our model theory supports certain
counterfactual claims about the truth values of sentences in the lan
guage. But the counterfactual claims that emerge are strikingly differ
ent. Thus, from the representational perspective our semantic theory
supports the claim that the sentence Snow is white or snow is not white
would have been true even if snow had not been white; that contin
gency is, after all, depicted by various of our models. However, from
the interpretational perspective no claim is made about what would have
happened to the truth value of our sentence had snow not been white.
Rather, our theory supports the quite different claim that this sentence
would still be true even if the component expression Snow is white
were somehow reinterpreted, perhaps given the interpretation pres
ently enjoyed by the English sentence George Washington had a
beard.
Now consider the model theory constructed for our second lan
guage. There, too, we held fixed the interpretations of 'or' and 'not', and so our models consisted of functions that assign individuals to 'Abe Lincoln' and 'George Washington', and properties to 'was president' and 'had a beard'. We have already considered at length the
interpretational perspective on these models; let us now look at them
briefly from the contrasting representational perspective. For here
again we can view our models in either way.
To provide a representational account of these models, we begin by
assuming that all the expressions of our language have their ordinary
interpretation, regardless of the assignments made by a model. The
purpose of assigning various objects to various expressions is to con
struct representations of alternative configurations of the world. The
individual assigned to 'Abe Lincoln' in a given model represents Lincoln in that model, the property assigned to 'was president' represents the property of having been president. If the individual has the
property, the model depicts a world in which Lincoln was president; if
the individual does not, the model depicts one in which Lincoln was not
president. The fact that the individual may happen to be Ben Franklin
(or perhaps an abstract object like the number one), and the property
that of having worn a wig (or perhaps that of being an even number),
has no bearing on our interpretation of Abe Lincoln or was presi
dent. On the contrary, the interpretation of these expressions, their
actual interpretation, is our key to understanding what the model
represents, what configuration of the world it depicts.
Again the difference emerges in the counterfactuals our theory
supports. The sentence 'Abe Lincoln was president' is not true in any model that assigns Franklin to 'Abe Lincoln' and the property of having worn a powdered wig to 'was president'. According to the Tarskian view, this supports a counterfactual claim about how the truth value of this sentence would have changed had 'Abe Lincoln' named Ben Franklin and had 'was president' meant wore a powdered
wig. From the representational perspective, it supports a claim about
how the truth value of this sentence would have changed had Lincoln
not been president. Here Franklin is just a convenient stand-in, the
property of wearing a wig a handy prop. The same representational
roles could have been played equally well by innumerable other objects
and properties, and in each case the moral would have been the same:
the sentence Abe Lincoln was president would have been false had
Lincoln not been president.
The Failure of Intersection
We have here two very different conceptions of model-theoretic se
mantics. According to the representational view, the models ap
pearing in our semantics are simple depictions of possible configura
tions of the nonlinguistic world, the world our language talks
about. A sentence is true in a given model just in case it would have
been true if the world had been as depicted by the model. Conse
quently if, judging by some intuitive metaphysics, all possible configu
rations of the world receive some manner of depiction, then sentences
that come out true in all models are true regardless of how the world
might be; perhaps they are true simply due to the way the language
works. Of course, should some possibilities be omitted, inadvertently
or otherwise, these results will hold only modulo the metaphysical
assumptions embodied in our semantics. This is arguably the case with
the model theory for our second language; the semantic theory does
not tell us, for instance, how the truth value of sentences would have been affected had Lincoln not existed. Perhaps there are other possi-
bilities our theory fails to cover.
According to the second conception, the Tarskian view, each model
provides a possible interpretation of certain expressions appearing in
the language, those not included in the set ℱ of fixed terms. A sentence
is true in a given model if, so to speak, what it would have said about
the world on the suggested interpretation is, in fact, the case. Thus,
sentences that come out true in all models are true regardless of how
we interpret a subset of their component expressions. Here, too, the
'regardless' must be qualified: the result holds only modulo our cir-
cumscription of the class of semantically well-behaved reinterpre
tations of the variable terms. It is assumed that 'Abe Lincoln' would not have functioned like 'Nix', or even like the considerably less bizarre 'Pegasus'. The semantic theory does not tell us how the truth values of
our sentences would react to such reinterpretations.
With the semantic theories considered in the last section, the two
conceptions seem aptly described as differences in perspective: to move
from one to the other requires nothing more than a subtle shift in
gestalt. But it would be a serious mistake to imagine that this will always
be the case. Indeed in our two simple examples we have just been
lucky; we have just hit upon a fortuitous intersection of the two ap
proaches.
Clearly, not every model-theoretic semantics allowed from the inter
pretational perspective can also be viewed representationally. In the
case of our sample languages, this becomes apparent when we con
sider different theories that emerge from different selections of ℱ, the set of fixed terms. With other choices of ℱ we encounter one of two
problems: either the resulting class of models, when seen representa
tionally, omits depictions of genuinely possible configurations of the
world, or there is simply no way to view the class of models as represen
tations.
We would have run into the first problem had 'Snow is white' been included in ℱ. On this choice of fixed terms our models would consist of functions that assign truth values to 'Roses are red' and 'Violets are blue'. These models can still be taken representationally, but as such they contain an obvious omission: we have no models that depict worlds in which snow is not white. A similar problem would arise with our second language were we to include, say, 'Abe Lincoln' and 'was president' in ℱ. Among the resulting class of models we would still find
depictions of worlds in which Washington was not president (namely,
any sequence that assigns a nonpresident to George Washington) and
worlds in which Lincoln had no beard (namely, any sequence that
assigns a property that Lincoln does not possess to 'had a beard'), but we would have no models representing worlds in which Lincoln was not president.
The second problem would have arisen, with either language, had
we excluded 'or' or 'not' from ℱ. Consider, for instance, the d-sequences we get for our second language when 'or' is considered an
additional variable term. These consist of functions that assign individ
uals to our two names, properties to our predicates, and a binary truth
function to our sole connective. If we try to view such models representationally, we must somehow imagine that 'or' receives its ordinary interpretation and that our assignment of various truth functions to this expression is just a technique for representing possible configurations of the nonlinguistic world. But there is no plausible way of understanding, representationally, models in which 'or' is assigned, say, the truth function ordinarily expressed by 'and'. This is not to say
that such models depict extremely bizarre possible worlds, worlds we
have difficulty conceiving. There is just no representational counter
part to such a Tarskian semantics.
Consider a more familiar example. Suppose L is the quantifier-free
fragment of the language of elementary number theory. Thus, L
contains such sentences as '2 + 2 = 4' and 'either 7 × 8 = 49 or 7 × 8 = 56'. A standard interpretational semantics will hold fixed the meanings of the identity predicate and the connectives, but will reinterpret the numerals ('0', '1', '2', etc.) and function symbols ('+', '×', etc.) of the language. Thus, one model might assign the empty set to '2', the set containing the empty set to '4', and set union to '+'. In this model, that is, according to this interpretation, '2 + 2 = 4' comes out false,
since the union of the empty set with itself is the empty set, not the set
containing the empty set. Such a semantics makes perfect sense from
the interpretational standpoint, but obviously cannot be viewed repre
sentationally. There is no way to construe the model described as
somehow representing a possible world in which two plus two does
not equal four. That way madness lies: '2 + 2 = 4' might well have said something false, perhaps something about the union of sets. But what it says, that is, what it actually says, is necessarily true.
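The model just described is easy to exhibit concretely (an illustrative sketch; the identity predicate keeps its usual meaning while the numerals and '+' are reinterpreted):

```python
# One interpretational model for the quantifier-free arithmetic fragment.
empty = frozenset()
model = {'2': empty,                     # '2' names the empty set
         '4': frozenset({empty}),        # '4' names the set containing the empty set
         '+': lambda a, b: a | b}        # '+' expresses set union

# On this interpretation, '2 + 2 = 4' asserts that the union of the empty
# set with itself is the set containing the empty set -- which it is not.
lhs = model['+'](model['2'], model['2'])
print(lhs == model['4'])   # False: the sentence is false in this model
```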
In all of these cases, the theories described meet the standards of
interpretational semantics, but make no sense if we apply the stan
dards of representational semantics. And it is not hard to find exam
ples of the opposite sort as well: theories that meet the requirements of
representational semantics but that violate the Tarskian conception.
Consider a simple example. Clearly, a perfectly acceptable representa
tional semantics for our second language could get by with far fewer
models than are needed for a plausible interpretational semantics.
Many of the models inherited from the interpretational semantics are

representationally isomorphic. That is, although two models might assign different individuals and properties to the names and predicates of
our language, this does not mean they depict different configurations
of the world. All that matters to the depiction is whether the individ
uals assigned to the names have or do not have the properties assigned
to the predicates. For this language a representational semantics could
get by with a small number of nonisomorphic models: sixteen, to be
exact. Viewed interpretationally, any such move would constitute an
unmotivated restriction of the class of semantically well-behaved
reinterpretations of the language, an unjustified limitation on the
name and predicate domains of the satisfaction relation. Such a se
mantics would thus be ruled out by the interpretational guidelines.
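The count of sixteen just mentioned is simply the number of ways the two individuals can be distributed over the two properties; the following one-line check (mine, included only to make the arithmetic explicit) spells it out:

```python
from itertools import product

names = ['Abe Lincoln', 'George Washington']
predicates = ['was president', 'had a beard']

# A representational "shape" of a model is fixed by a yes/no answer for
# each name-predicate pair: 2**(2*2) = 16 nonisomorphic possibilities.
shapes = list(product([True, False], repeat=len(names) * len(predicates)))
print(len(shapes))   # 16
```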
To take a more interesting example, recall from Chapter 2 our
discussion of a representational semantics for a language whose atomic
sentences are 'Snow is white', 'Snow is red', and 'Snow is green'. There
we suggested models that assign truth values to these sentences, but
with the added proviso that we exclude any model that assigns true to
more than one atomic sentence. This gives us four models rather than
the original eight, the limitation being motivated by the obvious fact
that the remaining models would not depict genuine possibilities. Now
notice that the resulting semantics would be ruled inadequate from the
interpretational standpoint. By including d-sequences that assign true
to Snow is red, we acknowledge that this sentence, though false, could
be assigned a different meaning, perhaps that Lincoln had a beard,
and thereby say something true. Similarly for Snow is green: it could
be reinterpreted to mean, say, that Lincoln was president. But if these
sentences can be assigned such interpretations individually, it must
surely be possible to so interpret them simultaneously. Ruling out
models that assign both of these interpretations at once is no more
justified than ruling out d-sequences that both assign Ben Franklin to 'Abe Lincoln' and Thomas Jefferson to 'George Washington', even
though we allow these same interpretations individually. Thus, this
restriction from eight to four models, though easily motivated from
the representational standpoint, would make little sense in interpreta
tional semantics.
Clearly, representational and interpretational semantics are entirely
different enterprises, governed by entirely different standards. They
are not simply two perspectives from which we can view an arbitrary
semantics. The two approaches do happen to come together at certain
fortuitous points, in simple theories that do not explicitly violate either
standard. Such was the case with the examples discussed in the last
section: the same class of models and the same definition of truth in a
model were equally suited to either a representational or an interpretational semantics, to either an explication of 'x is true in W' or an explication of 'x is true in L'.
Now, there is little significance in the fact that the two approaches
occasionally intersect, or that they do so where they do. The fact that
the same functions can sometimes be used as models for either type of
semantics is hardly more surprising than that a brick can both break a
window and hold up a bookshelf. But what is important to note here is
that the intersection of these approaches is not trivial. Not trivial, but
only in this somewhat trivial sense: it does not always happen. If we
had merely described two perspectives, two ways of viewing one and the
same endeavor, then every interpretational semantics would have a
representational counterpart, and every representational semantics
could also be seen interpretationally. The difference would just de
pend on how we screw up our eyes while watching the move from
model to model.

5
Interpreting Quantifiers

Before going on we should consider one final example, an example in


which the same model theory appears at first glance to satisfy both the
aims of interpretational semantics and the aims of representational
semantics. In the languages we have considered so far, quantifiers
have been conspicuously absent. Yet the standard model theory for
first-order quantified languages seems an obvious case in which inter
pretational and representational semantics intersect, or so we might
assume.
Actually, the situation is not so simple, and thus this final example
goes beyond mere illustration. The motivation underlying the tradi
tional technique of defining a first-order model seems quite straight
forward when we imagine ourselves offering a representational
semantics. But it turns out that those same models, considered interpretationally, embody a significant departure from Tarski's analysis of
the logical properties. The departure stems from the introduction of
what I call cross-term restrictions on the permissible interpretations of
expressions.
Cross-term Restrictions
Suppose our second language were supplemented with the expression
something, the unrestricted (or trivially restricted) existential quanti
fier. The standard semantics for the resulting language would have us
build models in the following way: first we choose an arbitrary set
called the universe or domain of the model; second we choose a function
that assigns an object from that set to each name in the language, and a subset of that set to each predicate.1 Truth in a model is defined recursively, the clause governing the newly introduced quantifier ensuring that 'Something was president' is true just in case some member of the universe falls in the set assigned to the predicate 'was president'.2
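A minimal sketch of such a model (the universe and assignments below are illustrative inventions, not drawn from the text) makes the quantifier clause concrete:

```python
# A standard first-order model: a universe, individuals for the names,
# and subsets of the universe for the predicates.
model = {'universe': {'Lincoln', 'Washington', 'Adams'},
         'Abe Lincoln': 'Lincoln',
         'George Washington': 'Washington',
         'was president': {'Adams'}}       # a subset of the universe

def something(model, predicate):
    # 'Something F' is true just in case some member of the universe
    # falls in the set assigned to F.
    return any(x in model[predicate] for x in model['universe'])

# True here, even though the set assigned to 'was president' contains
# neither of the individuals assigned to the two names.
print(something(model, 'was president'))   # True
```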
These models have a simple and natural motivation from the repre
sentational viewpoint. As always, the representational semantics draws
no distinction between fixed and variable terms: again, the purpose of
assigning objects of various sorts to the names and predicates is purely
representational, a technique precisely parallel to our earlier account.
The new element in our models, the universe set, provides some
added detail to our representation: it allows us to depict worlds with
various populations, and with various distributions of properties
among those populations. In particular we can represent worlds in
which, say, someone has been president, though that someone is neither
Washington nor Lincoln. And once again, according to the present
system of representation, Washington and Lincoln are always depicted
as existing, their proxies always chosen from among the universe set.
Suppose we try now to construct a complementary, interpretational
account of these models. We must first see if we can trace the boundary
between fixed and variable terms implicit in the semantics. Clearly,
since 'or' and 'not' receive the same treatment as before, they have
been given the status of fixed terms. Similarly, the names and pre
dicates are considered variable expressions, since they receive differ
ent interpretations in different models. The problem is to decide on
the status of something, the newly introduced quantifier. There are
several accounts we might give here; I will describe the two most
natural. In the end both of these come to much the same thing; in the
end neither is satisfactory.
As a first shot we might judge the expression something to have the
status of a variable term, with the range of permissible reinterpre
tations limited to variously restricted existential quantifiers. Reverting
to our pre-model-theoretic terminology, we can take a sentential function containing the variable 'E' in place of 'something' to be satisfied by
an arbitrary set. The underlying idea here is this: for each such set
there is a possible expansion of the language which contains an expres
sion that existentially quantifies over that particular set. For example, our
present language contains neither the expression someone nor the
expression somedog. But the satisfaction domain for this class of
expressions will include the set of humans and the set of dogs. Thus,
we need only define satisfaction in such a way that for some sequences f we will have:

f satisfies 'E was president' if and only if someone was president,

while for other sequences h we get:

h satisfies 'E was president' if and only if somedog was president.
If f assigns the set of humans to 'E', then the expression 'someone' quantifies existentially over the set f('E'); while if h assigns the set of dogs to 'E', the same semantic relation holds between 'somedog' and h('E').3
Here we have taken 'existential quantifier' to constitute a single semantic
category, the possible members of which differ only in their (perhaps
implicit) restrictions. The existential quantifier domain of the
satisfaction relation thus consists of these sundry restriction sets. In
choosing a domain for our model (that is, an assignment to 'something'
by our d-sequence) we are simply selecting one possible interpretation
from that semantic category.4 Hence, 'something' might
have meant someone or somedog, but not everything, everyone, eachdog,
thedog, and so on. Though perfectly understandable, these latter interpretations
have been ruled semantically ill-behaved: so interpreted,
'something' would no longer contribute as an existential quantifier to
the truth values of sentences in which it occurs.
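Put as a worked formula, this first account makes the quantifier's contribution depend only on the restriction set assigned to it. The rendering below is my own sketch of the idea, writing ext(P) for the set a d-sequence assigns to the predicate P:

```latex
% Satisfaction of the quantified sentential function, relative to a
% sequence f that assigns a restriction set to the variable 'E':
f \ \text{satisfies `E was president'}
  \iff
  f(\text{`E'}) \cap \mathrm{ext}(\text{was president}) \neq \varnothing .
% With f('E') the set of humans, this says that someone was president;
% with f('E') the set of dogs, that some dog was president.
```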
The second approach is similar. Suppose we parse our language so
that 'something' is seen as a complex expression formed by joining the
determiner 'some' with the common noun 'thing'. We might then
consider 'some' to be a fixed term and 'thing' to be the expression
subject to reinterpretation. According to this view, our semantics treats
three expressions as members of F ('some', 'or', and 'not'), and the
remainder as variable expressions. Our language happens to have only
one common noun, but there are obviously semantically well-behaved
expansions which contain others. Indeed, for each set (whether the
set of humans or the set of dogs) our language might have included a
common noun that extends to all and only the members of that set.
Hence, to ensure a persistent yield of logical truths we can take the
common noun domain of the satisfaction relation to consist of sets.
That is to say, we can allow our models, our d-sequences, to variously
interpret 'thing' as meaning human, dog, and so forth. According to this
approach, our universe set is meant to provide one such interpretation
for the common noun.

The difference between this and the earlier view is primarily one of
taste. Both accounts allow us to construe 'something was president' as
meaning some dog was president, while neither permits the construal
'everything was president'. The first account discriminates between these
by invoking a category of existential quantifiers, the second by holding
fixed the determiner 'some'. The first sees the interpretation 'everything'
as semantically ill-behaved, the second as disregarding our selection of
fixed terms.5
I said earlier that neither of these accounts is entirely satisfactory.
The problem is this. Recall that our models are constructed by first
specifying a universe set and then choosing appropriate assign
ments, objects or sets, for the names and predicates in our language.
As before, an assignment to 'Abe Lincoln' must fall within the name
domain of the satisfaction relation; an assignment to 'was president',
within the predicate domain. We have now described two ways of
viewing our selection of a universe set consistent with the aims of
interpretational semantics. On the one hand we see ourselves as assigning
to 'something' a member of the satisfaction domain set aside
for existential quantifiers (the class of all possible quantifier restriction
sets); on the other we see ourselves assigning to 'thing' a member of the
common noun domain of the satisfaction relation (the class of all
possible common noun extensions).

According to either of these accounts, each model is simply a
d-sequence, an assignment of some object within the appropriate satisfaction
domain to each variable term. But notice that we demand more
of a model than that it be an acceptable d-sequence. We allow models
that assign Ben Franklin to 'Abe Lincoln', and also models that assign
Fido to 'Abe Lincoln'. Both are considered permissible reinterpretations
of this variable term. Further, we allow models that assign the
set of humans to 'something' (or 'thing'), and models that assign the set
of dogs to 'something' (or 'thing'). Again these both generate well-behaved
reinterpretations of a variable term. But we do not permit a
model simultaneously to assign Ben Franklin to 'Abe Lincoln' and the
set of dogs to 'something'; nor do we include models that assign Fido to
'Abe Lincoln' and the set of humans to 'something'. For recall that the
object assigned to 'Abe Lincoln' must fall within the universe set of our
model, the set assigned to 'something'. Clearly there are plenty of
d-sequences in which this will not happen. But we are no longer admitting
all d-sequences into the class of models. So far this change in strategy
seems quite unwarranted.
Demanding that our interpretation of 'Abe Lincoln' not only fall
within the name domain of the satisfaction relation but also that it
somehow be constrained by the interpretation of 'something' (or
'thing') imposes a cross-term restriction on the class of models. In the
present semantics, certain d-sequences are excluded from the class of
models not because they suggest a semantically ill-behaved interpretation
of any individual expression, but because the interpretations of
different expressions fail to stand in some fixed relation to one another.
Specifically, if f is an arbitrary d-sequence for the current language, it
will be disqualified as a model should either f('Abe Lincoln') or
f('George Washington') not be members of f('something'), or should
f('was president') or f('had a beard') not be subsets of f('something').
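A minimal sketch of this admissibility test, continuing the Python rendering above; the particular d-sequence is an invented example of mine:

```python
# A d-sequence assigns a candidate interpretation to each variable term.
d_sequence = {
    "Abe Lincoln": "Fido",                       # a name is assigned an object
    "George Washington": "Washington",
    "was president": {"Washington", "Lincoln"},  # a predicate is assigned a set
    "had a beard": {"Lincoln", "Grant"},
    "something": {"Washington", "Lincoln", "Grant"},  # the quantifier's restriction set
}

def admissible_as_model(f):
    """Cross-term restriction: each name must denote a member of f('something'),
    and each predicate's extension must be a subset of f('something')."""
    universe = f["something"]
    names_ok = all(f[n] in universe for n in ("Abe Lincoln", "George Washington"))
    predicates_ok = all(f[p] <= universe for p in ("was president", "had a beard"))
    return names_ok and predicates_ok

# A perfectly good d-sequence, yet excluded from the class of models:
# 'Abe Lincoln' is assigned Fido, who is not in f('something').
print(admissible_as_model(d_sequence))  # False
```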
So far we have seen no motivation for imposing cross-term restrictions
in an interpretational semantics. At first glance we have no better
reason for requiring that f('Abe Lincoln') be a member of f('something')
than we have for demanding that f('Abe Lincoln') be a member
of f('had a beard') or that f('had a beard') be a subset of f('was president').
Any of these restrictions will naturally alter the yield of our
semantics, will affect which sentences and arguments qualify as logically
true or logically valid. As we will see, for this very reason any such
restriction violates the integrity of Tarski's account of the logical
properties.
Substitution, Persistence, and Cross-term Restrictions
Recall the motivation underlying Tarski's definitions of logical truth
and logical validity. First is the substantial point of agreement with
Bolzano's substitutional treatment of the logical properties: according
to Tarski, meeting the substitutional tests is a necessary condition for a
sentence to be logically true or an argument to be logically valid (all, of
course, with respect to the chosen F). A proper definition of satisfaction
will ensure that this necessary condition is met. The second point
motivates the move to satisfaction: passing the substitutional test is not
a sufficient condition, according to Tarski, since the logical properties
must be persistent. That is, an adequate account of logical truth and
logical validity must show that these properties persist through well-behaved
expansions of the language. Or, turning persistence around,
sentences and arguments that do not qualify as logically true or logically
valid should never come to qualify merely through a purge of
otherwise irrelevant expressions from the language.
In the last section I remarked that the introduction of cross-term
restrictions requires a significant departure from Tarski's original
account. We can now pinpoint that departure: if we do not admit all
d-sequences into the class of models, and if this restriction materially
alters the yield of our semantics, then we cannot both demand that the
logical properties be persistent and demand that all logical truths and
logically valid arguments meet the substitutional tests.
The reason is simple. A restriction of the class of models can affect
the output of our semantics only by increasing the set of logical truths
or logically valid arguments. And it can do so only if we have excluded
at least one d-sequence that provides a well-behaved interpretation on
which some logical truth is false (is not d-satisfied) or some logically
valid argument does not preserve truth (is not d-satisfaction preserv
ing). In which case we can expand the language to include expressions
whose actual interpretations are exactly those specified by the offend
ing d-sequence. But then in the expanded language the original sen
tence or argument will fail the substitutional test with exactly the same
choice of fixed terms. Either persistence has been abandoned (a sen
tence, say, is judged logically true in the original fragment but not in
the newly expanded language) or else the substitutional test has been
rendered violable (the sentence is judged logically true in the ex
panded language in spite of its false substitution instances).
Consider our current semantics. So long as we maintain the cross-term
restriction, the following argument is valid, holding fixed
'some', 'or', and 'not':

(A)   Abe Lincoln was president.
      So, something was president.

But note that it is crucial here that we exclude from our class of models
the d-sequence in which 'thing' is assigned the set of dogs and the
remaining expressions receive their intended interpretations. For otherwise
the argument would not preserve truth in every model.
Now suppose our language were expanded to contain the common
noun 'dog'. By including the set of dogs in the appropriate satisfaction
domain, we have explicitly approved this expansion as semantically
well-behaved. Yet as soon as we introduce this expression into the language,
there will be a permissible substitution instance of the same argument
(still holding fixed 'some', 'or', and 'not') which fails to be truth
preserving:

(A')  Abe Lincoln was president.
      So, some dog was president.

We must now choose either to allow (A) to remain valid in the new
language, despite non-truth-preserving substitution instances like
(A'), or to declare (A) invalid, even though it was judged valid in the
preceding fragment. One way we give up substitution; the other per
sistence.
For better or worse, an interpretational semantics that avails itself of
cross-term restrictions cannot avoid straying from Tarski's original
conception of the logical properties, assuming of course that the re
strictions actually alter the output of the semantics. In the next section
I will consider a slightly different account of the present semantics,
one that attempts to minimize, or at least disguise, the change in
underlying conception. But first let me emphasize the generality of the
problem just described.
Cross-term restrictions impose constraints on the simultaneous interpretation
of two or more expressions. There are, of course, innumerable
such constraints we might imagine imposing. For example,
we might require that f('Abe Lincoln') be a member of f('was president'),
or that f('was president') be a subset of f('had a beard'). The first
of these would restrict the simultaneous interpretation of expressions
from two different semantic categories, as we do when we constrain
the interpretation of 'something' and 'Abe Lincoln', while the second
involves our interpretation of two expressions within the same semantic
category.
Now the ultimate effect of any cross-term restriction is the same: it
excludes certain d-sequences from the class of models. And in so doing
the restriction will, except in trivial cases, expand both the set of
sentences that come out true in all models and the set of arguments
that preserve truth in all models. For this reason the use of any cross
term restriction will require that we abandon one of Tarski's de
siderata: either we allow that logical truths (or logically valid argu
ments) can occasionally be turned false (non-truth-preserving) by
substituting for variable terms, or we admit that the logical properties
are not persistent through well-behaved expansions of the language.
So far I have emphasized one common use of cross-term restric
tions: placing constraints on the simultaneous interpretation of a
quantifier and a name or predicate. But there is no significant differ
ence between this sort of restriction and the use of so-called meaning
postulates. The technique of excluding any model that falsifies a par
ticular meaning postulate is simply a roundabout way of imposing
cross-term restrictions, of limiting the class of models to d-sequences in
which the interpretations of two or more variable expressions stand in
some fixed relation to one another. Commonly these expressions will
fall within the same semantic category, but there is no reason the same
technique might not be used to legislate restrictions across categories
as well. So, for example, we might exclude any model in which 'Abe
Lincoln was president' is false, thus indirectly imposing the first restriction
mentioned two paragraphs back. Of course, not all cross-term
restrictions can be imposed indirectly, through an appeal to meaning
postulates. To use an obvious example, no set of sentences in any
first-order variant of our current language would guarantee that f('was
president') has the same cardinality as f('had a beard').
The important point, though, is not how cross-term restrictions are
imposed, whether through meaning postulates or through direct con
straints on d-sequences. Rather, it is that any such restriction, however
enforced, stands in equal violation of Tarski's original account of the
logical properties. I suspect this violation underlies many traditional
objections to the use of meaning postulates in an interpretational
semantics, in particular to the use of postulates that constrain the
simultaneous interpretation of two predicates. But to my knowledge,
no similar objections have been voiced concerning the cross-term re
strictions built into the standard semantics for quantified languages.
No doubt one reason the use of meaning postulates has been con
sidered objectionable while the standard semantics has not is simply
oversight. For one thing, with meaning postulates the violation of
Tarski's analysis is often quite noticeable, since the sorts of postulates
generally used give rise to immediate failures of the substitutional test.
For example, if our language contained the predicate 'was an elected
official' and we required that f('was president') be a subset of f('was an
elected official'), then the inference from 'George Washington was
president' to 'George Washington was an elected official' would come
out valid. But here we get a non-truth-preserving substitution instance
without expanding the language: we need only substitute 'had a beard'
for 'was an elected official'.
It should be clear, though, that this immediate violation arises only
because the relevant substitution class contains more than one mem
ber, and of course not all of these members are subject to identical
restrictions: we could hardly place the same restriction on 'had a beard'
as we have placed on 'was an elected official', for this would rule out the
intended interpretation of the language. The reason we had to expand
our language in order to find a non-truth-preserving instance of (A)
was simply that the present language offers no nontrivial substitution
instances: there is only one expression in the relevant substitution class
(the class of common nouns or the class of existential quantifiers,
depending on our parsing). Had we started out in the expanded
version and imposed our restrictions, violations of substitution like
(A') would arise immediately.
The main objection to using meaning postulates in an analysis of the
logical properties, though, does not involve such failures of substi
tution. Rather, it is the apparent circularity that their use injects into
the analysis. Suppose we considered the inference from 'George
Washington was president' to 'George Washington was an elected
official' to be intuitively valid, and wanted it to be judged so by our
interpretational semantics. Of course, there are many interpretations
of the predicates that make the first of these true and the second false,
but if we allow appeal to meaning postulates, we can easily exclude
these: we need only throw out any interpretations that falsify the
meaning postulate 'All presidents are elected officials'. Then, of
course, this postulate will come out logically true (that is, true in all
the remaining interpretations) and the corresponding inference will
come out logically valid.
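In the notation of the preceding sections, adopting the postulate amounts to imposing a cross-term restriction on admissible d-sequences; the set-theoretic rendering below is my own summary of the point:

```latex
% A d-sequence f is admitted as a model only if it verifies the postulate
% 'All presidents are elected officials', that is, only if
f(\text{`was president'}) \subseteq f(\text{`was an elected official'}) .
```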
The problem, though, is that our decision to take this sentence as a
meaning postulate is not guided by anything over and above our
intuitions about which inferences should be valid and which sentences
should be logically true. Thus, we risk reducing our general account of
logical truth to something like this: a sentence is logically true if it is
true in all interpretations that do not falsify any sentences that seem
logically true. Unless we have an independent account of when a
sentence qualifies as a meaning postulate, one that does not simply
appeal to the logical properties we are trying to account for, their use
in a general definition of those properties is simply circular.
All of this applies equally to the use of cross-term restrictions. Tarski
has given us a relatively straightforward technique for specifying the
class of d-sequences or interpretations for a given choice of variable
terms. If we must then impose cross-term restrictions to fine tune the
semantics, but can motivate the restrictions only by noting that the
unmodified semantics gets things wrong, then our analysis is in serious
trouble.
Interpreting Cross-term Restrictions
Tarski's analysis is supposed to provide the theoretical underpinning
of the interpretational approach: constructing such a semantics is
thought to be an application of Tarski's general account of the logical
properties to a particular language. In subsequent chapters I consider
general questions about the adequacy of Tarski's definitions; at
present our problem is more immediate. It is clear that there are
various interpretational semantics for our quantified language, that
is, various theories consistent with Tarski's original definitions. The
two accounts sketched earlier in the chapter are examples; others
would result from different selections of fixed terms or different
demarcations of satisfaction domains. Yet these seem consistent with
Tarski's definitions only if we admit all d-sequences into our class of
models. But when we do, various intuitively valid arguments, such as
argument (A), do not preserve truth in every model. Hence, they do
not qualify as logically valid according to an unmodified Tarskian
semantics.

Now if our only concern were to give a Tarskian semantics that
judged (A) valid, it would be quite easy to solve the problem. One way
would be by brute force: we could simply include all of the constituent
expressions in F. But this would produce an equally counterintuitive
yield of logically valid arguments; in particular, any argument with
'Something was president' as conclusion would come out valid, this
being, on that selection of F, a logical truth. Or we could try a more
delicate approach, say, holding fixed the interpretation of 'something'
(that is, of both 'some' and 'thing') but not of the names or predicates.
But this selection also runs into problems, for example, if our language
contains identity. Thus, we do not want 'There are at least two
things' (symbolically, ∃x∃y(x ≠ y)) to come out logically true, much
less 'There are at least two billion things'. Yet these will be deemed
logical truths if we allow no variation in the interpretation of the
existential quantifier.6
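To spell out why, in a sketch using the notation above (and assuming identity is also among the fixed terms): if every model assigns one and the same set U, the actual universe, to the existential quantifier, then the truth value of the sentence no longer depends on any variable term.

```latex
% With the quantifier's restriction held fixed at U in every model f:
f \ \text{satisfies} \ \exists x \exists y\,(x \neq y) \iff |U| \geq 2 .
% Since the actual universe contains at least two things, the sentence is
% true in every model, and so is declared logically true on this selection
% of fixed terms; likewise 'There are at least two billion things'.
```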
The problem is that with quantified languages, there is no single
selection of fixed terms that gives exactly the right judgments about
validity, at least with an unmodified Tarskian semantics. Yet we have
seen that there is a simple modification of the unrestricted semantics
that does seem to yield an intuitively plausible collection of logical
truths and logically valid arguments. It is the modification incorpo
rated into the standard semantics for such languages: impose cross
term restrictions on the simultaneous interpretation of various atomic
expressions. But it looks like we can employ such restrictions only by
abandoning the very conception of the logical properties that under
lies the interpretational perspective.
What then should we say about the standard semantics for quanti
fied languages? This sort of semantics is almost universally thought of
as firmly grounded in Tarski's analysis. But our recent considerations
suggest otherwise. When applied to our sample language, the result
seems acceptable as a representational semantics, but unacceptable as
an interpretational semantics. This seems surprising enough. But the
situation is even worse when we consider the usual semantics for, say,
the language of first-order number theory. For there, the semantics
clearly cannot be construed representationally, for reasons sketched in
Chapter 4, but neither does it conform to the interpretational guide
lines, thanks to the use of cross-term restrictions. Can it be that our
understanding of the standard, first-order semantics is so completely
undermined by the introduction of cross-term restrictions?
There are three options open to us. First of all, we can simply accept
the surprising conclusion of the recent analysis. We would then have to
count our sample, first-order semantics among those in which the
representational and interpretational approaches fail to intersect, and
banish the semantics for first-order number theory to a limbo some
where between the two approaches. Second, we might try to revise
Tarski's general account so that the occasional use of cross-term restrictions
is vindicated, is shown consistent with some modified interpretational
analysis of the logical properties. Finally, we might argue that
the recent considerations are somehow faulty, that in fact the standard
semantics is perfectly consistent with Tarski's original definitions.
I will not consider the first two alternatives in any detail. The first
option is, obviously, the option of last resort. If we cannot devise
interpretational semantics that produce plausible results for simple
first-order languages, then we have uncovered a serious, if not devas
tating, defect in Tarski's general account. At present this option must
remain on the sidelines; I will return to it later.
The main problem with the second option is simple: the requisite
modification of Tarski's analysis is not at all apparent. The most direct
way to incorporate cross-term restrictions is obviously circular. We can
hardly say that a sentence is logically true in L just in case it is true in all
interpretations that remain after imposing those cross-term restric
tions needed to produce the proper collection of logical truths for L.
Of course, the circularity of this definition could be disguised in
various ways. For example, we might point out that Tarski's original
account proceeds by first specifying a class of languages that are semantically
similar to L: the fixed terms are interpreted identically, the
interpretations of variable terms fall within the same satisfaction domains,
and the underlying definition of satisfaction is the same. It then
offers the following definition: a sentence S is logically true in L just in
case for any language L' that is semantically similar to L, S is true in L'.
Consequently, we might claim that cross-term restrictions serve simply
to tighten the semantic similarity relation. Here, of course, the entire
burden is transferred to our account of the new semantic similarity
relation. If all we can say about languages in which f('Abe Lincoln') is
not a member of f('something'), or in which f('was president') is not a
subset of f('was an elected official'), is that certain intuitively valid
arguments of L fail to preserve truth (and so such languages must be
semantically dissimilar to L), then our definition is just as circular as the
trivial one given above, though it might initially appear less so.
Other than incorporating cross-term restrictions in a blatantly circu
lar fashion, no obvious modifications of Tarski's account yield defini
tions consistent with, on the one hand, the use of cross-term restric
tions and, on the other, the basic intuitions underlying the
interpretational approach. To be sure, we can hardly rule out the
possibility that such a modified account can be developed. But until we
have a reasonably clear account to look at, little more can be said about
the second option.
A few words should be said about the third alternative. It might be
argued that although cross-term restrictions are indeed inconsistent
with Tarski's original definitions, the need for them arises only when
we construe the universe or domain of a model as interpreting an
individual expression in the language (whether the quantifier 'something'
or the common noun 'thing'). For then the requirement that
f('Abe Lincoln') be chosen from the universe set appears as a direct
restriction on the joint interpretations of 'something' and 'Abe
Lincoln', and hence no more acceptable than, say, invoking the meaning
postulate 'All presidents are elected officials', or demanding that
f('was president') be a subset of f('was an elected official').
But there are other ways of describing the standard semantics. In
particular, when we choose the universe of a model, we commonly see
ourselves as interpreting an implicit parameter of the language, the
language's domain of discourse. Now, we might claim that such implicit
parameters place constraints on the interpretation of several expres
sions in the language, but that they do not directly provide the inter
pretation of any. In this way, it might be argued, the standard seman
tics can avoid the illicit appeal to cross-term restrictions. After all,
demanding that our interpretation of 'Abe Lincoln' fall within the set
chosen as the domain of discourse does not restrict the joint interpre
tation of two expressions, but rather the joint interpretation of an indi
vidual expression and a parameter of some entirely different sort.
But what exactly have we bought by adopting this alternative description?
First of all, it is clear that our semantics still treats 'something'
(or 'thing') as a variable term: its contribution to the truth values
of sentences still differs as radically from model to model as that of
'Abe Lincoln' or 'was president'. Consequently, if we were to list all the
various constraints imposed by our new implicit parameter, we
would have to include among them the following three: 'Abe Lincoln'
can name any individual we please, so long as that individual is a
member of the domain of discourse; 'was president' can have any extension
we please, so long as that set is a subset of the domain of discourse;
and finally 'something' can be restricted however we please, so long as
the restriction set is identical to the domain of discourse.
None of those constraints directly involves the interpretation of
more than one expression. But it should be clear that by arranging our
constraints like spokes around an implicit parameter, we do not avoid
cross-term restrictions but simply honor them with a name. Obviously,
any cross-term restriction can be imposed in a similarly roundabout
fashion. For example, rather than requiring that f('was president') be a
subset of f('was an elected official'), we might instead posit a genus
domain, an implicit parameter that constrains both the interpretation
of 'was president' and the interpretation of 'was an elected official', but
that is not thought of as directly interpreting either. We could then
allow models to assign 'was president' any extension at all, so long as
that set is a subset of the genus domain, and similarly permit 'was an
elected official' to have any extension we please, so long as that set is
identical to this same parameter.
Appealing to implicit parameters may provide an alternative tech-
nique for imposing cross-term restrictions, a technique that differs
somewhat from either the direct imposition of those restrictions or the
less direct appeal to meaning postulates. But it does not address the
fundamental conflict between the use of such restrictions and Tarski's
general account of the logical properties. The notion of a domain of
discourse does not solve the problem, but simply disguises it.
For now, we seem to be left with the first option. The standard
model theory for first-order languages seems to violate the very guide
lines that underlie the interpretational approach to semantics. Of
course at present, this is only a tentative and provisional conclusion. It
is always possible that Tarski's account of the logical properties can be
suitably revised so that the standard cross-term restrictions used in
first-order model theory turn out to be consistent with it. I will not be
concerned with this possibility any further, since the real problem with
Tarski's analysis applies equally whether or not we employ cross-term restric
tions. But this will be the topic of Chapters 7 through 9.

Recapitulation
The superficial similarities between representational and interpreta
tional semantics are obvious but misleading. In fact, these two views
of model-theoretic semantics are completely different approaches to
charting the semantic properties of a language. This difference comes
out most clearly in the radically different standards that must be used
in judging the adequacy of the two main features of the theory: the
class of models and the definition of truth in a model. The class of
models is adequate for a representational semantics if it contains a
representative for each genuinely possible configuration of the em
pirical or nonlinguistic world. To make this judgment we must
naturally presuppose some technique of representation (we must understand
what our models mean) as well as various intuitions about
what is and is not a genuine possibility. With interpretational semantics,
the class of models (d-sequences) is determined by the satisfaction
domains assigned to each category of expression; more accurately, by
the domains assigned to those categories containing members not
included in F, the set of fixed terms. Thus, the class of models in an
interpretational semantics is to be judged according to the criteria for
delineating satisfaction domains. Such a domain must contain an object
for each existing member of the given semantic category, as well as
objects for other potential members of the same category: intuitively,
expressions that would contribute similarly to the truth values of sentences
in which they occurred.
Standards for judging the relation of truth in a model differ accordingly.
With interpretational semantics we must ask whether sentences
declared true in a particular model would indeed have been true
under the suggested interpretation of the variable expressions. This is
simply the intuitive content of the satisfaction schemata of Chapter 3,
the relation of truth in a model being our trivial modification of the
earlier satisfaction-by-a-sequence. Representational semantics, on the
other hand, requires that a sentence be declared true in a given model
if and only if it would have been true had the model been accurate,
that is, had the world actually been as depicted by that model.

Once we take these differences seriously, once we realize that the
criteria that apply to a theory of 'x is true in W' are entirely different
from those that apply to a Tarskian theory of 'x is true in L', then it
should seem a remarkable fact when one and the same model-theoretic
semantics admits of both readings. But such points of intersection
do occasionally occur. Thus, with the simple semantic theories
devised for our two nonquantified languages, the same class of models
and the same definition of truth in a model met the separate demands
imposed by the two approaches. With these semantic theories there is a
straightforward sense in which the importance of the perspective we
adopt is minimized. Here, as with the theory of truth tables discussed
in Chapter 2, the perspective makes little difference simply because
both are readily available.
But clearly such points of intersection are the exception, not the
rule. This was already obvious by the end of the last chapter, when we
considered various cases in which the two approaches failed to inter
sect. Yet there is a tendency to overlook this divergence, to assume that
Tarski's analysis of the logical properties is correct because it guarantees,
say, that logical truths will be true in all possible worlds, that they
will be necessarily or analytically true. This might be a defensible
position if Tarski had indeed given us a reduction of possible worlds
to models, if his analysis required logical truths to turn up true in all
the models appearing in an adequate representational semantics. But
here Tarski's analysis is clear and unequivocal; it is the model-theoretic
translation that engenders the confusion.
In Chapter 2, I remarked on the obvious interest we may have in an
account of 'x is true in W' for a fixed language L. The importance of
such a theory comes not from a general account of logical truth or
logical consequence but from the illumination it may shed on the
semantic rules of the language. At first glance, there is considerably
less interest attaching to an account of the complementary 'x is true in
L' for a fixed world W. One simple reason for this is that with a
sufficiently broad range of languages to choose from, all sentences are
precisely on a par. That is, for any true sentence of (say) English, we
can devise some languages in which it is false; similarly, any false
sentence can always find a home in which it happens to be true. This in
spite of any logical or semantic properties the sentence may originally
have had. Sentences, at least in the sense in which these are things that
can wander from language to language, do not carry with them the
semantic characteristics necessary to ensure any truth value.
Now, Tarski's account changes a superficially uninteresting study
into a potentially important investigation. If Tarski's analysis is correct,
then we have a standard technique for narrowing in on a limited
range of languages against which the relation 'x is true in L' gains
considerable significance. But here again we must not get the significance
turned around. Our goal in applying Tarski's account is not
simply to specify, by whatever means available, some range of languages
whose shared truths happen to be the logical truths of the
original language. Of course, there will always be such a collection of
languages: at worst, we could treat all expressions as variable and take
all logical truths as meaning postulates. But this is simply to give up
Tarski's general account of the logical properties and, so it would
seem, to undermine any interest that may originally have motivated an
account of 'x is true in L'. This is the sacrifice we risk when we resort to
cross-term restrictions.

6
Modality and Consequence

So far, what we have by way of extensional evidence for Tarski's
analysis is a rather mixed and confusing bag. It is clear that with the
extremely simple languages of Chapters 2 and 3, the definitions produce
an intuitively plausible yield of logical truths when we hold fixed
the right expressions, specifically, when we hold fixed 'or' and 'not', the
two terms traditionally considered logical constants. But when we
make other choices for F, when we treat other expressions as logical
constants, the account produces strikingly counterintuitive results.
Tarski himself was the first to note this fact.
The situation is considerably more perplexing with quantified lan
guages, such as the language of Chapter 5. Here, no single selection of
fixed terms produces a uniformly plausible distribution of the logical
properties. With these languages, the only way to get a reasonable
extension is by using cross-term restrictions. Yet these restrictions
seem inconsistent with the analysis itself.
To complicate matters even further, it is clear that no matter what
language we may consider, any given valid argument will be declared
valid on some selection of fixed terms. For at the very least, we can
include in F every atomic expression appearing in the particular argument.
Likewise any given invalid argument will be declared such on
some choice of fixed terms; excluding all expressions from F will
guarantee this. But we have no assurance that there will be any one
selection of fixed or logical terms that produces the right assessment
for every argument expressible in the language. And that, presumably,
is what we are after.
Now, this extensional evidence is all rather hard to assess, and it will
only get more complicated as we move to increasingly complex lan-
guages. But one thing is clear: we do not, at this point, have enough
such evidence to conclude either that the account is right or that it is
wrong. The cases where it clearly works (simple truth-functional languages
with connectives held fixed) hardly inspire confidence that
the account will work for arbitrary languages. On the other hand,
there may be very good explanations for those cases where it seems to
fail. For example, we may be able to explain the haphazard behavior of
the definitions when we vary our selection of fixed terms, say, by
finding some characteristic that makes certain expressions suitable for
inclusion in F and others not. And in the end, we may even uncover
some insight that shows certain cross-term restrictions to be perfectly
consistent with the account, and so find these problems to be surmoun
table as well.
In any event, let us set these questions aside for the moment, and
consider Tarski's own justification of his account. Tarski does not base
his justification on extensional evidence; as I mentioned, he discusses
no specific applications of the definitions. Rather, he argues that the
analysis successfully captures the essentials of the ordinary concept
of consequence. Such an intuitive or conceptual justification is obvi
ously quite important, since extensional evidence will bear at most on a
single language, while the account is meant to work with any language
for which satisfaction can be defined. Since we can hardly survey all
possible languages to which the definitions may be applied, we clearly
need a different kind of evidence, evidence of a more conceptual sort,
to show that the definitions get the right extension in any such lan
guage. Tarski's argument is meant to provide such evidence.
Necessity
The most important feature of logical consequence, as we ordinarily
understand it, is a modal relation that holds between the implying sentences
and the sentence implied. The premises of a logically valid argu
ment cannot be true if the conclusion is false; such conclusions are said
to follow necessarily from their premises.
That this is the single most prominent feature of the consequence
relation, or at any rate of our ordinary understanding of that relation,
is clear from even the most cursory survey of texts on the subject. We
find modal characterizations of logical consequence in the very earliest
works on logic:
A syllogism is discourse in which, certain things being stated, something
other than what is stated follows of necessity from their being so. I mean
by the last phrase that they produce the consequence, and by this, that no
further term is required from without in order to make the consequence
necessary. (Aristotle, 24a18-22)

in modern textbooks geared for the basic, nontechnical course:


A deductive argument is valid when . . . it is absolutely impossible for the
premises to be true unless the conclusion is true also. (Copi, 1972, p. 23)

in more advanced texts aimed at intermediate students:


An argument is sound if and only if it is not possible for its premises to be
true and its conclusion false. (Mates, 1965, p. 3)

and in texts directed toward the most proficient and mathematically
inclined:
What makes [the conclusion] a logical consequence of [the premises] is
the fact that if [the premises] are true then [the conclusion] must be true
as well. (Bell and Machover, 1977, p. 5)

This modal characteristic, however dimly perceived and poorly understood,
is clearly central to our intuitive understanding of the consequence
relation. It is, at the very least, a necessary condition for the
relation to hold: if it is possible for the members of K to be true while S
is false, then S cannot be a logical consequence of K. Whether it is also a
sufficient condition, as is suggested in the above quotations, is harder
to say. Thus, most logicians would agree that the continuum hypothesis,
even if true, is not a consequence of the pair-set axiom.1 Yet if the
former is true, it is (presumably) necessarily so, and hence it would be
impossible for the latter to be true and the former false. Observations
of this sort suggest that the modality at issue is really of a more
epistemic sort. But in any event, some such modality, whether alethic
or epistemic, is clearly crucial to the relation of logical consequence.
In this section I would like to emphasize two points before con
sidering Tarski's justification of his account. The first point has, I
hope, already been made. The point is that an account of consequence
will indeed capture an essential feature of our pretheoretic notion if it
offers some guarantee that arguments declared valid display the dis
tinctively modal feature invariably attributed to such arguments. We
need not be too concerned about the exact nature of this modality; for
present purposes, we can leave such issues unresolved. What is impor
tant is just this simple observation: For an argument to be genuinely
valid, it does not suffice for it to have a true conclusion or a false
premise, for it simply to preserve truth. The truth of the premises
must somehow guarantee the truth of the conclusion. It is this guaran
tee of truth preservation that gives rise to the familiar modal descrip
tions of the consequence relation. The exact source of the perceived
guarantee, whether it be the meanings of the expressions contained in
the argument, brute logical intuition, or something else entirely, need
not concern us at the moment.
The second point is that Tarski himself, not surprisingly, recognized
this guarantee to be the central feature of the ordinary concept of
consequence, the concept his analysis was meant to capture. To appre
ciate this, we need only consider the initial remarks in which Tarski
motivates his analysis. Most of this discussion is aimed at showing that
no purely syntactic or formal definition captures our common un
derstanding of consequence, or is even extensionally equivalent to that
notion. This discussion is worth recounting.
Tarski's first piece of evidence for the inadequacy of syntactic definitions
is the existence of theories that are ω-incomplete. A theory is
ω-incomplete if it displays the following peculiarity. Employing only
the normal rules of inference, we can derive the following sentences
from the axioms of the theory:

    A0.  0 possesses the property P.
    A1.  1 possesses the property P.

And, in general, we can deduce all sentences of the form

    An.  n possesses the property P,

where 'n' is any symbol that denotes a natural number. However, an
ω-incomplete theory does not allow us to derive, according to the
standard rules of inference, the universal claim

    A.  Every natural number possesses the property P.

The phenomenon of ω-incompleteness shows that the universal sentence
A is not deducible, using the standard syntactic rules, from the
sentences A0, A1, . . . , An, . . . After pointing this out, Tarski concludes:
This fact seems to me to speak for itself: it shows that the formalized
concept of consequence, as it is generally used by mathematical logicians,
by no means coincides with the ordinary concept. For intuitively it seems
certain that the universal sentence A follows in the ordinary sense from
the totality of particular sentences A0, A1, . . . , An, . . . : provided all
these sentences are true, the sentence A must also be true.2

Tarski's gloss here of the ordinary concept of consequence is the
familiar one: A follows in the ordinary sense from A0, A1, . . . because,
as he puts it, provided the latter are all true, the former must be
true as well. According to Tarski, this shows that standard syntactic
characterizations of consequence are not even extensionally adequate.
For there are arguments that are valid in this ordinary sense but
whose conclusions cannot be deduced from their premises.
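The phenomenon can be put compactly as follows; the notation is my own (T for the ω-incomplete theory, P for its predicate, and n-bar for the numeral denoting n):

```latex
% Each particular instance is derivable from the theory, but the
% universal claim is not:
T \vdash P(\bar{n}) \quad \text{for every natural number } n,
\qquad
T \nvdash \forall x\, P(x).
% Yet, in the ordinary sense, \forall x\,P(x) follows from the totality
% of the particular sentences P(\bar{0}), P(\bar{1}), P(\bar{2}), \dots
```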
Tarski admits that a formal characterization of consequence could
be supplemented with an infinitary rule of inference that would avoid
this particular failing, the so-called ω-rule. But such an addition, due to
its infinitary nature, would involve a significant departure from stan
dard systems of deduction. A more reasonable alternative would be to
supplement the system with a new rule that allows the derivation of A
from the (single) claim that all the An are provable using the remaining
rules, a claim that can easily be encoded into sufficiently powerful
languages. This new rule, though more complex than standard rules,
can still be considered purely syntactic or structural. Furthermore,
the resulting set of rules generates new consequences not provable
from the original set, and so is a genuine step in the right direction.
However, such supplementation is ultimately of no avail. The futility
here, Tarski claims, follows from Gödel's incompleteness results:
In every deductive theory (apart from certain theories of a particularly
elementary nature), however much we supplement the standard rules of
inference by new purely structural rules, it is possible to construct sentences
which follow, in the ordinary sense, from the theorems of this
theory, but which nevertheless cannot be proved in this theory on the
basis of the accepted rules of inference.3

Tarski's point here is this. Even if we add additional rules of the
above sort to our formal system of deduction, there will remain theories
from which we cannot deduce all the intuitive consequences.
Specifically, we will not be able to derive the Gödel sentence, G, of the
theory, even though we can easily see that, as Tarski puts it, provided
all the sentences of the theory are true, the sentence G must be true as
well. Again G is, at least in this ordinary sense, a consequence of the
sentences contained in our theory. Yet G cannot be derived according
to the structurally specified rules that were meant, by hypothesis, to be
extensionally equivalent to our ordinary concept of consequence.
Here again there are necessary consequences that our syntactic charac
terization fails to capture.
Tarski takes these considerations to show that no purely structural
characterization can, in principle, even agree in extent with our ordi
nary concept of consequence. Thus, he concludes:
In order to obtain the proper concept of consequence, one that is close in
essentials to the ordinary concept, we must resort to quite different
methods and apply quite different conceptual apparatus in defining it.4
With this, Tarski turns to the stated task of his article: giving a precise
definition which, unlike syntactic accounts, captures the essential
features of our ordinary concept of consequence.
Now, Tarski's argument that any syntactic characterization of conse
quence will be extensionally inadequate may strike us as problematic in
several respects. Perhaps the most obvious is that in neither case is the
sentence cited by Tarski (that is, A or G) a standard model-theoretic
consequence of the theory from which it allegedly follows. Indeed,
both of Tarski's examples involve the consequence relation for first-order
languages, where the model-theoretically defined relation co
incides with the syntactically defined relation. How can a semantic
account be judged extensionally superior to the usual syntactic charac
terization if the two are, in fact, extensionally equivalent? It would
seem that the complaints Tarski has about the extensional adequacy of
the syntactic characterization will ricochet off the completeness theo
rem and strike his own account with equal force.
In fact this is not true, for reasons I have already mentioned. In any
of these cases, the intuitive consequence will emerge as a Tarskian
consequence if we include a sufficient number of expressions in the set
of fixed terms. So, for example, the ω-rule comes out valid, model-theoretically,
if we include in F the expression 'every natural number'
as well as the collection of numerals '0', '1', '2', and so forth. I assume
this is why Tarski does not consider his account subject to precisely the
same criticism he directs at syntactic definitions.
What is important for our purposes, though, is not the specific
examples Tarski employs in his argument but his emphasis on the
intuitive consequence relation, the relation he characterizes using the
familiar modal terms. When one sentence is, in the ordinary sense, a
logical consequence of others, then it must be true provided the others
are true as well. That is, the truth of the premises must guarantee the
truth of the conclusion. However vague and poorly understood this
guarantee may be, it is clearly an essential feature, if not the essential
feature, of our ordinary concept of consequence.
Tarski's Fallacy
Any intuitively valid argument (K, S) will come out logically valid, according
to Tarski's account, on some choice of fixed terms. The argument
would not be valid if all the members of K were true while S was
false, and hence the argument will at least satisfy Tarski's definition
when all of its component expressions are included in F. This observation
gives us the following implication:
    If S is a consequence (in the ordinary sense) of K, then S is a
    Tarskian consequence of K on some selection of F.

Now, if the converse of this implication could be demonstrated, we
would have a rather impressive result:

(L)   S is a consequence (in the ordinary sense) of K if and only if S is a
      Tarskian consequence of K on some selection of F.

Needless to say, if equivalence (L) could somehow be shown, then
Tarski's definition of consequence could hardly be faulted. But in
order to show that the equivalence holds, we must show that if S is a
Tarskian consequence of K, then it is a consequence in the ordinary
sense. That is, we must show that if all the members of K are true, S
must be true as well.

After proposing his account, Tarski offers the following justification
of his definition. It contains a simple argument that appears, at first
glance, to give us exactly the implication we need:
It seems to me that everyone who understands the content of [my]
definition must admit that it agrees quite well with ordinary usage. This
becomes still clearer from its various consequences. In particular, it can be
proved, on the basis of this definition, that every consequence of true sentences must
be true, and also that the consequence relation . . . is completely independent
of the sense of the extralogical constants which occur in these
sentences.5

The proof that Tarski is referring to is quite straightforward. Suppose
that (K', S') is the argument form corresponding to the argument
(K, S), and that (K', S') is satisfaction preserving on all sequences. As
sume further that all the sentences in K are actually true while S is
actually false. Our goal will be to derive a contradiction from this
assumption.
The contradiction is virtually immediate. For we know that there
must be at least one sequence that assigns to each variable occurring in
(K', S') that member of the appropriate satisfaction domain which
corresponds to the expression the variable replaced. Since we have
assumed that the members of K are true but that S is false, it follows
that on this assignment (K', S') cannot be satisfaction preserving. But
this contradicts the hypothesis that the argument form was satisfaction
preserving on all sequences, and so we conclude our proof.
We have shown that if S is a Tarskian consequence of K, and if all of
the members of K are true, then S must be true as well. Furthermore,
we can see from our proof that this holds quite independently of our
selection of logical constants. Thus, it would seem, we have exactly the
result we need. If S is a Tarskian consequence of K (on any selection of
F), then S is a consequence of K in the ordinary sense. This gives us
biconditional (L).
But (L) is so obviously false that something has clearly gone wrong.
We know that any truth-preserving argument is logically valid, according
to Tarski's definition, on some selection of F. Thus, 'Lincoln had a
beard' is a Tarskian consequence of 'Washington was president' when
all the component expressions are held fixed. For then the corresponding
argument form is just the argument itself, and this argument
is satisfaction preserving on all sequences simply because the conclusion
is itself a true sentence, hence satisfied by any sequence. But it is
clear that the former sentence is not a genuine consequence of the
latter. We would hardly say that, provided 'Washington was president'
is true, 'Lincoln had a beard' must be true as well.
It is perfectly clear that with many selections of F, there are Tarskian
consequences that are not genuine consequences, and hence that (L) is
simply false. Yet our proof that every Tarskian consequence of true
sentences must be true is perfectly correct. The problem is not with
our proof, but with thinking that this proof shows that any modal
relation holds between the premises and conclusion of the argument
(K, S). To show that all Tarskian consequences are consequences in the
ordinary sense, we would need to prove a theorem with an embedded
modality. Specifically, we would have to show that, for any K and S, if

    (1)  S is a Tarskian consequence of K (for some F)

then the following are jointly incompatible:

    (2)  All the members of K are true
    (3)  S is false.

But of course all we can show is that for any K and S, the following three
conditions are jointly incompatible:
    (1)  S is a Tarskian consequence of K (for some F)
    (2)  All the members of K are true
    (3)  S is false.

Now, it should be clear from a purely abstract point of view that the
joint incompatibility of (1), (2), and (3), plus the truth of (1), does not
entail the joint incompatibility of (2) and (3). Here we need only note
the fallaciousness of any inference from

    Necessarily (if P and Q then not R)

to

    If P then necessarily (if Q then not R).
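The scope distinction at work here can be displayed in modal notation; the rendering is my own, not Tarski's:

```latex
% The necessity governs the whole conditional; it may not be moved inside:
\Box\bigl((P \land Q) \to \neg R\bigr)
  \;\not\Rightarrow\;
  \bigl(P \to \Box(Q \to \neg R)\bigr).
% Reading P as (1), Q as (2), and R as (3) above, the left side is what the
% proof establishes; the right side is what the justification would need.
```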

More concretely, we can note that the argument with 'Lincoln had a
beard' as conclusion and 'Washington was president' as sole premise
could not come out valid on any selection of F if it did not in fact
preserve truth, that is, have either a false premise or true conclusion.
But the mere fact that this argument does come out valid on some
selection of F certainly does not imply that it is a necessarily truth-preserving
argument, that it is valid in the ordinary sense.
The fallacy here may be emphasized by sketching the parallel consideration
for Tarski's definition of logical truth, for there the problem
is even more transparent. With logical truth there is also an important
modal feature of our ordinary concept. A logical truth must be true;
that is, it is necessarily true. Thus, it would be a strong point in favor of a
definition of logical truth if we could show that sentences satisfying the
definition are necessarily true, that they have the intuitive modal
property. And indeed we can prove that if a sentence satisfies Tarski's
definition of logical truth then it must be true. After all, if it were not true, it
would not satisfy the definition. Unfortunately, this does not guarantee
that the sentence has any peculiar modal properties, any more than
the trivial observation 'if a sentence is true then it must be true' shows
every truth to be a necessary truth.
Obviously, the proof in question does not show that every Tarskian
consequence is a consequence in the ordinary sense. It is only
through an illicit shift in the position of the modality that we can
imagine ourselves demonstrating of any Tarskian consequence that it is
entailed by the corresponding set of sentences. This fallacy becomes
quite apparent when we consider the arguments that come out valid
when we include all expressions in ℱ. But it is crucial to recognize that
the inference remains fallacious, and for exactly the same reasons,
regardless of our choice of fixed terms. The fallacy may be easier to
spot when we include names and predicates in ℱ, but the inference is
no less fallacious when we only hold fixed (say) the truth-functional
connectives. The argument does not depend on ℱ, and it does not get
better or worse according to what we suppose the members of ℱ to be.
A parallel justification for Tarski's account can be given, but is just as
fallacious, when we replace the alethic reading of "must" with a purely
epistemic reading. Although the most common pretheoretic descriptions
of logical consequence involve necessity, we find many in which
the "must" takes on a more epistemic cast. For example, the following,
from Quine's introductory text, is a familiar description:
[Among the] relations of statements to statements, one of conspicuous
importance is the relation of logical implication: the relation of any
statement to any that follows logically from it. If one statement is to be
held as true, each statement implied by it must also be held as true.
(Quine, 1972, p. 4)

If you accept the premises of a valid argument, you must also accept
the conclusion (to which we sometimes add 'on pain of irrationality').
This epistemic characteristic is sometimes thought to be more impor
tant than, and perhaps to underlie, our intuitions about the alethic
modality involved in valid arguments. For example, some would claim,
not implausibly, that it is only due to the a priori relation between the
premises and conclusion of a valid argument that we judge the latter to
follow necessarily from the former, and hence that we judge the
argument valid. On this view, a necessary consequence that could not
be recognized as such a priori would never qualify as a logical conse
quence. And this certainly seems right.
Can we show that this epistemic feature follows from the definition?
Again, the best we can offer is a version of Tarski's fallacy. We can
note, quite accurately, that it would be irrational to believe that an
argument satisfies Tarski's definition (for any ℱ) but has true premises
and a false conclusion. Or we can point out that if you accept the
premises of an argument, and also accept that it passes Tarski's test for
validity, then you must accept the conclusion. But neither of these
shows that any peculiar epistemic relation holds between the premises
and conclusion of these arguments. These observations show only that
it is a genuine consequence of Tarski's definition that the argument in
question either has a false premise or a true conclusion, that it indeed
preserves truth. But they do not show that it would be irrational to
accept the premises and deny the conclusion; they show only that if
you did, you could no longer hold that the argument satisfied the
definition.
The last point brings out the real weakness of this justification,
regardless of what the sought-after modality may be. Tarski's account
demands, first and foremost, that any argument declared valid
preserve truth; those that do not, do not pass muster. The account
shares this characteristic with Bolzano's, and it is perhaps easier to see
with the simpler, substitutional definition. With Bolzano's account, this
feature is incorporated directly into the definition, by virtue of the fact
that any argument is, for any ℱ, a permissible substitution instance of
itself. So Bolzano's demand that all substitution instances of (K, S)
preserve truth can be divided into two requirements:
(a) that (K, S) have either a false premise or true conclusion, and
(b) that all members of some (possibly empty) collection of related arguments preserve truth as well.

Tarski's move from substitution to satisfaction will at most increase the
stringency of clause (b), allowing consideration of arguments drawn
from expansions of the original language.

Now, Tarski's proof that every consequence of true sentences must
be true depends only on clause (a) of his account. This is why the
choice of ℱ is entirely irrelevant; the set appearing in (b) may as well,
and often will, be empty. And it is also why a parallel consideration can
be offered as equal justification for any definition of consequence that
incorporates requirement (a).6
The fact that Tarski's proof depends only on clause (a) shows how
little bearing this initially impressive consideration really has on the
adequacy of the analysis. Indeed, we could offer precisely the same
justification for the following definition of logical consequence: S is a
logical consequence of K just in case either S is true or some member of
K is false. Now, this analysis is certainly far off track. However, we
might note, first of all, that every intuitive consequence will obviously
be a logical consequence according to this trivial definition, thus
giving us at least one direction of biconditional (L). And for the converse
implication we can point out that any consequence of true sentences
indeed must be true: after all, if a sentence is not true, it will only
turn up a logical consequence of sets containing at least one false
sentence. But of course this "must" has nothing to do with any modal
or epistemic property of the genuine consequence relation, or with any
guarantee of truth preservation. Yet Tarski can, on this particular
score, give us nothing more to commend his own account. This new,
obviously incorrect account has as much claim to biconditional (L) as
Tarski's, and for precisely the same reasons.
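Put schematically (a sketch of the point, using '⊨₀' as an ad hoc label for the trivial definition just given), what that definition underwrites is only

Necessarily (if K ⊨₀ S and every member of K is true, then S is true),

and not

if K ⊨₀ S, then necessarily (if every member of K is true, then S is true).

The same misplaced modality that constitutes Tarski's fallacy would thus serve equally well to "justify" this patently inadequate account.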
Tarski's Reasoning
Was Tarski guilty of the modal fallacy I have described? Did he really
believe that his proof that every consequence of true sentences must
be true assures us that the right sort of relation will hold between the
premises and conclusion of arguments satisfying the definition? Or
was he simply making a very weak claim for his definition, namely,
that it will not designate as logically valid any argument with true
premises and a false conclusion? This latter claim follows trivially from
the definition, but hardly seems much evidence that it agrees quite
well with ordinary usage.
Needless to say, the crucial sentence is ambiguous. This is not sur
prising, since the key to the fallacy itself is the ambiguous scope of the
modality in question. But this same ambiguity leaves open the possibil
ity that Tarski himself was not misled by the fallacy, though perhaps
guilty of a misleading turn of phrase.
A careful reading of the article seems to suggest otherwise. It is
significant that we find no comparable ambiguity in Tarski's initial
observations about the ordinary concept of consequence. Recall, for
example, how he characterizes this concept in his discussion of ω-incompleteness:
[The existence of ω-incomplete theories] shows that the formalized concept
of consequence . . . by no means coincides with the ordinary concept. For
intuitively it seems certain that the universal sentence A follows in the
ordinary sense from the totality of particular sentences A₀, A₁, . . . ,
Aₙ, . . . : provided all these sentences are true, the sentence A must also be true.7

There is only one way to construe the modality that Tarski here
identifies with the ordinary concept of consequence. Obviously, he is
not simply noting that the argument in question happens to have a
false premise or a true conclusion; since no specific sentences have
been given, such a construal would not even make sense. The observa
tion clearly concerns the modal or epistemic relation between these
sentences, the fact that arguments of this form are guaranteed to
preserve truth. Here, the scope of the modality is clear and un
equivocal.
Now consider again Tarski's justification of his account, this time
paying particular attention to his exact choice of terms:
It seems to me that everyone who understands the content of the above
definition must admit that it agrees quite well with ordinary usage. This
becomes still clearer from its various consequences. In particular, it can
be proved, on the basis of this definition, that every consequence of true
sentences must be true.8

Set next to his earlier remarks, it is hard not to see the fallacy at work in
this justification. It is hard to overlook Tarski's use of precisely the
same expressions to describe, in the first passage, the modality central
to consequence in the ordinary sense and, in the second, the alleged
agreement of his definition with ordinary usage. But as we have
seen, thinking that any such modality is a consequence of the defini
tion is a simple confusion.
Tarski clearly saw the importance of the modal features of our
ordinary concept of consequence. Indeed, his article is peppered with
modal and quasi-modal descriptions of this relation. Some of these
display the same scope ambiguity as his justification, while others are
not ambiguous at all. For example, in his most extensive discussion of
the intuitive notion, Tarski makes the following observations. First, he
notes that if (K, S) is a logically valid argument, then "it can never
happen that the class K consists entirely of true sentences while at the
same time the sentence S is false."9 He goes on to note that the relation
that holds between K and S "cannot be influenced in any way by
empirical knowledge."10 Finally, he says that if any other argument
(K', S') shares this argument's form, then "the sentence S' must be true
provided only that all the sentences of the class K' are true."11 It is
difficult, if not impossible, to interpret these descriptions as implying
only that valid arguments have a false premise or a true conclusion.
Tarski seems, by all appearances, to have fallen prey to the fallacy.
The main reason for doubting this is that he was well aware that his
notion of logical consequence reduces to material consequence, that is,
mere truth preservation, when all expressions are included in ℱ.
Indeed, at the very end of his article, he explicitly points out this fact:
In the extreme case we could regard all terms of the language as logical.
The concept of formal consequence would then coincide with material
consequence. The sentence S would in this case follow from the class K of
sentences if either S were true or at least one sentence of the class K were
false. (1956, p. 419)
Certainly, when we have this extreme case in mind, the modal fallacy
is much too apparent to overlook. This would suggest that Tarski was
not fooled by the fallacy, and hence that his earlier, intuitive remarks
are rather more disingenuous than confused.
Did Tarski really think that the right modality followed from his
definition? Or did he see that it did not, but still try to convince his
readers that it did? I think the most likely explanation is much less
dramatic than either of these. Although Tarski recognized the impor
tance of some intuitive modality to the relation of logical consequence,
he also recognized that this modality is obscure and poorly under
stood. Given this fact, he may well have thought that the modality
appearing in his justification, though perhaps not quite right, was close
enough to count as capturing this essential but ill-understood feature
of the consequence relation. Better a misplaced modality than no
modality at all.
What is important for our purposes is that we recognize that no real
modality, obscure or otherwise, follows from Tarski's definition. Thus,
suppose that some argument (K, S) satisfies the definition. Can we
show, for example, that it can never happen that the members of K
will all be true while S is false, that is, that truth preservation is in any
way an enduring feature of this argument? The answer is no: for all we
know, the same argument may have true premises and a false conclusion
tomorrow. Of course, should this come to pass, Tarski's definition
But this is a guarantee of entirely the wrong sort, one shared by the
trivial definition suggested at the end of the last section.

A logically valid argument must, at the very least, be capable of
justifying its conclusion. It must be possible to come to know that the
conclusion is true on the basis of knowledge that the argument is valid
and that its premises are true. This is a feature of logically valid
arguments that even those most skeptical of modal notions recognize
as essential. Now, if we equate logical validity with mere truth preser
vation, as suggested in the last section, we obviously miss this essential
characteristic of validity. For in general, it will be impossible to know
both that an argument is valid (in this sense) and that its premises are
true, without antecedently knowing that the conclusion is true. This is
why such arguments as
(B)  Washington was president
     So, Lincoln had a beard

are incapable of justifying their conclusions. For although this argument
preserves truth, there is no guarantee of this fact independent of
the specific truth values of its constituent sentences. Consequently, any
doubts we may have about the truth of the conclusion translate directly
into doubts about the arguments validity.
Tarski's account equates validity with the joint truth preservation of
a collection of arguments. In the extreme case, the collection will
contain only the argument itself, and then the account reduces to the
trivial one above. In other cases, though, the collection will contain
other arguments. But whichever is the case, Tarskis equation still
misses the essential feature of validity. For in general, it will be impossi
ble to know whether an argument is a member of such a collection of
truth-preserving arguments, hence whether it is valid, without ante
cedently knowing the specific truth values of its constituent sentences.
If we know that the premises of an argument are true, then any doubts
about the truth of its conclusion will translate directly into doubts
about whether this argument, and any others in the associated collec
tion, are valid. Simply moving from a collection of one to a collection
of many does not change this in any significant way. We still have no
assurance that arguments satisfying the definition will be capable of
justifying their conclusions, and hence no assurance that they will be
genuinely valid. Tarskis fallacy obscures this omission, by noting that
arguments declared valid are indeed guaranteed to preserve truth.
But this is not the required guarantee: it is backed up only by the
definition of validity, not by any characteristic of the argument itself,
whether modal, epistemic, or semantic. Consequently, it leaves such
arguments impotent as a means of justifying their conclusions.
Tarski's brief justification of his account, when properly understood,
adds very little to the rather ambiguous, extensional evidence
surveyed at the beginning of this chapter. What we can say with
certainty is simply this. First of all, the definition will never say that an
argument with true premises and a false conclusion is logically valid.
Second, if an argument is declared valid by the definition, then so too
will be any other argument that results by replacing expressions that
are not members of ℱ. In other words, the assessments made by the
account are, as Tarski puts it, "independent of the sense" of those
expressions not held fixed. We have no assurance, however, that argu
ments declared valid carry with them any independent guarantee of
truth preservation, whether modal or epistemic or semantic, nor that
validity is an enduring characteristic of these arguments. To think
otherwise is to succumb to one form or another of Tarski's fallacy.

7
The Reduction Principle

So far we have seen two reasons, both bad, for accepting Tarski's
account of the logical properties. The first is the conflation of the
Tarskian or model-theoretic definitions with representational seman
tics. It is perfectly obvious why an adequate representational semantics
can yield necessary truths, and hence logical or analytic truths, insofar
as these are a species of those. But interpretational semantics is not
representational semantics; what we get trivially from the latter should
not be considered a deep or significant upshot of the former. The
second reason is the argument I have called Tarski's fallacy, an argument
that seems to have played a role in Tarski's own adoption of the
analysis.
Still, pointing out bad reasons for accepting an account is a far cry
from giving good reasons for rejecting it. In Chapter 1, I claimed that
Tarski's definitions are, in a sense, obviously mistaken. It is time I
explained what I consider the obvious mistake: an implausible prin
ciple on which both Tarski and Bolzano base their accounts. Since it is
simpler to discuss this principle when treating sentences rather than
arguments, let us once again direct our attention to the definition of
logical truth. This is just a matter of convenience, though; the points
can be made, with the obvious changes, about the analysis of logical
consequence.
Quantificational Accounts
Both Bolzano and Tarski propose quantificational accounts of logical
truth: both equate the logical truth of a sentence within a given lan
guage with the ordinary truth of a universally quantified sentence

appearing in a (perhaps) expanded version of the language. The
difference between the two accounts comes down to the nature of the
universal quantifiers in the associated sentence: for Bolzano, these
quantifiers are substitutional; for Tarski, objectual.
Suppose S' is a sentential function obtained by uniformly replacing
the variable terms in S, those expressions not contained in the set ℱ of
fixed terms, with variables of appropriate type. Recall that according
to Bolzano, S will be logically true (with respect to ℱ) if all the permissible
substitution instances of S' are true. Notice that these are simply
the truth conditions for the universal substitutional closure of S',
that is, for the sentence obtained from S' by appending an initial string
Uv₁ . . . Uvₙ of substitutionally interpreted universal quantifiers binding
each free variable in S'.1 Thus, Bolzano equates the logical truth of
S with the ordinary truth of the universal generalization

Uv₁ . . . Uvₙ[ S' ].

Now, the intent of Tarski's move from substitution to satisfaction is
not to alter the quantificational nature of the account, but to insist
that the associated sentence be the universal objectual closure of the
sentential function S'. Thus, recall that according to Tarski, sentence S
is logically true (with respect to the same choice of ℱ) if the sentential
function S' is satisfied by all sequences. But these are, once again,
simply the truth conditions for the following universal closure of S':

∀v₁ . . . ∀vₙ[ S' ].

But here, the universal quantifiers are of the standard, objectual
sort, the variables ranging over objects within the appropriate satisfaction
domains.2
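The two accounts can thus be set side by side (a schematic summary of the definitions just rehearsed, with ℱ the chosen set of fixed terms and S' the sentential function obtained from S):

Bolzano: S is logically true (with respect to ℱ) iff Uv₁ . . . Uvₙ[ S' ] is true, the quantifiers read substitutionally.
Tarski: S is logically true (with respect to ℱ) iff ∀v₁ . . . ∀vₙ[ S' ] is true, the quantifiers read objectually.

In either case, the logical truth of S is reduced to the ordinary truth of a single universally quantified sentence.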
It will often be the case that neither of these universal closures is a
sentence of our original language ℒ, simply because ℒ may not contain
the needed quantifiers. But if we can apply either Bolzano's or Tarski's
account of logical truth to ℒ, then we can define truth for an expanded
language ℒ⁺ containing the appropriate universal closure among its
sentences. In the case of Bolzano's account, this simply follows from
the assumption that each variable term is associated with a well-defined
substitution class and that we already have a truth predicate for the
original language ℒ. With Tarski's account, it follows from the fact
that if the satisfaction relation has been defined for a particular sentential
function S', then the standard clause for ∀ will guarantee that it is
defined for the sentential function ∀vS'.
The logical truth of S comes down to the ordinary truth, in ℒ⁺, of
the respective universal closure of S', at least according to our two
quantificational accounts of logical truth. This is easiest to see in the
trivial case in which ℱ contains all the atomic expressions occurring in
the sentence S. Here, the sentential function of interest is just S itself.
And since S, being a sentence, contains no free variables, it happens to
be its own universal closure (either substitutional or objectual). Conse
quently, according to both Bolzano and Tarski, the logical truth of S
will here correspond to its simple truth. Thus, if ℱ contains both 'Abe
Lincoln' and 'was president', then according to either quantificational
account, the sentence 'Abe Lincoln was president' will be logically true
just in case the following universal closure is true:

[Abe Lincoln was president].

Needless to say, there is no need here to resort to any expanded
language ℒ⁺ in order to express the above universal generalization
of our original sentence.3
Suppose now that we are testing for logical truth with respect to an ℱ
containing all expressions except names, that is, with names as the
only variable terms. Here, as we have seen, the logical truth of 'Abe
Lincoln was president' is, for Bolzano, determined by the truth values
of the permissible substitution instances of the sentential function 'x
was president': the original sentence is logically true just in case all the
substitution instances are true. Or, what comes to the same thing, this
sentence will be logically true if the following generalization is true:

Ux[x was president].

For Tarski, on the other hand, the sentence is logically true on this
choice of ℱ just in case 'x was president' is satisfied by all sequences, or
equivalently, in case it is satisfied by all individuals in the name domain
of the satisfaction relation. But this is just to say that the logical truth of
the original sentence is tagged to the ordinary truth of the objectually
quantified sentence

∀x[x was president].
Notice that neither of these closures actually occurs in the language of
Chapter 3. But it is easy to see that in either case there is a simple
language ℒ⁺ which contains the associated generalization and in which
the logical truth (with respect to the same ℱ) of 'Abe Lincoln was
president' will be equivalent to the ordinary truth of that universal
closure. As we have seen, the substitutionally quantified sentence will
be true, the objectually quantified sentence false.
To take one final example, if ℱ contains neither of the lexical components
appearing in the sentence 'Abe Lincoln was president', then
according to Bolzano's quantificational account, this sentence will be
logically true just in case the following closure is true:

UxUP[x P].

According to Tarski's account, the same sentence will be logically true
in case the corresponding objectual generalization is true:

∀x∀P[x P].

Clearly, neither of these sentences is true. Consequently, neither
Bolzano nor Tarski declares 'Abe Lincoln was president' to be logically
true on this selection of ℱ.
Three Principles
Emphasizing the quantificational nature of these two accounts opens
up a promising avenue for assessing them. We can now consider in
abstraction the various principles that govern the relation between
universally quantified sentences and their instances. Clearly, there are
at least two principles concerning this relation that we can endorse
without hesitation. First is a simple principle of logic, the principle of
universal instantiation:
(i) If a universally quantified sentence is true, then all of its instances are true as well.

Second is a principle, so to speak, about logic:

(ii) If a universally quantified sentence is logically true, then all of its instances are logically true as well.

This second principle follows from the fact that the set of logical truths
is itself closed under logical consequence and, as the first principle
states, an instance is indeed a consequence of its generalization. I will
call (i) the instantiation principle and (ii) the closure principle. Neither (i)
nor (ii) is the least bit controversial.
It should by now be apparent that a quantificational account of
logical truth is based on a third principle, quite different from either of
these, and at first glance considerably more surprising. The principle
is this:
(iii) If a universally quantified sentence is true, then all of its instances are logically true.

I will call (iii) the reduction principle. Of course, both of our quantificational
accounts take logical truth to be relativized to a choice of fixed
terms, of logical constants, and so the underlying principle will be a
somewhat modified form of (iii). There are a couple of ways this
modification might go, and I will consider both in due course. For now,

though, it is important to see that Bolzano and Tarski both base their
accounts on this rather unlikely principle, in some form or other.
Indeed, the substantial technical and mathematical attraction of
Tarski's account derives directly from principle (iii). For, assuming his
analysis is right, it is this principle that allows the direct application of
well-known techniques for defining truth to the task of defining logical
truth.
This is an important selling point for Tarski's account. Our ordinary
concepts of logical truth and logical consequence involve various no
tions that are notoriously difficult to pin down, notions like necessity, a
prioricity, analyticity, and so forth. But if the quantificational account is
correct, what it achieves is a truly remarkable reduction of obscure
notions to mathematically tractable ones. If it is right, the analysis
shows that we can in fact sidestep all of these difficult concepts, that we
can give a mathematically precise definition of the logical truths of a
language if we can just define the notion of truth for a slightly ex
panded language, or, what comes to the same thing, if we can define
the notion of truth relative to an arbitrary interpretation or
d-sequence.
This is a tremendous advantage, one we should not undervalue.
And it is an advantage not shared by representational semantics. When
we are doing representational semantics, we appeal to modal notions
from the very outset, in assessing the adequacy of our class of models
and our definition of truth in a model. In contrast, Tarski's account
equates the logical truth of a sentence with the ordinary truth of
another sentence, one that makes a nonmodal, nonepistemic, nonsemantic
claim about the world, about the world as it actually happens to be.
The source of this advantage is, of course, the reduction principle (iii).
Unfortunately, we cannot construe this striking technical advantage as
support for Tarski's analysis itself: we can hardly argue that the analy
sis is correct because it would simplify our lives if it were correct. Still, it
is important to acknowledge this benefit and to locate its source.
Now consider for a moment principle (iii). I will not spend much
time discussing the abstract acceptability of this principle. Unadorned
and unmodified, its implausibility could hardly be more apparent. Our
natural inclination is to reject the principle out of hand, to reject it for a
very simple reason: universal generalizations have no particular claim
to logical truth; they, like any sentences, can be true by mere happen
stance. And when such a sentence just happens to be true, there is no
guarantee that its instances will be logically true. Some might, but then
again some might not.
The problem with the reduction principle is that the mere truth of a
universal generalization can, in general, guarantee nothing more than

the truth of its instances. It cannot guarantee that its instances have
any other distinguishing characteristics. In particular, it cannot guar
antee that the instances will have any of the distinctive features,
whether modal or epistemic or semantic, ordinarily thought to set
logical truths apart from common, run-of-the-mill truths. Of course, if
the generalization itself is logically true, then the instances will be
logically true as well. This is guaranteed by the closure principle (ii).
But if the generalization is not logically true (if it is, say, a historical
truth, or an arithmetical truth, or a truth of physics), then the instances
will presumably be just as historical or arithmetical or physical.
Modifying the Principle (Part One)
Unmodified, the reduction principle is simply false. But as I have
stated it, this principle makes no mention of the set ℱ of fixed terms. So
before we count it too heavily against Tarski's analysis, we should
decide how the selection of ℱ figures into the principle. As I said
earlier, there are two ways this might work. Exactly which way we go
will depend on whether we construe Tarski's account as a completed
analysis of a fundamentally relational notion, one that varies with an
arbitrary choice of ℱ, or as an incomplete analysis of a more or less fixed
notion of logical truth. If the latter, then the analysis must be supple
mented with some account of how we go about making the proper
selection of fixed terms.
It is clear that Tarski was not, himself, entirely sure which way to
view the account. His examples of ω-incomplete theories and the
Gödel incompleteness results pull in the former direction. For in order
to declare the ω-rule logically valid, or to get a Gödel sentence to come
out a consequence of its corresponding theory, we have to presuppose
great leeway in our selection of fixed terms. On the other hand, when
we allow such leeway, we often get extremely counterintuitive results.
This fact pushes in the latter direction, toward thinking that there is
something, as yet unaccounted for, that makes some selections of ℱ
definitely wrong and others definitely right. Tarski's ambivalence
on this question comes out most clearly in his concluding remarks,
where he describes this problem as the most important open question
left by his account (1956, pp. 418-419).
I will consider the two possibilities in turn. The first is clearly the less
plausible construal, and so should be easier to set aside. Still, there are
some important observations to be made, even here. I will turn to the
second, more plausible construal in the following chapter.
According to the first view, Tarski has given us a completed analysis
of an irreducibly relational notion, the notion of logical truth with

respect to an arbitrary selection ℱ of fixed terms. If this is how we construe
the account, then the required modification of principle (iii) is straightforward.
It runs as follows:
(iii') If a universally quantified sentence is true, then all of its instances are logically true with respect to those expressions not bound by the initial universal quantifiers.

The way we arrive at this modification should be clear. The selection of
fixed terms determines which expressions do not get replaced by
variables in our move from S to S', and hence which expressions do
not get bound by the quantifiers in the associated universal closure,
∀v₁ . . . ∀vₙ[S']. If the closure is true, then Tarski's account declares
the original sentence logically true with respect to that selection of ℱ,
indeed, with respect to any selection that includes all the unbound
expressions in the closure. If the closure is not true, then the sentence
does not come out logically true on that same selection.
Now, is this modified principle any more plausible than the original
reduction principle? It is certainly harder to assess. And the reason for
that is pretty apparent: it simply is not clear what intuitive notion this is
meant to be an analysis of. Do we really have a concept of logical truth
that is keyed to an otherwise arbitrary selection of expressions? Can a
sentence be logically true with respect to some expressions, but not
logically true with respect to others? If our answer is no, then principle
(iii') must be rejected out of hand; it simply makes no sense as a
description of any ordinary concept of logical truth.
There are, though, some likely possibilities, some ways of viewing
our ordinary notion of logical truth as relativized in the manner sug
gested by principle (iii'). I will consider what seems the most natural;
similar remarks can be made about any reasonable alternative.
When we describe a simple logical truth, for example,
Either Lincoln was president or Lincoln was not president
we often say that the sentence is true solely by virtue of the meanings of
a certain subset of its component expressions. In this case, the expres
sions in question are 'or' and 'not'. Our intuition here is twofold. First,
it seems that the truth of this sentence does not depend on how the
world happens to be, on whether Lincoln was, in fact, president. This
is the import of our judgment that the sentence owes its truth solely to
the meanings of its constituent expressions. But there is a clear sense in
which the meanings of some of the expressions play a secondary role in
this judgment. Indeed, the fact that this sentence is logically true does
not depend on the specific meanings of either the name 'Lincoln' or
the predicate 'was president'. So long as the name is a genuine name

and the predicate a genuine predicate (assuming, for example, that
neither behaves like 'Nix') the sentence will still be true no matter how
the world happens to be. Thus, our second intuition is that the logical
truth of this sentence is dependent on the meanings of the terms 'or'
and 'not' in a way that it is not dependent on the meanings of 'Lincoln'
and 'was president'. This difference, the fact that the meanings of the
latter two expressions are pretty much irrelevant, makes it natural to
describe this sentence as true by virtue of the meanings of 'or' and 'not'
alone. Just so, it would not be unnatural to say that the sentence is
logically true with respect to these two expressions alone.
Now contrast this with the sentence
Either everyone is happy or someone is not happy.
This second sentence also strikes us as true solely by virtue of meaning:
to recognize its truth, there is no need to check who is and is not happy.
But in this case our judgment does not rely only on the meanings of 'or'
and 'not'. To be sure, the specific meaning of 'is happy' does not
matter: no expression of the same semantic category, no genuine
predicate, would alter this judgment. But the meanings of 'everyone'
and 'someone' most certainly do matter. For example, if 'everyone' had
meant something else, say, if it had meant 'no one' or 'every dog', then
this sentence could easily have been false. Clearly, the logical truth of
this sentence depends on the specific meanings of 'everyone' and
'someone' in a way in which it does not depend on the specific meaning
of 'is happy'. Thus, we might say that this sentence, unlike the previous
one, is not logically true with respect to 'or' and 'not' alone. Instead, it is
logically true with respect to the four expressions, 'or', 'not', 'everyone',
and 'someone'.
Take one more example. Consider the sentence
Either Leslie is a man or Leslie is not a bachelor.
Once again, this sentence strikes us as true simply by virtue of the
meanings of certain of its component expressions. But here, the rele
vant expressions include the predicates 'is a man' and 'is a bachelor'.
Again, the specific meaning of 'Leslie' makes little difference; this
sentence is true solely by virtue of meaning, and remains true so long
as 'Leslie' functions as a genuine name. Thus, it would be natural to
describe the sentence as true by virtue of the meanings of 'or', 'not', 'is a
man', and 'is a bachelor'. And by analogy with the earlier cases, we
might say this sentence is logically true with respect to these same four
expressions.
What we have here is a perfectly understandable notion that could
well pass for a relativized version of logical truth, for the notion of

logical truth with respect to an arbitrary selection of fixed terms. Of course, it
seems most natural to apply the term 'logical truth' when this concept is
relativized to expressions of traditional interest to logicians, expres
sions like 'or' and 'everyone'. When the collection includes such expressions
as 'is a man' and 'is a bachelor', we might want to revert to a
term like 'analytic'; 'logical truth' seems a bit of a stretch. But what is
important is that the basic idea, the idea of a sentence that owes its
truth to nothing more than the meanings of some (perhaps proper)
subset of its expressions, is easily extended to the general case. The
concept that results is something like this. Certain sentences are true
merely by virtue of the meanings of their expressions. With some of
these sentences, this fact will depend on the meanings of all of the
sentence's constituent expressions equally. But with others, including
all the examples we have looked at, the fact depends only on the
specific meanings of certain of those expressions, plus very general
assumptions about the semantic categories of the remaining expres
sions. This is what gives rise to an intuitive relativization of our concept
of logical truth, what makes sense of the notion of a sentence being
logically true with respect to an arbitrary selection of expressions. The
selected expressions are those whose specific meanings, as opposed to
general semantic category, are relevant to the sentence's analytic truth.
When the selected expressions include only those of a traditional
logical sort (connectives, quantifiers, and so forth) this relative
notion coincides with one standard conception of logical truth. The
basic idea of this conception is often described as follows: a sentence is
logically true if it is true solely by virtue of the meaning of the logical
vocabulary it contains. To the credit of the revised principle (iii'), this
notion admits of a natural extension to completely arbitrary selections
of the logical vocabulary. In the extreme, if all the expressions in a
sentence are considered logical, the sentence will be logically true
just in case it is analytic.
Now, does Tarski's analysis capture this notion of logical truth? The
answer is clearly no. We can see this very easily from principle (iii').
According to this view of logical truth, principle (iii') says that if a
universally quantified sentence is true, then all of its instances are true
solely by virtue of the meanings of the expressions not bound by the initial
quantifiers. But that claim is patently false: the mere truth of the universally
quantified sentence gives us no guarantee of this. This is easy to
see, whether in abstraction or by looking at the multitude of simple
counterexamples. A sentence of the form ∀v₁ . . . ∀vₙ[S'] can be true
for any number of reasons. Most obviously, indeed paradigmatically, it
might be true simply because the members of the appropriate satisfaction
domains, for whatever diverse reasons, all happen to satisfy S'. Thus,

every individual satisfies 'if x was president then x was a man': some
because they were men, some because they were never president, some
for both reasons at once. But this gives us no license to conclude that
the instances of this sentential function are true solely by virtue of the
meanings of their constituent expressions. Indeed, they are not. The
fact that all individuals satisfy this particular sentential function is not
guaranteed by the meanings of 'if . . . then', 'was president', and 'was a
man'; it is simply a matter of historical fact. And the fact that its
instances are true is obviously no more a matter of meaning alone.
For any sentence S and set ℱ of expressions, it makes sense to ask
whether S is true solely by virtue of the meanings of the members of ℱ.
If ℱ contains every expression in S, then this simply comes down to the
question of whether S is analytic, true by virtue of the semantic rules of
the language. But if ℱ does not contain certain expressions appearing
in S, say, e₁, . . . , eₙ, then the question is somewhat more complicated.
Presumably, what we want to know is whether S is true by virtue
of the particular meanings of the members of ℱ, given our background
assumptions about the semantic categories of e₁ through eₙ. Now, if
this is our general conception of logical truth, then it is clear that
Tarski's account does not capture it. For all that Tarski's account
requires is that the following closure be true:

(G) ∀v₁ . . . ∀vₙ[ S(eᵢ/vᵢ) ]

But this requirement is, as we have seen, too weak to guarantee that S is
true solely due to the meanings of the expressions in ℱ. For this closure
could well be true for all manner of reasons, reasons quite apart from
the purely semantic characteristics of its parts. It might be a mere
historical truth, an obscure arithmetical or set-theoretic truth, even a
purely coincidental truth. In none of these cases will its instances be
logically or analytically true.
Now, it is important to see that (G) is by no means irrelevant to the
logical or analytic truth of S. On the one hand, the mere truth of (G)
clearly provides insufficient grounds for concluding that S is true
solely by virtue of meaning. But it should be equally clear that if S owes
its truth to nothing more than meaning, and if this fact depends only
on the semantic categories of e₁ through eₙ, then (G) will indeed be true.
Think of it this way. Suppose that (G) is false, but that S is in fact
logically true. Then it is obvious that the logical truth of S must depend
on the specific meanings of one or more of the expressions e₁ through
eₙ. For the falsity of (G) can arise only if there is at least one d-sequence
that falsifies S simply by assigning different interpretations to e₁ . . . eₙ. But if
we have constructed the satisfaction domains properly, these interpretations
do not change the semantic categories of e₁ through eₙ. So
either S was not logically true in the first place, or its logical truth

depended on the particular meanings of e₁ through eₙ. In either case, S
would not be logically true with respect to the members of ℱ alone.
A couple of examples here might help. First, consider again the
sentence
Either Leslie is a man or Leslie is not a bachelor.
This sentence seemed logically true, in our extended sense, with respect
to the four expressions 'or', 'not', 'is a man', and 'is a bachelor'.
Since our judgment here presupposes only that 'Leslie' behaves semantically
as a name, it is clear that the following closure must be true:

∀x[x is a man or x is not a bachelor].
If we found a counterexample to this generalization (say, a female
bachelor), this would show either that the original sentence is not
analytically true or that its analyticity somehow depends on the specific
meaning of the name 'Leslie'. Of course, the fact that we do not find a
counterexample does not alone allow us to conclude that the original
sentence is true solely by virtue of meaning. To see that, we need only
apply the same test to the sentence
Either Leslie was a man or Leslie was not president.
Thus, the simple truth of the above generalization, though clearly
necessary, is not sufficient to show that its instances are logically true on
this selection of expressions.
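For comparison, the closure that corresponds to this last sentence, on the same selection of expressions, is

∀x[x was a man or x was not president].

Given the historical facts about who has held the presidency, this generalization is every bit as true as the one displayed above; yet its instance is plainly not true solely by virtue of meaning.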
The situation is similar with our other examples. Take, for instance,
the sentence
Either Lincoln was president or Lincoln was not president.
The logical truth of this sentence does not depend on the specific
meanings of 'Lincoln' or 'was president'. Consequently, it is clear that
the corresponding universal closure must be true:

∀x∀P[x P or not x P].

Any counterexample to this generalization would show either that the
original sentence was not logically true, or that its logical truth depended
on the specific meanings of the name 'Lincoln' or the predicate
'was president'. Thus, the fact that the original sentence is true solely
by virtue of the meanings of 'or' and 'not' does guarantee the truth of
the corresponding closure. But the mere truth of the closure does not
assure us that its instances are true by virtue of meaning alone, any
more than it did in the previous case.
In both of these examples, if we generalize further, say, if we
replace 'is a bachelor' or 'not' with appropriate variables, the result-

ing closures turn out to be false. This simply shows that the logical or
analytic truth of the original sentences did indeed depend on the
specific meanings of these expressions, not just on their general se
mantic category. But notice that even if the closures had in fact been
true, this would not alone have guaranteed that the original sentences
were true solely by virtue of the meanings of the remaining expres
sions. It would mean they were either true by virtue of meaning alone,
or true by virtue of facts about the world. But of course that is true of
any sentence whatsoever.
We do not usually think of logical truth, the ordinary concept, as
relativized to a completely arbitrary selection of expressions. This in
itself might be sufficient reason not to construe Tarski as offering a
completed analysis of an irreducibly relational notion. Still, it is clear
that there are various ways we can view logical truth as so relativized,
without doing injustice to the intuitive notion. In the above discussion,
I have considered the most natural: take logical truth to be a form of
analytic truth and relativize to the semantic function of the selected
expressions. But we could have chosen to emphasize some other dis
tinctive feature of the ordinary conceptsay, the fact that logical
truths are necessarily true or can be known a priori. Thus, the above
examples all express necessary truths, and the fact that they manage to
do so is peculiarly dependent on certain of the expressions they con
tain. In which case we might take the notion of logically true with respect
to the members of $ to mean something like expresses a necessary truth
by virtue of the expressions in 3\ But none of these relativized ver
sions of our ordinary concept is captured by Tarskis analysis. This, of
course, is quite obvious, once we give it a moments thought: Abe
Lincoln was president is not a necessary truth, or an a priori truth, or
an analytic trutheven when we take into consideration all of its
component expressions. Yet it comes out logically true, according to
Tarskis account, if we include all of these expressions in
What is important to note, though, is that if we construe Tarski's
account in this way (or misconstrue it, as the case may be) it becomes
perfectly clear exactly how the analysis has gone astray. The account
takes a merely necessary condition for logical truth to be a sufficient
condition. For if a sentence S is logically true with respect to a set ℱ of
expressions, where this notion is taken in any of the ways suggested
above, then it follows that the corresponding universal closure ∀v₁ . . .
∀vₙ[ S' ] will indeed be true. But the converse, namely principle (iii'),
simply does not hold. Recognizing this will become important in Chap
ter 11, when we discuss the significance of the completeness theorem
for first-order logic. For the moment, though, let us put it on the back
burner.

8
Substantive Generalizations

According to the reduction principle, any instance of a true universal
generalization is logically true. Unmodified, this principle is clearly
wrong. And it is equally clear that the basic problem with the principle
is not solved (indeed, is not really even addressed) by the modification
incorporated into (iii'). Before looking at the alternative modifica
tion, we should first consider just what we are up against. What,
exactly, is the defect that our modified principle must avoid?
The key problem is this. When we equate the logical truth of a
sentence with the ordinary truth of a universal generalization of which
it is an instance, we risk an account whose output is influenced by facts
of an entirely extralogical sort. Clearly, the question of whether the
sentence
(1) If Leslie was president then Leslie was a man

is a logical truth does not depend on the sorts of historical facts that
determine the truth or falsity of the generalization
(2) ∀x[if x was president then x was a man].

As it happens, (2) is true, and so any account that equates the logical
truth of (1) with the simple truth of (2) will mistakenly declare the
former logically true. But of course the basic problem with the account
would remain even if (2) happened to be false. In that case the account
would issue the right assessment of (1), but certainly not because it
coincides with our ordinary understanding of logical truth, or even
offers a reliable test for that property. The analysis would be just as
faulty (it would still entrust the logical status of (1) to the political
contingencies described by (2)), though in that case the defect would

not show up in the actual assessment of (1). But only thanks to the way
those contingencies happened to fall out, not thanks to the definition
itself.
This point is a simple one, but still easy to overlook. Let me change
the example slightly to emphasize it. Suppose we were presented with
a definition that ties the logical truth of
(3) If Leslie is a member of the Senate then Leslie is a man

to the ordinary truth of

(4) ∀x[if x is a member of the Senate then x is a man].

Clearly, we would consider this analysis unacceptable, whether or not
there were any woman senators. During those congressional terms in
which there are none, the flaw in the account would be highlighted by
the incorrect claim that (3) is logically true. But during terms in which
there are woman senators (as is now the case) the very same definition
would not suddenly be judged an adequate account of logical
truth. This even though it would not (and at present does not) issue
a faulty assessment of (3). All we need note is that (3) is not logically
true, and neither would it have been logically true had all senators been
men, that is, had (4) come out true.
This simple observation shows two things. Most obviously, it shows
that the suggested account does not capture, or even come close to
capturing, the ordinary concept of logical truth. For the extension of
that property, as we ordinarily understand it, is completely indepen
dent of the makeup of the Senate. Indeed, none of the key characteris
tics that we attribute to logical truths depend on the substantive, extralogical facts that determine the truth or falsity of (4). Thus, (3) is not a
necessary truth, or an a priori truth, or an analytic truth, and neither
would it have been any of these had (4) come out true.
More important, though, our observation shows that the suggested
account is not a reliable test for logical truth, one whose extension is
sure to be right. Indeed, by equating the logical status of sentences like
(3) with the simple truth or falsity of substantive, nonlogical claims like
(4), we clearly forfeit any hope of an internal guarantee of extensional
adequacy. For the extension of the account is determined by facts
(here, facts of a political or historical sort) that are entirely independent
of either the analysis itself or of the property of logical truth.
Obviously, there is no way to tell whether our assessment of (3) comes
out right, and hence whether the definition is extensionally correct,
without checking the makeup of the Senate. The analysis itself, on its
own, simply cannot guarantee this.

It is because of these failings that accounts based on principle (iii), at
least in its unmodified form, seem intuitively unacceptable. The key to
their failure is the dependence of their assessments on extralogical
features of the world. If our assessment of the logical status of a
sentence rests on substantive facts about Abe Lincoln, or about the
presidency, or about anything of the sort, then either the assessment
will as a matter of fact be wrong, or it would have been wrong had the facts
in question been otherwise. Either possibility is equally damaging to
our claim to have captured the ordinary concept of logical truth or to
have devised a reliable test for that property.
Modifying the Principle (Part Two)
It is clear that with certain selections of ℱ, the assessments made by
Tarski's account are subject to extralogical influence, depending, as
they do, on the truth values of substantive generalizations like (2) and
(4). This is most obvious when we include names or predicates among
the fixed terms: if 'Abe Lincoln' or 'was president' are included in ℱ,
then it is hardly surprising when we find that the output of the account
depends on nonlogical facts about Lincoln or the presidency. But it
seems equally apparent that if we exclude these expressions from ℱ,
and if they are not definable in terms of the remaining expressions,
then our assessments will not depend on facts about that particular
individual or that particular property. What this suggests is obvious:
maybe we can avoid an account that is subject to extralogical influence
by imposing restrictions of some sort on the selection of ℱ. Specifically,
perhaps we can sidestep the basic problem with principle (iii) by
including in ℱ only expressions of a distinctively logical sort. The
plausibility of the second construal of Tarski's account rests on exactly
this assumption.
We need to explore this possibility in some detail. The idea is that we
should not equate the logical truth of a sentence with the truth of just
any generalization of which it is an instance. Rather, the logical status
of a sentence should be tied only to generalizations of a very special
sort: those that contain in their matrix1 nothing but variables bound to
the initial universal quantifiers and constant expressions of a distinctively
logical sort. Thus, the logical truth of 'Abe Lincoln was president'
will not be determined by the truth values of:

[Abe Lincoln was president]

or:

∀x[x was president]

or:
∀P[Abe Lincoln P]

but rather by the truth value of:

∀x∀P[x P].

This simply because the first three closures contain expressions of an
intuitively nonlogical sort, and so the outcome of our test might de
pend on facts of a similarly nonlogical sort.
If we construe Tarski's account in this second way, the assumed
modification of principle (iii) is fairly clear. The modified principle
would run roughly like this:

(iii'') If a universally quantified sentence is true, and the constant expressions appearing in its matrix are of a distinctively logical sort, then all of its instances are logically true.

Now, the notion of a distinctively logical expression is rather vague,
but we will not worry about that just yet. Our present concern is
whether this construal of the account, however it is ultimately spelled
out, could render the analysis immune to the kind of extralogical
influence that plagues the general, quantificational approach. Will the
output of the definition still depend, as we would expect from principle
(iii), on intuitively irrelevant features of the world, say, historical
or physical or mathematical facts, or has that dependence been severed
by the more careful choice of fixed terms reflected in (iii'')?
However reasonable (iii'') may initially seem, it in fact suffers from
exactly the same defect as the original principle (iii). In other words,
even when we hold fixed only distinctively logical expressions, the
output of Tarski's account remains dependent on completely nonlogical
facts. This is true no matter how narrowly we construe the notion of a
logical expression, and whether or not we impose the cross-term restrictions
employed in the standard, first-order semantics.
The reason this has been overlooked is that with very weak lan
guages, such as those we have considered so far, we can so arrange it
that the definitions are extensionally correct in spite of this faulty
dependence. But that speaks no more in favor of the analysis than the
observation that we can safely tie the logical truth of
(3) If Leslie is a member of the Senate then Leslie is a man

to the simple truth of

(4) ∀x[if x is a member of the Senate then x is a man]

so long as the voting public cooperates.

The Size of the World


Viewed in this second way, Tarski's account rests on a quite straight
forward assumption. The assumption is that facts of a nonlogical sort
can influence the outcome of his test only if expressions of a nonlogical
sort are included in the set of fixed terms. This assumption is neither
obviously right nor obviously wrong. What is clear is that the converse
of the assumption is true: if ℱ contains expressions like names and
predicates, then the account will certainly be subject to all manner of
extralogical influence. Further, it is clear, or at any rate relatively clear,
that certain kinds of extralogical influence can be excluded by banning
names and predicates from 𝔉. Certainly, it is hard to see how facts
involving specific individuals and particular properties could affect
the outcome of the definition if there were no way to refer to those
individuals or those properties.
Still, not all facts of a nonlogical sort involve specific individuals or
properties. Let us begin by considering the most obvious examples:
facts concerning the size of the universe, the number of individuals that
exist. Take, for instance, the following sentences, touched upon briefly
in Chapter 5:
σ₂: ∃x∃y(x ≠ y)
σ₃: ∃x∃y∃z(x ≠ y ∧ y ≠ z ∧ x ≠ z)
For each n, the sentence σₙ says that there are at least n objects.
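In general, σₙ may be written out as follows (a standard schema given here for reference; the subscripted variables are an editorial abbreviation rather than the text's own notation):

$$\sigma_n \;:=\; \exists x_1 \exists x_2 \ldots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j$$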
According to the standard conception, none of these should come out
logically true: the size of the universe is surely not a matter of logic.2
But if we consider the existential quantifier, the identity predicate, and
the truth functional connectives to be distinctively logical expressions,
then Tarski's account equates the logical truth of these sentences with
the simple truth of the following (trivial) closures:
[σ₂]: [∃x∃y(x ≠ y)]
[σ₃]: [∃x∃y∃z(x ≠ y ∧ y ≠ z ∧ x ≠ z)]
Clearly, some of these sentences are true. Exactly how many depends, of course, on the size of the universe, that is, on how many objects there happen, in fact, to be. If the universe is infinite then all of the sentences are true, and so will be mistakenly judged logically true. If it is finite, then only a finite number of them will be judged logically true. But the important point is not how many of these sentences the definition gets wrong, but rather the fact that the assessments here are clearly dependent on a nonlogical state of affairs. This is exactly the defect that seemed so apparent when we first considered the un-
modified principle (iii). As we noted then, whether a sentence is logically true should not depend on substantive, extralogical facts, whether historical or physical or mathematical. This is the problem with the original principle (iii), and it still infects (iii″), at least if this new principle ties the logical truth of σₙ to the simple truth of [σₙ]. Though perhaps true, [σₙ] is not a logical truth, and neither of course is its instance, σₙ.
On their own, these examples are not tremendously persuasive. For
there are two quite different responses that immediately suggest them
selves. First, none of the above sentences comes out logically true in the
standard interpretational semantics for first-order languages, and this
fact is indeed independent of how many individuals the universe as a
whole happens to have. Of course, since the standard semantics treats
the quantifiers as variable and then imposes the cross-term restrictions
that this move ultimately necessitates, this response presupposes some
resolution of the problems discussed in Chapter 5. Still, we should look
for a solution to the present difficulty in this direction. Second, even if
cross-term restrictions cannot be made consistent with Tarski's analysis, we could always question whether the identity predicate is in fact a distinctively logical expression, and so whether it should be included in 𝔉. Here, too, we might find a solution to this problem. Let us
explore these two suggestions in turn, to see whether the assessments
made by the resulting accounts are independent of questions about the
size of the universe.
The first suggestion would have us vary the domain of quantification, that is, the interpretation of ∃. Once we do this, the logical status of the sentences σ₂, σ₃, . . . no longer depends in any way on the size of the universe. The reason for this is simple: no matter how small or large we assume the universe to be, whether finite or infinite, we can always interpret ∃ in such a way that it quantifies over a very small subcollection of the actually existing objects, including a subcollection of one. Thus, if ∃ is interpreted to mean, say, some sixteenth president of the United States, rather than the unrestricted something, then all of the σₙ will clearly be false. For in fact there were not two distinct sixteenth presidents, or three, or four, and so forth.
Instead of talking about reinterpreting ∃, consider how this same tactic converts to the quantificational framework. To do this, we need an existential quantifier variable, E, whose satisfaction domain consists of various subcollections of the universe, various quantifier restriction sets.3 According to the present suggestion, the logical truth of σ₂, σ₃, . . . is dependent on the truth values of the following closures:
∀[σ₂(∃/E)]: ∀E[ExEy(x ≠ y)]
∀[σ₃(∃/E)]: ∀E[ExEyEz(x ≠ y ∧ y ≠ z ∧ x ≠ z)]

These closures say, in effect, that every subcollection of the universe (that is, every quantifier restriction set) contains at least two (three, four, . . .) members. But sequences that assign a singleton to E satisfy none of the closures' constituent sentential functions. Consequently, all of these closures come out false, no matter how large the universe actually happens to be. This is how Tarski's account avoids
declaring any of our original sentences logically true when we take the
existential quantifier as a variable term. This is the expedient built into
the standard interpretational semantics for first-order languages.4
Now, does this tactic solve the problem with Tarski's account, or does it simply treat one of the symptoms? Admittedly, since the truth values of the above closures are independent of the size of the universe, the assessments of the σₙ no longer depend on this extralogical
fact. And this is as it should be. But this hardly guarantees that the
resulting account escapes such influence elsewhere. Indeed, we do not
have to look far to find sentences whose assessment still depends on
precisely the same fact. Consider, for example, the negations of our
original sentences:
¬σ₂: ¬∃x∃y(x ≠ y)
¬σ₃: ¬∃x∃y∃z(x ≠ y ∧ y ≠ z ∧ x ≠ z)

For each n, ¬σₙ says that there are fewer than n objects in the universe.
Once again, none of these should come out logically true, no matter
how large or small the universe happens to be. But consider how the
standard account assesses these sentences. Treating the existential
quantifier as variable, the definition tags the logical status of these
sentences to the ordinary truth values of the following closures:
∀[¬σ₂(∃/E)]: ∀E[¬ExEy(x ≠ y)]
∀[¬σ₃(∃/E)]: ∀E[¬ExEyEz(x ≠ y ∧ y ≠ z ∧ x ≠ z)]
Notice what these closures say. For each n, the sentence
(5)  ∀[¬σₙ(∃/E)]
claims that every subcollection of the universe contains fewer than n objects. And this will be true just in case the largest subcollection of the universe, namely, the universe itself, contains fewer than n objects.5
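Put compactly (an editorial restatement of the point just made, writing U for the universe and letting E range over subcollections of U):

$$\forall E\, \neg\, \exists x_1{\in}E \ldots \exists x_n{\in}E \bigwedge_{1 \le i < j \le n} x_i \neq x_j \quad\Longleftrightarrow\quad |U| < n,$$

since U itself is among the admissible values of E, and every subcollection of U contains fewer than n objects whenever U does.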
Thus, if the universe is finite, the present account will mistakenly pronounce an infinite number of the sentences ¬σ₂, ¬σ₃, . . . logically true; in that case, the account will be extensionally incorrect. Of course, if the universe is infinite, none of these sentences will be declared logically true. But is that because the account has captured our ordinary notion of logical truth? After all, these sentences are not in fact logically true, but neither would they be logically true if the
universe were finite. Yet according to the standard account, the sentence ¬σₙ is not logically true only because sentence (5) is false, that is, only because there are more than n objects in the universe.
Sentence (5) makes a perfectly ordinary claim about the world, one that has little, if anything, to do with logic. If the world has fewer than n objects, then (5) is true; if more than n, then it is false. When we trust the logical status of ¬σₙ to the truth or falsity of the substantive claim
made by (5), we put ourselves in the same position as we were with (1)
and (2) or (3) and (4). If there are fewer than n objects, our situation is
parallel to (1) and (2), or (3) and (4) during terms in which there are no woman senators. Then, our account will be extensionally incorrect, thanks to the truth of (5). If there are more than n objects, our situation is more like (3) and (4), or (1) and (2) had a woman been elected president. In that case, the defect in the analysis remains, though it is disguised by the fact that (5) is false. But whichever is the appropriate parallel, it is clear that with the current selection of fixed terms (that is, the selection employed by the standard, first-order semantics) the assessments made by our test are still influenced by this extralogical state of affairs. The account still suffers from the basic defect of principle (iii).
How does the standard semantics deal with this problem? After all,
the sentences ¬σ₂, ¬σ₃, . . . do not come out logically true according
to the usual, model-theoretic account, at least as it is ordinarily pre
sented; if they did, the analysis would have few, if any, defenders.
Exactly what feature of the standard semantics assures us that none of
these is declared a logical truth? Is there some subtlety about the
account that we have simply overlooked?
The answer is that nothing about the standard semantics assures us
of this, nothing whatsoever. We get our assurance from an assump
tion made quite independently of our account of the logical
properties, an assumption, needless to say, about the size of the uni
verse. When we present the standard first-order semantics, we gener
ally do so within some set-theoretic framework or other. We build our
models out of objects cobbled together from the set-theoretic universe,
and in doing so we naturally assume various facts about that universe.
Generally, the specific framework we presuppose is that of Zermelo-Fraenkel set theory, but of course nothing about the analysis dictates
this particular choice, or even that our background theory should be a
set theory rather than a class theory or category theory or property theory.
Now, in the standard presentation, the only thing that assures us
that none of the above sentences comes out logically true is the axiom
of infinity assumed in the underlying set theory. It is this axiom that
guarantees the existence of infinite models (that is, infinite restriction

sets for interpreting our quantifiers), and so that guarantees interpre


tations of ∃ in which all of these sentences come out false. Or, to put it
in quantificational terms, it is this axiom that entails, for any n, the
falsity of (5), and so assures us that the account does not wrongly
declare ¬σₙ a logical truth. If the very same semantics were developed within a theory that does not presuppose that axiom (say, Kripke-Platek set theory), then our assurance would instantly evaporate. The
analysis itself provides no guarantee; the outside assumptions are what
do the trick.
It is important to see exactly what is going on here. The basic
problem to be addressed is this. Our assessments of the logical status of
¬σ₂, ¬σ₃, . . . , like our earlier assessments of (1) and (3), are depen
dent on a nonlogical feature of the world, the size of the universe. So
long as this dependence remains, our definition clearly has not cap
tured the ordinary concept of logical truth, nor do we have any inter
nal guarantee of its extensional adequacy. For none of these sentences
is a logical truth, and neither would any of them be logical truths if the
universe were finite. Whether or not these sentences are logically true
simply does not depend on the truth of substantive claims about the
size of the universe. But when we give the standard, first-order seman
tics for this language, do we solve this problem? On the contrary, far
from severing the faulty dependence, we simply annex to our seman
tics a sufficiently powerful assumption about the world, namely, the axiom of infinity, which then provides us with the right as
sessments of these particular sentences. This is equivalent to solving
our incorrect assessment of (1) by electing a woman president: the
tactic may get the assessment right, but it hardly corrects the underly
ing defect.
We might dramatize the point in the following way. Suppose the
standard, interpretational semantics really did capture our ordinary
understanding of the logical properties. If this were the case, then it
would be inconsistent (not just wrong, but inconsistent) for a finitist to hold that none of the sentences ¬σ₂, ¬σ₃, . . . is logically true. But of course there is nothing whatsoever about the finitist's basic assumption
that makes this an incoherent position; quite to the contrary, it is the
only reasonable stand to take. The finitist is perfectly within his rights
to claim that the universe could have been larger than it happens to be:
although there might in fact be exactly n objects (both physical and
mathematical), there might have been n+1 objects, or n+2, and so
forth. From this, the finitist should obviously be allowed to conclude
that none of these sentences is logically true. But the standard account
of logical truth would rule out this conclusion: if there happen to be no more than n objects, then ¬σₙ₊₁, ¬σₙ₊₂, . . . are all declared logical
truths. This in spite of the fact that they are not, even from the finitist's
perspective, either necessarily true, or analytically true, or knowable a
priori. This in spite of the fact that they have none of the distinctive
features ordinarily attributed to logical truths.
Note that this point does not depend on our agreeing with the
finitist's position. Indeed, when it comes to the ontology of mathe
matics, I personally tend toward a rather naive platonism. But if we
even acknowledge that the finitist's position is a coherent one, then it
follows that the standard account has gotten things wrong. For clearly,
from the finitist's perspective, no claim about the specific size of the
universe is logically true, even though some such claim might, purely
as a matter of fact, be true. Appealing to the finitist's position is just a
handy way to emphasize the defect in the account, to emphasize that
generalizations like (5) do indeed make substantive, extralogical
claims. This defect remains even if the finitist is in fact wrong.6
Indeed, we can put this point even more strongly. The problem
these sentences bring out remains even if we consider the finitist to be
necessarily wrong, that is, even if we take the axiom of infinity to be a
necessary truth. All we need recognize is that the axiom of infinity, and
its various consequences, are not logical truths. This is all that is re
quired to see that the output of Tarski's account is still influenced by extralogical facts, in this case, by the set-theoretic fact expressed by the axiom of infinity. It is exactly such potential influence that makes the original reduction principle (iii) seem so implausible, and it is clear that the influence survives the modification built into (iii″).
Let us pause for a moment and take stock. When we consider the
existential quantifier, the identity predicate, and the truth functional
connectives to be distinctively logical expressions, the output of Tarski's account clearly depends on the size of the universe: whether σₙ
comes out logically true is determined by whether there are more than
n objects in the universe as a whole. To block this dependence, and the
faulty assessments it would yield, the standard, model-theoretic ac
count varies the domain of the quantifier. But the output of the
resulting account is no less dependent on the size of the universe:
whether ¬σₙ comes out logically true is still determined by the number
of objects in the universe as a whole. Here, though, the standard
account offers no remedy: it simply appeals to an external assumption
about the size of the universe, and leaves the faulty dependence intact.
Now, if we still want to claim that the standard semantics avoids the
intuitive defect in principle (iii), there seems only one recourse avail
able. We must claim that the axiom of infinity does not express an
extralogical claim, and so that our account is not, at least on this
score, subject to extralogical influence. But this response is implausible
in the extreme. For if it is a logical truth that there are infinitely many

objects, then it must equally be a logical truth that there are at least
twenty-seven. So to execute this defense consistently we would have to
argue that, contrary to our initial impressions, all of the σₙ really should be judged logically true. We might put it this way. The claim that there are at least twenty-seven objects (σ₂₇) is not a logical truth, by anyone's lights. Neither is the claim that there are fewer than twenty-seven (¬σ₂₇). Yet if Tarski's account of logical truth is right, ¬σ₂₇ is not logically true only because ∀[¬σ₂₇(∃/E)] is false, that is, only because there are at least twenty-seven objects. Clearly, the outcome of Tarski's account here depends, by anyone's lights, on matters of a nonlogical sort.
The assessments made by the standard, first-order semantics are
influenced by at least one kind of nonlogical fact: the size of the
universe. In a moment we will see that they depend on other such facts
as well. At this point, though, let us briefly look at the second sug
gestion for dealing with the σₙ: treating the identity predicate as a
nonlogical expression. I will return to the standard account in a
moment.
If we exclude the identity predicate from 𝔉, Tarski's account equates the logical truth of σ₂, the claim that there are at least two objects, with the ordinary truth of the closure
(6)  ∀R[∃x∃y¬(xRy)].

Roughly speaking, (6) says that no relation relates everything with


everything. Of course, the exact claim made by this sentence will
depend on how we specify the satisfaction domain for the relation
variable R, in particular, whether we take that domain to consist of
relations themselves, or instead extensions of relations, that is, sets of
ordered pairs. But intuitively, the claim seems false: certainly, the
relation of coexistence, or that of being either identical or not identical with,
relates absolutely everything in the universe with absolutely every
thing in the universe. And consequently, the suggested account would
not mistakenly judge σ₂ logically true, no matter how large the universe happens to be. This holds as well for the other σₙ.7
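For reference, the closure associated with an arbitrary σₙ under this suggestion has the form (an editorial rendering in the notation of (6), not displayed in the text itself):

$$\forall R\,\Bigl[\exists x_1 \ldots \exists x_n \bigwedge_{1 \le i < j \le n} \neg(x_i R\, x_j)\Bigr]$$

Assigning to R the universal relation of coexistence falsifies the embedded existential claim, so each of these closures comes out false however many objects there happen to be.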
Once again, our assessments of the σₙ have been divorced from
questions about the size of the universe. And this is as it should be. But
once again, the same problem simply recurs elsewhere. To see this,
consider the following three sentences.
α: ∀x∀y∀z(x is taller than y ∧ y is taller than z → x is taller than z)
β: ∀x¬(x is taller than x)
γ: ∀y∃x(x is taller than y)

The first two sentences say that taller than is transitive and irreflexive,
which of course it is; the third sentence claims (falsely, let us suppose)
that there is no tallest thing. Now conjoin these sentences and negate
the result:
(7)  ¬(α ∧ β ∧ γ)

Since we are assuming that γ is false, (7) will of course be true. But
clearly it is not logically true: it could have been the case (indeed might
actually be the case) that everything is shorter than something. (7) is
not a necessary truth, or an a priori truth; neither is it true solely by
virtue of meaning.
Now, according to the present strategy, we are not treating the
identity predicate as a logical expression. But since the identity pre
dicate does not appear in (7) this will have no effect, one way or the
other, on our assessment of this sentence. Here, Tarski's account equates the logical truth of (7) with the ordinary truth of
(8)  ∀R[¬(∀x∀y∀z(xRy ∧ yRz → xRz) ∧ ∀x¬(xRx) ∧ ∀y∃x(xRy))].

What (8) says is that every transitive, irreflexive relation has a minimal
element.8 If we take the satisfaction domain for the variable R to
consist of arbitrary sets of ordered pairs, then this closure will be true if
and only if the universe as a whole is finite. On the other hand, if we
take the satisfaction domain for R to consist of genuine relations,
rather than sets of ordered pairs, then (8) will certainly be true if the
universe is finite, and might be true if the universe is infinite as well. In
the latter case, the truth of (8) would depend on how those relations
happen to hold among the existing individuals; on whether, for exam
ple, there is in fact a tallest, or in fact a shortest, or in fact a largest, or in
fact a smallest, and so forth.
In any event, the truth of (8), and so our assessment of (7),
depends at the very least on the actual size of the universe, and per
haps on additional nonlogical facts as well. If every transitive, irreflex
ive relation happens to have a minimal element, whether due to the
finitude of the universe or for other reasons entirely, then our as
sessment of (7) will be incorrect. On the other hand, if some relations
do not have minimal elements, our assessment of (7) will be correct.
But is it correct because we have captured the genuine notion of logi
cal truth, or found a reliable test for that property? Surely not: sen
tence (7) is not a logical (necessary, a priori, analytic) truth, and neither
would it be a logical (necessary, a priori, analytic) truth if the relevant
facts had been different, say, if the universe were finite, or if it were
infinite but somewhat homogeneous.
Notice that in moving from (7) to (8) we treated the various quanti-

fiers as fixed. But precisely the same dependence remains even if we


vary the domain of quantification, as we do in the standard semantics.
To see this, consider what happens when we vary the domain and
impose the standard cross-term restrictions. To simplify matters, as
sume that δ is the sentence we get from (7) by replacing occurrences of ∀v with the equivalent ¬∃v¬. Thus δ contains only the existential
quantifier, and so the interpretation of this quantifier is all we need to
worry about.
Now, if we were just to vary the domain of quantification without
imposing cross-term restrictions, Tarski's account would equate the
logical status of (7) with the truth or falsity of the closure
∀E∀R[δ(taller/R, ∃/E)].
But of course, the standard semantics also requires that the interpreta
tion of predicates be drawn from the domain of quantification, and so
we need to build this cross-term restriction into our closure. The
relevant closure will be the following:
(8')  ∀E∀R⊆E×E[δ(taller/R, ∃/E)].

The standard account ties the logical truth of (7) to the ordinary truth
of (8'). What (8') says is that any transitive, irreflexive relation drawn
from any subcollection of the universe has a minimal element. Taking
R to be satisfied by arbitrary sets of ordered pairs, this sentence will
again be true just in case the universe as a whole is finite. It is, in fact,
simply equivalent to (8).
If the universe is finite, the standard semantics mistakenly declares
(7) a logical truth. This becomes obvious, of course, if we once again
adopt the finitist's perspective. If there are only finitely many objects,
both physical and mathematical, then clearly no model will contain a
transitive, irreflexive relation with no least element. For then the
model would have to contain an infinite number of objects, and so too
would the universe as a whole, contrary to our assumption. Our mod
els are, after all, simply parts of the universe. Thus, if the universe at
large is finite, there will, as a matter of fact, be no models in which (7)
comes out false.
At this point, some readers will no doubt raise the following objec
tion. Suppose we grant, for the sake of argument, that the universe is
finite. In which case all actual models will indeed be finite as well. Still,
nothing stops us from claiming that the universe could have been infi
nite, and so there could have been infinite models. In which case, even
though there may be no actual models in which (7) is false, there are, so
to speak, possible models that falsify (7). In which case, the only reason
(7) comes out logically true is that we are artificially limiting ourselves

to actual models, rather than considering all possible models. Once we


do that, (7) will no longer be deemed a logical truth, regardless of the
actual size of the universe.
To defend Tarski's account in this way is deeply and fundamentally
confused. Of course we need not deny that the universe could have been
infinite, or that there could have been models that falsify (7), even if in
fact there are not. Indeed, this is simply another way of saying that the
corresponding generalization, (8'), though perhaps actually true, could
have been false. To say that there are possible models in which (7)
comes out false is to say nothing more or less than that there could have
been an infinite domain from which we could have drawn a relation
that fails to satisfy the matrix of (8'). In other words, we are simply
observing that (8'), even if true, is not necessarily true.
But that, of course, is the whole crux of the issue. Tarski's account
equates the logical truth of a sentence with the ordinary truth of the
corresponding closure; the question of whether that closure could have
been false is entirely beside the point. The account does not require,
first, that the closure happen to be true and, second, that it could not
have been false, i.e., that it be necessarily true. If it did, then our earlier problem with (1) and (2) would be solved without banning predicates from 𝔉. After all, there could have been a woman president, even
though in fact there were none. Indeed, we could then even include
names among the fixed terms with apparent impunity. For the closure
[Abe Lincoln was president]
though actually true, certainly could have been false. The point, I trust,
is clear. Far from being a defense of Tarski's account, this suggestion
amounts to a wholesale abandonment of the analysis.
The reason this suggestion seems natural is pretty obvious. We all
recognize that (7) is not a logical truth. Or, to put it in terms of the consequence relation, we all recognize that the claim that there is a tallest object (that is, ¬γ) is not a logical consequence of the mere fact that the taller than relation is transitive and irreflexive (that is, α ∧ β).
And we recognize these things quite independent of our beliefs or
assumptions, whatever they may be, about the actual size of the uni
verse. No doubt we simply reflect that there might have been no tallest
object, whether or not there actually is. Questions about whether the
universe happens to be or happens not to be finite are completely
irrelevant to this fundamentally modal (or perhaps epistemic) intu
ition. For when we ask whether ¬γ follows from α ∧ β, we are inter
ested in how things could have been, not how they actually are or how
we believe them to be. This is why the appeal to possible models seems
so attractive, even though antithetical to the Tarskian account. But all

this shows is that Tarski's account, even with standard selections of 𝔉


and even when supplemented with cross-term restrictions, fails to
capture any modal, epistemic or semantic features of the genuine
notions of logical truth and logical consequence.
These same sorts of examples can be multiplied at great length. For
example, it is a well-known theorem of algebra, due to Wedderburn,
that every finite division ring is a commutative field. Thus, if the
standard, model-theoretic account of consequence is right, it would be
inconsistent for the finitist to deny that the commutativity of * is a
logical consequence of the axioms for a division ring. But it clearly
does not follow from those axioms: even if we adopt the finitist's
perspective and assume there are in fact only finite structures, there
surely could have been infinite structures, say, if the physical universe
had been infinite. In which case there could have been noncommutative
division rings, even if in fact there are none.9
Rather than run through further examples of this sort, let me simply
make one final point. The point is that this same defect shows up in
Tarski's account even if the only expressions we hold fixed are the truth functional connectives. Consider, for example, the sentence
τ₁: Lincoln was president → Washington was president.
If the only expressions we consider distinctively logical are the truth
functional connectives, then Tarski's account equates the logical status of this sentence with the ordinary truth value of the closure
(9)  ∀x∀y∀P[x P → y P].

If we take the satisfaction domain for the predicate variable P to


consist of arbitrary sets, then (9) will be true if and only if the universe
as a whole contains just one object. (If the satisfaction domain consists
of properties, (9) will be true just in case the universe contains only one
type of object, that is, if all objects share the same properties.) It turns out that for any n, we can construct a sentence τₙ which, like τ₁,
contains only truth functional operators, and whose corresponding
closure is true just in case there are, as a matter of fact, no more than n
objects (or n types of objects). But none of these sentences is logically
true, and neither would any of them be logically true if the universe
were finite (or if it had finitely many types).
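To illustrate the kind of construction at issue, here is one way τ₂ might go (an editorial sketch rather than the author's own example; it assumes three names a, b, c and two predicates P₀ and Q₀ in the language, with the variables P and Q of the closure ranging over arbitrary sets):

$$\tau_2 \;:=\; \neg\bigl[(aP_0 \wedge \neg\, bP_0 \wedge \neg\, cP_0) \wedge (bQ_0 \wedge \neg\, cQ_0)\bigr]$$

$$\forall x \forall y \forall z\, \forall P \forall Q\; \neg\bigl[(xP \wedge \neg\, yP \wedge \neg\, zP) \wedge (yQ \wedge \neg\, zQ)\bigr]$$

If there were three distinct objects, letting x, y, z be those objects and assigning to P and Q the singletons of the first and second would falsify the matrix; with no more than two objects no such assignment exists. So the closure is true just in case there are at most two objects, and analogous constructions yield τₙ for larger n.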
Now, my point here is to emphasize that the problem with Tarski's account has nothing to do with the question of whether we are mistakenly including in 𝔉 expressions of a nonlogical sort. Surely, if any
terms deserve the status of distinctively logical expressions, the truth
functional connectives do. The problem does not come from an incor
rect choice of 𝔉, but rather from the assumption that we can exclude

the influence of nonlogical facts by excluding nonlogical expressions


from the set of fixed terms. This assumption is simply mistaken. Principle (iii″) suffers from exactly the same defect as the original principle (iii).
Other Extralogical Influence
So far, I have emphasized how facts about the size of the universe
influence the assessments made by Tarski's account, and how this
influence remains no matter how tightly we construe the notion of a
logical expression. It would be a mistake, though, to think that this is
the only type of extralogical fact determining the output of the defini
tions. Indeed, it is not. It just so happens that this influence is relatively
striking, since it forces the standard account to rely on the axiom of
infinity, an obvious appeal to a nonlogical assumption.
Actually, it should by now be clear that many other assumptions,
besides the axiom of infinity, influence the outcome of the account. We
can emphasize this by taking a closer look at any of our examples
involving predicate or relation variables. Consider, for instance, the
sentence τ₁, and its corresponding closure (9). Notice that simply
assuming that the universe has more than one object, or even infinitely
many objects, does not alone guarantee that (9) is false. What we need
to know in addition is that two or more of the existing objects are
distinguishable by members of the predicate domain of the satisfaction
relation. As I mentioned earlier, if the predicate domain consists of
properties, then (9) would be true in any completely homogeneous
world, regardless of its size.
In the standard, interpretational semantics, what facts guarantee
that closure (9), and its counterparts for τ₂, τ₃, . . . , all come out false?
First is the assumption of the axiom of infinity; this is what guarantees
that there are two (three, four, ...) objects to be distinguished. But this
alone does not guarantee that the objects are distinguishable. When
we take the predicate domain to consist of arbitrary sets, what does this
job is the pair-set axiom. This axiom assures us that every object is (in
the now-relevant sense) distinguishable from every other, each being
the only member of its own singleton set.10 Given this assumption, the
truth value of (9) depends only on the size of the universe; without this
assumption, the closure raises additional issues as well.
Now, nothing about Tarski's analysis leads us to construe the vari
able P as ranging over sets rather than properties, and this decision
makes for a considerable difference in the claims made by generaliza
tions like (9). When we interpret such claims in the latter way, their
appeal to extralogical facts seems quite blatant: clearly, questions

about how many different types of objects exist are not a matter for logic
to decide. But even when we settle on the set-theoretic construal of
these variables, as is standardly done, the claims that result are, not
surprisingly, of a set-theoretic, not a logical, sort. When we give the
standard semantics, we do nothing to prevent this dependence, but
simply import all of our background assumptions about the universe
of sets in order to carry out our assessments. The assessments end up
depending on all of these assumptions, from the most powerful to the
most mundane. The only difference is that, as the underlying assump
tions become more powerful, the faulty dependence becomes increas
ingly hard to overlook.
Let us briefly look at another example of the same problem, one that
has frequently been discussed but repeatedly misdiagnosed. It is often
said that when we move to a second-order language, one that allows
quantification of predicate variables, logical truth becomes a relative
notion, one that depends on the underlying set-theory presupposed.
The reason people say this is that when we apply the standard account
of logical truth to a full second-order language, there are sentences
that come out logically true if we assume (say) the continuum hypothe
sis, but that do not come out logically true if we assume its negation.11
This is often taken to be an objectionable feature of second-order
logic: after all, why should the logical truth of a sentence depend on
such highly abstract set-theoretic claims, claims that are not, intuitively, a
matter of logic at all?
By now, what is going on here should not surprise us in the least.
What we have is a sentence of our second-order language, call it C,
whose logical status is being tied to the ordinary truth or falsity of a
certain generalization, say,
(10)  ∀x . . . ∀P[C′].

It turns out, though, that the facts described by (10) are of an entirely
extralogical sort: whether (10) is true depends on nothing more nor
less than the continuum hypothesis, and clearly, neither it nor its negation is a logical truth. But this is simply the faulty principle (iii) at
work. The fact that our assessment of C depends on the substantive
claim made by (10) is no different from the fact that our assessment of
(7) depends on the size of the universe (the substantive claim made by
(8) ), or that our assessment of (3) depends on the makeup of the Senate
(the substantive claim made by (4)).
Of course there is a difference between (10) and (8), but not one of
much import. The truth value of (8) is guaranteed by the axiom of
infinity, which, though certainly not a matter of logic, is nonetheless a
far more comfortable assumption to make than either the continuum

hypothesis or its negation. It is because of this that we find it easier to


overlook this particular appeal to a nonlogical state of affairs when
giving our first-order semantics. In contrast, when we move to a
second-order language and come up against the somewhat daunting
visage of the continuum hypothesis, the appeal is rather hard not to
notice. But the difference here does not show that there is anything
peculiar about the logic of second-order languages, or that, as it is
sometimes put, second-order logic is really set-theory in disguise.
The problem lies with our faulty account of the logical properties,
which mistakenly equates the logical status of C with the ordinary truth
or falsity of (10). But this is no more or less mistaken than equating the
logical status of (7) with the truth value of (8), or the logical status of (3)
with the truth value of (4). Sentences like C are not logically true, and it
is only our allegiance to a faulty account of logical truth that makes us
think that they are (or would be, if the continuum hypothesis were
true).
Unmodified, principle (iii) is obviously false. The mere truth of a
universal generalization can only guarantee that its instances are true;
it cannot guarantee that they are logically true. Of course, if the
generalization itself is logically true, then the instances will be logically
true as well: that was the substance of the uncontroversial closure
principle (ii). But if the generalization is not logically true, if it makes,
say, a substantive historical or physical or set-theoretic claim, then
neither will its instances be logical truths. The second construal of
Tarski's account relies on the assumption that no such substantive claims can be made by generalizations of a particular sort, namely,
those that contain in their matrix only distinctively logical expressions.
But that assumption is simply false, as can be seen from the claims
made by such generalizations as (5), (8), (9), and (10); even if true,
none of these claims is logically true. And neither, of course, are their
instances.

9
The Myth of the Logical Constant

I have not claimed that when we apply the standard account of the
logical properties to certain simple, first-order languages we get an
incorrect extension. Indeed, as I explain in Chapter 11, the sentences
that come out true in all models of the standard first-order semantics
do in fact owe their truth to nothing more than the meanings of the
connectives, the quantifiers, and the identity predicate (assuming that
we employ the usual cross-term restrictions and that all the usual
axioms of set theory are true). And there is a perfectly understandable
reason for this. But the reason is not that we have a reliable account of
logical truth and logical consequence, one whose extension is sure to
be correct. Rather it is due to a combination of the weakness of our
first-order language and the strength of our underlying set-theoretic
assumptions. The world, in effect, simply compensates for our faulty
analysis.
Let me give an analogy. Suppose we applied Tarski's account to a language containing names, truth-functional connectives, and the following three predicates: "is a man," "is a bachelor," and "is a senator." Suppose further that we included in 𝔉 all expressions except names. Thus, for example, we would equate the logical truth of
(11)  Leslie is a senator → Leslie is a man

with the ordinary truth or falsity of


(12)  ∀x[x is a senator → x is a man].

Similarly, the logical truth of


(13)  Leslie is a bachelor → Leslie is a man

would be tied to the ordinary truth of

(14)  ∀x[x is a bachelor → x is a man].

Notice that this application of Tarski's account would have a quite reasonable extension. Indeed, the only sentences that would be declared logically true are those, like (13), that are true solely by virtue
reasonable extension. Indeed, the only sentences that would be de
clared logically true are thoselike (13)that are true solely by virtue
of meaning. All others, for example (11), would not come out logically
true, thanks to the falsity of their corresponding generalizations. This
is because, given the way the world happens to be (that is, given the fact that there is at least one woman senator, and at least one married man, and at least one unmarried senator, and so forth), the only true
generalizations that we will encounter are those, like (14), that are true
by virtue of meaning alone. The world, or rather that very limited part
of the world describable in our language, is sufficiently varied that all
of the substantive generalizations, such as (12), happen to be false. Of
course, there is no necessity to this: there might not have been women
senators or married men; it just happens that there are.
Now, no one would suggest that the above account captures either
our ordinary notion of logical truth or our notion of analytic truth (if
these are different). After all, it is clear that the proper extension here
is not due either to a proper analysis or to a proper selection of fixed
terms. On the contrary, the world happens to be compensating for an
obviously incorrect account of logical truth. We might emphasize this
in a couple of ways. First, we might note that had the relevant facts
been otherwise (say, had all male senators been married, or had all senators been women), the account would have produced a large
number of faulty assessments. What stands in the way of an incorrect
assessment of, for example, (11) is not an adequate analysis but a
woman senator. Second, we might observe that if the expressive power
of our language were slightly increased (say, if we added the predicate "is president" or "is from New Jersey"), then once again the
extension of the account would be thrown completely off. Then, true
generalizations like (2) would bring out the error in our analysis:
(2)  ∀x[if x was president then x was a man].

This is precisely what is happening with the standard, interpretational semantics for first-order languages. The problem is not that the
account gets the wrong extension when applied to such languages.
Indeed, assuming all the standard axioms of Zermelo-Fraenkel set theory, the only true generalizations that we encounter are in fact logically true, and so too are their instances, by the closure principle (ii). But here again the world (that is, the set-theoretic universe) is
simply compensating for an incorrect analysis of the logical properties.
What stands in the way of a large number of faulty assessments is

simply the variety afforded us by the set-theoretic universe, or rather


the limited portion of that universe describable in our language.
We can emphasize this in the same two ways. First, we can note that
with different background assumptions (say, if we assume a finite rather than an infinite universe) the extension of the account is
clearly wrong. For then, sentences like (7) are declared logically true.
Second, we can observe that by simply increasing the expressive power
of our language (say, by adding second-order quantifiers) the ex
tension of the account is, once again, thrown completely off. Then,
generalizations like (10) are what bring out the failure of the analysis.
Now, it is important to see that in this case, as in the previous one, we
are not guaranteed a correct extension either by Tarski's general analy
sis or by our particular selection of fixed terms. What makes for the
correct extension are such things as the existence of an infinite number
of objects, the assumed distinguishability of those objects, the existence
of transitive, irreflexive relations with and without minimal elements,
and so forth. The reason it is crucial to recognize this is that our
attention is so easily drawn away from the real defect in the account, namely, the reduction principle, however modified, and toward the supposed issue of how to go about making the proper selection of 𝔉. The assumption is that the reason Tarski's account works in this case
must be some special characteristic of the expressions we have held
fixed, and that the reason it fails in other cases is that we have held
fixed expressions without that peculiar characteristic. This is the
source of the so-called problem of the logical constants.
It is understandable how this issue comes to seem the key point. On
the one hand, it is perfectly clear that Tarski's general account (that is, when we allow arbitrary selections of 𝔉) does not capture our ordinary notion of logical truth. This is apparent both from the faulty assessments it yields, and from simple reflection on the unmodified principle (iii). On the other hand, it also seems that when we include in 𝔉 only first-order quantifiers, truth functional connectives, and the identity predicate, the definition produces a plausible extension, modulo our background assumptions about the universe, and the
special treatment given the quantifiers. From this it is all too easy to
conclude that the analysis is basically correct, but in need of a sup
plement. What is missing, we assume, is an account of the distinctive
characteristic that makes it right to hold fixed such expressions as
truth functional connectives, but wrong to hold fixed names, pre
dicates, and (according to some) second-order quantifiers.
Now, of course, for any finite number of expressions, it will always
be possible to find properties shared by all and only those expressions.
At the very least, we can simply list the expressions and take the

property of being a member of that list. But when we see our goal as
that of supplementing Tarski's analysis, it seems clear that not just any property will do. The reason for this is simple. Since Tarski's general
account captures none of the modal, epistemic, or semantic character
istics of logical truth and logical consequence, it seems that these
characteristics must somehow emerge from the sought-after sup
plement, from our account of what makes certain expressions genu
inely logical and others not. It would hardly do, for example, to add
the injunction to hold fixed only words spelled with fewer than four
letters, even if the injunction seemed to work. Such a supplement
would not make up for what is missing from the general account. It
would hardly explain why logical truths are, or are commonly thought
to be, necessary or a priori or true solely by virtue of meaning. It would
hardly persuade us that the account can be relied on to make the right
assessments.
This is why the task of characterizing the logical constants comes to
seem at once so important and yet so difficult. Indeed, most of the
burden of Tarski's analysis seems to shift to exactly this issue. But by now it should be clear that the issue is based on a confusion, namely, the assumption that when the account works, it works due to some peculiar property of the expressions included in 𝔉. But this assump
tion is false: there is no property of expressions that guarantees the
right extension in these cases, none whatsoever. After all, any property
that distinguishes, say, the truth functional connectives from names
and predicates would still distinguish these expressions if the universe
were finite. But in that eventuality, Tarski's account would be extensionally incorrect. This observation alone is enough to show that it is not
any property of the expressions we hold fixed, the so-called logical
constants, that accounts for the occasional success of Tarski's defini
tions.
Here our earlier analogy will help drive the point home. Imagine
that our goal is to explain why Tarski's account produces a plausible extension when we hold fixed "is a senator," "is a man," and "is a bachelor," but not when we also hold fixed "is president." It would clearly be
misguided to look for our explanation in some characteristic of ex
pressions that distinguishes the former terms from the latter. Cer
tainly, it would be easy to find a variety of properties that distinguish
these expressions; at worst, we could appeal to a list. But we will not
find any property guaranteeing our success when we hold fixed the
first expressions while explaining our failure when we hold fixed the
second. For what accounts for that difference is not a property of
expressions at all, but simply characteristics of the world, for exam
ple, the fact that there is a woman senator, but not a woman president.

Looking for such a property in the standard, first-order case is equally


misguided.
The problem of the logical constants is a red herring. It has drawn
our attention away from the real defect in Tarski's analysis in pursuit of a criterion that will never be found. When Tarski's definition works, it works for a very simple reason, one that has nothing to do with any special characteristics of the expressions in 𝔉. The reason is this. For any given language and any given selection of 𝔉, the account associates
a particular universal closure with each sentence of the language.
Among the associated generalizations, we will find claims of various
sorts. Some of the generalizations will themselves be analytic or logical
truths, and these closures will naturally be true. Others will be logically
false. But still others will make substantive claims about the world,
claims that may or may not be true.
[Diagram: the associated closures arrayed along a line, from the logically false, through the substantive generalizations, to the logically true.]

For a given language and a given selection of 𝔉, the question of whether the output of Tarski's account will seem reasonable comes
down to the question of whether any of these intermediate, substantive
claims happen to be true, or whether, in contrast, the only true clo
sures are also logically true. And this will just depend on the world, on
exactly the substantive issues expressed by the generalizations. It
might depend on whether there are any woman senators, or on
whether there are any transitive, irreflexive relations without minimal
elements. It might depend on whether the universe is finite or on
whether there are uncountable cardinals smaller than the continuum.
These are the sorts of substantive claims that will appear among the
associated closures.
[Diagrams: two versions of the same array of associated closures, each marked to show which of the closures come out true and which come out false.]

Now, in some cases we may be fortunate: if the relevant portions of


the world are sufficiently varied, if none of the substantive generaliza
tions come out true, then the account will not issue any faulty declara
tions of logical truth. For the only sentences that will be judged logi
cally true will then be instances of generalizations that are logically true,
and these instances will, by the closure principle (**), be logically true as
well. In these cases, but only in these cases, Tarskis definition will yield
a reasonable assortment of logical truths. But we succeed here not
because principle (Hi) is correct, however modified, or because we have
chosen the right logical constants. Our success is due to principle (ii)
and simple good fortune.
Extensional Adequacy
When we apply Tarski's account to an arbitrary language, there is no way to guarantee that it will be extensionally correct. No matter how tightly we constrain the selection of 𝔉, the output of the definition will still depend on the truth values of various substantive, nonlogical claims. If any of these turn out to be true, then their instances will mistakenly be declared logically true. And that, of course, is beyond our control: we can decide which expressions to include in 𝔉, but we
can hardly just decide that the resulting, substantive generalizations
must all be false. That decision is not up to us, but up to the world, to
the historical or physical or mathematical matters described by the
generalizations.
We cannot guarantee, antecedently, so to speak, that a given appli
cation of Tarski's definition will not overgenerate, that it will not
declare sentences logically true and arguments logically valid that in
fact are not. To be sure, in the paradigmatic, first-order case, I have
already alleged that the account does not, as a matter of fact, over
generate. But it is important to see that this is by no means obvious,
and certainly does not follow from anything we have said so far. In
Chapter 8, we surveyed a few of the more straightforward generaliza
tions on which the extension of this application depends. Those partic
ular claims happened to be false, given the set-theoretic assumptions
we standardly make. But of course the extension of the account de
pends on infinitely many such substantive generalizations, most far
more complex than those we bothered to look at. Nothing we have
seen so far precludes one of these coming out true, perhaps thanks to
some complex but universal characteristic of sets, or due to a subtle
and surprising algebraic fact akin to Wedderburn's theorem. And if
any of these generalizations happen to be true, then even this paradig
matic application of the model-theoretic account will have the wrong

extension. Clearly, if we think its extensional adequacy is somehow


obvious, even assuming the usual axioms of set theory, we are simply
fooling ourselves.
The claim that the standard semantics for first-order languages does
not overgenerate requires an external justification. It does not follow
from any characteristic of Tarski's definition, or of the expressions
held fixed, or of the language itself. So far, the only evidence for this
claim that we can point to is, to use Hilbert's term, experimental: we have
not yet run into any examples of sentences declared logically true or
arguments declared logically valid that in fact are not. Better evidence
would be a proof showing that, in this particular case, the only true
closures are in fact logically true. How we can get such a proof is a topic
we will take up later. At the moment, though, our concern is more
general than the first-order case.
What can we say, in a general vein, about the extension of Tarski's
account? First of all, we can say exactly why an application of the
account overgenerates, when it does. As I have pointed out, the ac
count can make this mistake only when it associates a substantive,
nonlogical generalization with some sentence, and when that general
ization turns out to be true. Clearly, it will not make the faulty as
sessment if the generalization is false, for then the sentence will not be
judged logically true. And of course if the associated generalization
does not make an extralogical claim, if it is itself a logical truth, then its
instance will be as well. But the fact that a generalization contains only
expressions of a traditionally logical sort in no way precludes its
making a substantive, nonlogical claim: this is, demonstrably, a myth,
one that fails already with the truth functional connectives.
The second thing we can say is that certain applications of the
account are guaranteed to overgenerate, and so guaranteed to have
the wrong extension. With certain languages, and certain selections of 𝔉, we will find among the associated generalizations both a substantive
claim and its negation (or some other claim equivalent to its negation).
This is what happens, in the first-order case, when we hold fixed the
interpretation of both the identity predicate and the quantifiers. For
then we find the following sentences among the relevant closures:
[∃x∃y(x ≠ y)]
[¬∃x∃y(x ≠ y)]
Since both of these make substantive claims, the account will over
generate if either comes out true. But since one or the other of them
must be true, the account is sure to make a faulty assessment. When we

vary the interpretation of ∃, we replace these closures with the follow


ing two:
∀E[ExEy(x ≠ y)]
∀E[¬ExEy(x ≠ y)]

These still make substantive claims, but since they are not straightfor
ward negations of each other, the account is no longer sure to fail. If
both turn out to be false, as I trust they do, then neither of their
instances will be wrongly accused of logical truth.
As we move to increasingly powerful languages, this problem be
comes harder to avoid. For example, the situation crops up with full,
second-order languages whether or not we vary the domain of quanti
fication. Thus, it turns out that here we find among the associated
generalizations not only sentences equivalent to the continuum hy
pothesis, but also sentences equivalent to its negation.1 Consequently, whichever way the hypothesis goes, this application of Tarski's defini
tion will overgenerate, declaring some sentences logically true because
of their true, but not logically true, closures. This is hardly surprising,
since as we increase the expressive capacity of the members of 𝔉, we increase the likelihood that some substantive claim and its negation
(or a claim equivalent to its negation) will be among the generalizations
determining the extension of the account.
Both of these remarks have to do with the problem of overgene
ration. What can we say about the complementary problem of un
dergeneration? Is it possible for Tarski's definition to judge valid
arguments invalid, or to judge sentences not logically true when in fact
they are? The answer, of course, is yes. But this can happen only when
the logical truth of the sentences, or the validity of the arguments,
depends essentially on the meanings of one or more expressions not
included in ℱ. To repeat a trivial example, if the interpretation of 'or' is
not held fixed, then the logical truth
Lincoln was president or Lincoln was not president
will not be so declared.
Though this example is trivial, the general problem can hardly be
shrugged off. If our goal is to study the logical properties of a given
language, the only way to ensure that Tarski's definition will not
undergenerate is to include every expression in ℱ. But as soon as we do
this we are sure to encounter the opposite problem, that of overgeneration.
Suppose, for example, that our target language is (or includes)
the language of elementary arithmetic. When we apply the standard,
interpretational semantics to this language, our specification of the

logical properties falls short of their genuine extension, as Tarski
himself, indirectly, pointed out. For example, instances of the ω-rule
(even perfectly effective versions of it) are wrongly judged invalid. Of
course, it is easy to see why these instances are not declared valid (their
validity depends on the meanings of various expressions not held
fixed), but this does not give us a solution to our problem, namely,
specifying the genuine extent of the consequence relation for the
given language. If this is our goal, Tarski's account does not, in gen
eral, allow us to steer a course between the complementary hazards of
over- and undergeneration.
This is not to say that an application of the model-theoretic account
can never get the extension exactly right. This can happen if, first of
all, no valid arguments expressible in the language depend for their
validity on any expressions not included in ℱ, and if, in addition, all the
associated generalizations that determine the extension of the account
are either false or, if true, logically true. These two circumstances will
sometimes come together. They come together, for instance, when we
apply the account to the simple language of Chapter 3, holding fixed
only the truth functional connectives. It happens that no valid argu
ments expressible in this language depend on the specific meanings of
'Lincoln', 'Washington', 'was president', and 'had a beard'. This might
not have been the case had we included other predicates as well,
perhaps 'was an elected official' or 'sported unshorn facial hair', or
had we included expressions of other semantic categories, such as
adjectives and adverbs. But we did not. It also happens that here the
only true generalizations are indeed logically true. Of course, many of
the relevant closures make substantive claims, for example (9), but
these turn out to be false.
(9)

∀x∀y∀P[x P → y P]

These two circumstances do not come together, however, either in the


language of elementary arithmetic (where we run into the first prob
lem) or in languages with higher-order quantifiers (where we run into
the second).
What is important to recognize is that there is no reason to expect,
for an arbitrary language, that there will be any single selection of ℱ
that gets the extension exactly right, that neither overgenerates nor
undergenerates. The reason for this should, by now, be clear. To avoid
undergeneration we cannot exclude from ℱ any expression whose
meaning plays an essential role in any valid arguments expressible in
the language. (This will depend not only on the expression itself, but
on the entire vocabulary of the language.) But this general characteristic
of expressions (figuring essentially into valid arguments) has

nothing to do with the question of whether substantive claims can be


made employing just those expressions plus an initial string of quanti
fiers. And it has even less to do with whether those substantive claims
turn out to be true, in other words, with whether the account will, on
that selection of ℱ, overgenerate.
Clearly there can, and often will, be an irreconcilable tension be
tween our two goals here, the one inclining us toward a more inclusive
ℱ, the other pushing in the opposite direction. Indeed, it is a trivial
exercise to construct simple languages in which no selection of logical
constants can possibly yield the right extension. For example, sup
pose we supplement the language of Chapter 3 with a binary connec
tive, 'O', with the following semantics:
'p O q' is true iff Lincoln had a beard and either p is true or q is
true.
When we do this, we end up with a language in which every sentence is
logically equivalent to some sentence of our original language. Yet it is
easy to show (and not surprising in the least) that no selection of ℱ
yields a set of logical truths containing exactly the right sentences,
that is, those equivalent to logical truths of the original language. If we
include 'O' in ℱ, the account overgenerates; if we exclude it, the account
undergenerates.
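To see the dilemma in miniature, here is an illustrative sketch (my own construction, not the text's; the test sentence 'p → (p O q)' is likewise a hypothetical example). With 'O' held fixed, the account's verdict on that sentence simply tracks the extralogical fact about Lincoln's beard:

```python
from itertools import product

# 'p O q' is true iff Lincoln had a beard and either p is true or q is true.
# The beard fact is a fact about the world, so it stays put while we vary
# the interpretation (here, just the truth values) of the atomic sentences.
LINCOLN_HAD_A_BEARD = True   # the substantive, extralogical fact

def O(p, q):
    return LINCOLN_HAD_A_BEARD and (p or q)

def declared_logically_true():
    # Truth of 'p -> (p O q)' under every reinterpretation of p and q,
    # which is what the account requires when 'O' is among the fixed terms.
    return all((not p) or O(p, q) for p, q in product([True, False], repeat=2))

print(declared_logically_true())
```

With the beard fact set to True this prints True, so 'p → (p O q)' is declared logically true, though its truth rides on a claim about Lincoln; set the fact to False and the declaration is withheld. Exclude 'O' instead, and genuinely valid inferences that turn on its meaning (for instance, from 'p O q' to 'Lincoln had a beard') are no longer ratified.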
Of course, there is no reason to resort to such fabricated examples to
make this point. Indeed, it is precisely this tension that underlies the
need for cross-term restrictions in the case of languages with quantifi
ers. As we saw in Chapter 5, there is no way to include in ℱ all of the
expressions that figure essentially into the logically valid arguments
expressible in standard first-order languages, without also declaring
many sentences logically true (or many arguments logically valid) that
in fact are not. To combat this we are forced to exclude certain expres
sions from ℱ even though they do figure essentially into valid argu
ments, but then we must so restrict their interpretations that these
valid arguments are still so declared. The same tension arises, but is
not so easily solved, in the case of higher-order logics, or in any
first-order language with a rich stock of logically related predicate or
function expressions.
Clearly, when we do not avail ourselves of cross-term restrictions,
Tarski's definition fails more often than not: only with the very simplest
languages will there be a selection of ℱ that neither overgenerates
nor undergenerates. Not surprisingly, when we allow arbitrary cross-term
restrictions, our chances of getting an extensionally adequate
specification of the logical properties improve considerably. But re
gardless of whether such restrictions are imposed, and regardless of

how the restrictions are chosen, extensional adequacy is by no means
automatic. For the assessments made by the account will still depend
on the outcome of a wide range of substantive, extralogical claims.
Until these are all shown to be false, there is no way to be sure that the
account's positive assessments are not in error. The question of
whether the account's negative assessments are in error, of whether it
undergenerates, can be even less straightforward. Roughly speaking,
this will depend on whether all of the logically relevant relations
among expressions are captured, either because the expressions are
included in ℱ, and so their interpretations never varied, or because of
cross-term restrictions that constrain the permissible variations. I re
turn to this question in Chapter 11.

10
Logic from the Metatheory

Various characteristics distinguish logical truths from common, run-of-the-mill truths, and logically valid arguments from those that hap
pen to have a false premise or a true conclusion. But Tarski's analysis
does not capture any of these characteristics, regardless of how tightly
we constrain the selection of ℱ. Furthermore, we are not even guaran
teed that the definition will be extensionally correct when applied to a
given language, not even in the paradigmatic, first-order case. What,
then, makes Tarskis account seem so persuasive? Why has it received
such widespread acceptance?
No doubt to some extent, this acceptance is due to the conflation
already noted between representational and interpretational seman
tics. And perhaps it is partly due to Tarskis fallacy, in its various
versions. But there is a more subtle reason the account seems so
persuasive, one that I suspect has been by far the most influential. The
reason is this. In its standard application to simple first-order sen
tences, Tarskis account is capable of entirely persuading us both that a
sentence which passes the test is indeed logically true, and that one
which does not pass the test is not logically true. In other words, in this
particular case the account seems capable of convincing us of the
genuine logical status of individual sentences to which it is applied.
Faced with this fact, it is hard not to assume that, one way or another,
the account must surely be getting at some essential feature of our
ordinary notion of logical truth. How else would we be convinced of
the correctness of its individual assessments?
To understand what is going on here, we need to review two impor
tant points. The first is that Tarski's account does provide a necessary
(but not sufficient) condition for the relativized notion of logical truth.
That is, if a sentence S is true solely by virtue of the meanings of the

members of ℱ (or expresses a necessary truth thanks to the members
of ℱ, and so on), then the universal generalization the account associates
with S must indeed be true. And this holds no matter what expres
sions are kept fixed. Thus, the simple truth of the closure
[Abe Lincoln was president]
is indeed a necessary, but not sufficient, condition for the logical
(necessary, a priori, analytic) truth of
Abe Lincoln was president.
This is just to say that this sentence cannot be logically true unless it is
true. Less trivially, the simple truth of the closure
∀x[If x is a bachelor then x is a man]
is again a necessary, but not sufficient, condition for the logical truth
(with respect to 'if . . . then', 'is a bachelor', 'is a man') of the sentence
If Leslie is a bachelor then Leslie is a man.
Here, if there is a counterexample to our generalization, then the
latter sentence either is not logically true or, if it is, this fact must
somehow depend on the specific meaning of the name Leslie. To see
that it is not also a sufficient condition, we need only consider such
generalizations as
∀x[If x is a man then x is a bachelor].
This generalization would be true if all men were bachelors. But of
course its instances would not, even then, be logically true. Finally, the
simple truth of the closure
∀x∀P[x P or not x P]
is a necessary, but not sufficient, condition for the logical truth (with
respect to 'or' and 'not') of the sentence
Leslie was president or Leslie was not president.
Again, a counterexample to the above generalization would show that
our sentence is not true simply by virtue of the meanings of 'or' and
'not'. But again, the condition is not sufficient, as witness such general
izations as
∀x∀y∀P[x P or not y P].
This generalization would be true if the universe contained only one
(type of) object. But of course its instances would not, even then, be
logically true.

The significance of this first point should be apparent. When we


show that a sentence fails Tarski's test (for any selection of ℱ), then we
have genuinely shown that the sentence is not logically true with
respect to the expressions held fixed. Thus, in the standard semantics,
when we produce an interpretation of the names and predicates of our
first-order language that falsifies a given sentence, we can rest assured
that the sentence is not true solely by virtue of the meanings of the
traditional logical expressions, those we kept fixed in the process.
The problem with Tarskis account is that the mere absence of such an
interpretation, or, alternatively, the mere truth of the associated gen
eralization, cannot guarantee that our sentence is logically true. Simi
larly, if we can find an interpretation in which the premises of a given
argument are true but the conclusion false, then we have genuinely
shown that the conclusion does not follow solely by virtue of the
expressions held fixed. But the absence of such an interpretation does
not guarantee that it does so follow.
Given this, it is perfectly understandable how the account persuades
us that a sentence is not logically true or that an argument is not
logically valid, at least with respect to the expressions held fixed.
Indeed, here we are simply relying on a traditional technique em
ployed in independence proofs, familiar from the axiomatic method.
What needs explanation, then, is how Tarskis account can possibly
persuade us that a sentence is logically true. And it frequently seems to
do just that.
Here is where the second point comes in. As we have seen, it some
times happens, with a given language and a given selection of fixed
terms, that the only generalizations which come out true are those that
are, themselves, logically true. In such situations the account does not
mistakenly dub any sentence logically true. But this is simply because
the variety of the world compensates for our faulty analysis by falsify
ing the other generalizations. It is not because what the analysis actu
ally requires, the mere truth of the associated generalizations, in any
way guarantees the logical truth of their instances.
This is all well and good. But when we apply Tarskis account to an
individual, first-order sentence, say,
(15)

Lincoln was president or Lincoln was not president

it strikes us that, somehow or other, the logical truth of this sentence


has been quite clearly and unequivocally demonstrated. The question
is why we would have this impression if Tarskis account cannot pro
vide such a guarantee. Where would the assurance come from? There
is a perfectly straightforward answer, and it has nothing to do with the
adequacy of Tarskis account. Rather, it has to do with the adequacy,

albeit trivial adequacy, of any account that replaces the reduction prin
ciple (iii) with the closure principle (ii):
(ii)

If a universally quantified sentence is logically true, then all of


its instances are logically true as well.

As we will see, principle (ii) is where we derive the sensed guarantee.
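Schematically, writing Tr for plain truth and LTr for logical truth (the notation is mine, not the text's), the contrast between the reduction principle (iii) and the closure principle (ii) is just a matter of where 'logically' sits:

```latex
\begin{aligned}
&\text{(iii)}\quad Tr\bigl(\forall v_1\ldots\forall v_n[\,S\,]\bigr)\ \Longrightarrow\ LTr(S')\ \text{for every instance } S'\\
&\text{(ii)}\quad\;\, LTr\bigl(\forall v_1\ldots\forall v_n[\,S\,]\bigr)\ \Longrightarrow\ LTr(S')\ \text{for every instance } S'
\end{aligned}
```

Principle (iii) asks only that the generalization be true; principle (ii) asks that it be logically true.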


Carnap's Observation
Let us start with a simple observation. It seems clear that if the truth of
a sentence follows logically from the recursive definition of truth for
the language in which it occurs, then that sentence must be logically
true. For if the sentence expressed a historical or physical or set-theoretic claim, we would need some historical or physical or set-theoretic premises in addition to the bare characterization of truth in
order to establish that the claim is in fact true. But if its truth simply
follows from the semantic properties of such expressions as 'or', 'not',
and 'all', the properties we characterize when we specify the truth
conditions of our sentences, then the sentence must surely be a logical
truth.
This is a common observation, one appealed to repeatedly, for
example, in various of Carnap's later works. Indeed, we might call it
Carnap's observation, just to give it a name.
Carnap's observation: If the truth of a sentence is a logical consequence
of the definition of truth for the language in which it occurs,
then that sentence is logically true.
Of course, since Carnaps observation presupposes the notion of logi
cal consequence, it will never yield a general account of the logical
properties of the sort Tarski hoped to achieve. But the observation
seems right nonethelessperhaps not tremendously significant, but
in accord with our ordinary understanding of logical truth.
Now consider how Carnap's observation, along with principle (ii),
can provide the assurance that principle (iii) cannot. Clearly, if it is
possible to show, without appeal to any extralogical premises, that
the truth of a sentence of the form
∀v₁ . . . ∀vₙ[ S′ ]
is a purely logical consequence of the semantic clauses in our definition
of truth, then we can rest assured that this generalization is logically
true. This is simply by virtue of Carnap's observation, not due to any
significant account of logical truth. And naturally, by principle (ii) we
are then assured that its instances are logically true as well. This intu-

itive assurance does not arise from any general account of logical
truth, certainly not Tarskis, but just from our plausible observation
plus the unexceptional principle (ii).
When we apply Tarski's account to a sentence like (15), what convinces
us that this sentence really is a logical truth? The key lies in the
way we show that all sequences satisfy the sentential function 'x P or not
x P'. Our reasoning here takes the following line. First we note that
For any f, either f satisfies 'x P' or f does not satisfy 'x P'.
This is just an elementary logical truth of the metatheory. But it
follows from this logical truth, in tandem with our clause for 'not' in
the definition of satisfaction, that
For any f, either f satisfies 'x P' or f satisfies 'not x P'.
Finally, given our recursive clause for 'or', we have as an immediate
consequence that
For any f, f satisfies 'x P or not x P'.
This, of course, is what had to be shown in order for Tarski's account
to issue a declaration of logical truth (with respect to 'or' and 'not') for
sentence (15). That is, it is precisely the observation we need in order to
demonstrate the truth of the associated closure
(16)

∀x∀P[x P or not x P].
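Put schematically (my notation, writing f ⊨ φ for 'f satisfies φ'; this merely restates the three steps just given):

```latex
\begin{array}{ll}
\forall f\,\bigl(f \vDash \text{`}x\,P\text{'} \ \lor\ f \nvDash \text{`}x\,P\text{'}\bigr) & \text{logic of the metatheory}\\[2pt]
\forall f\,\bigl(f \vDash \text{`}x\,P\text{'} \ \lor\ f \vDash \text{`not }x\,P\text{'}\bigr) & \text{clause for `not'}\\[2pt]
\forall f\ \ f \vDash \text{`}x\,P\text{ or not }x\,P\text{'} & \text{clause for `or'}
\end{array}
```

Nothing beyond the satisfaction clauses and the logic of the metatheory is invoked at any step.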

The important point is that in the process of showing that all sequences
satisfy 'x P or not x P' we did not have to appeal to any
intuitively empirical facts (say, Lincoln's presidency or the size of the
universe), or even to any set-theoretic claims (say, the pair-set axiom or
the continuum hypothesis). Indeed, we do not even have to know what
a sequence is in order to carry out our demonstration: all that is
required are the semantic clauses and the logic of the metatheory. Of
course, this would not always be the case, and certainly is not a require
ment of Tarski's analysis. Thus, if the universe had fewer than n
objects, all sequences would also satisfy the sentential function
¬∃x₁ . . . ∃xₙ(x₁ ≠ x₂ ∧ . . . ∧ xₙ₋₁ ≠ xₙ)
but to show this we would have to go out and count the existing objects;
logic, plus our definition of satisfaction, would no longer suffice. Simi
lar appeals to extralogical assumptions would be required in all the
other examples we have discussed.
Now, the fact that the above demonstration requires no appeal
external to the semantics of the language and the logic of the meta
theory provides us with a genuine assurance, quite independent of

Tarski's account, of the logical truth of the associated universal closure
(16). This is just Carnap's observation. And this independent assur
ance, linked with principle (ii), is where we find our guarantee of the
logical truth of (15). The sensed guarantee could not flow from the
faulty principle (iii), and conversely should not be thought to lend it, or
Tarski's analysis, any plausibility. For we are not assured that this
sentence is logically true because the associated generalization is
true, that is, by virtue of the sentence's satisfaction of Tarski's definition.
This is clear from the fact that our assurance would instantly
evaporate if, in establishing the truth of (16), there were an essential
appeal to some extralogical fact, say, the fact that all past presidents
have been men, or the fact that the universe has more than twenty-seven objects. Rather, we are guaranteed that the original sentence is
logically true thanks to our assurance that the corresponding closure is
itself logically true.
In the first-order case, as in any application of Tarskis account, the
output of the definition depends on a large number of substantive,
nonlogical facts. But this particular application is one of those fortu
nate cases: the only true generalizations turn out to be those that are,
themselves, logically true. This does not mean that we do not appeal to
substantive facts when applying the definition to first-order sentences
and arguments. What it does mean, though, is that the appeals do not
arise while showing that a sentence is logically true or that an argument
is logically valid. These demonstrations require only the logic of the
metatheory and the semantics of our language, and this is why we find
them persuasive.
Where the appeals come into play is in demonstrating the con
verse, for instance when we show, say, that the sentence
ψ: ¬∀y∃x(x is taller than y)
is not a consequence of the sentences
α: ∀x∀y∀z(x is taller than y ∧ y is taller than z
→ x is taller than z)
β: ∀x¬(x is taller than x).
Here we rely on (among other things) the axiom of infinity in con
structing an interpretation that satisfies the latter without satisfying
the former. But in such demonstrations our reliance on extralogical
facts does not diminish our persuasion, any more than it does when we
point to a woman senator as proof that (11) is not a logical truth.
(11)

Leslie is a senator → Leslie is a man.
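The role of the infinite domain can be made vivid with a small brute-force check (an illustrative sketch of my own, not part of the text). Over any finite domain an irreflexive, transitive relation has a maximal element, so every finite interpretation of 'is taller than' satisfying α and β also satisfies ψ; the check below confirms this for a three-element domain:

```python
from itertools import product

D = range(3)  # a three-element domain
pairs = [(x, y) for x in D for y in D]

def transitive(R):
    return all((x, z) in R for x, y1 in R for y2, z in R if y1 == y2)

def irreflexive(R):
    return all((x, x) not in R for x in D)

def psi(R):  # not every y has some x with (x, y) in R, i.e. 'x is taller than y'
    return not all(any((x, y) in R for x in D) for y in D)

ok = all(psi(R)
         for bits in product([0, 1], repeat=len(pairs))
         for R in [{p for p, b in zip(pairs, bits) if b}]
         if transitive(R) and irreflexive(R))
print(ok)  # True: no counterexample among the 512 relations on this domain
```

So the counter-interpretation the text appeals to, for example the integers ordered by 'greater than', must have an infinite domain; that is where the axiom of infinity earns its keep.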

In these cases, the existence of the counterexample, however obtained,


is sufficient for the purpose at hand. But once again, the mere absence
of such counterexamples does not suffice to show the converse.
The most persuasive feature of Tarskis account, or, equivalently, of
interpretational semantics, is its capacity to convince us of the logical
status of simple, first-order sentences to which it is applied. And half of
that capacity is genuine: since the account employs a necessary condi
tion for the relativized notion of logical truth, the fact that a sentence
fails Tarski's test really does show that it is not logically true, relative,
at any rate, to the particular expressions held fixed. But the other half
is just illusion. For what convinces us that a sentence is logically true is
not the fact that its associated closure happens to be true, or that there
happen to be no interpretations in which the sentence comes out false.
What convinces us is the fact that the absence of such interpretations
can be shown purely on the basis of the semantic rules of the first-order
language, those embodied in our definitions of satisfaction and truth.
This is what assures us that the sentence in question is logically true.
But the credit for this assurance does not go to Tarskis account of
logical truth. It belongs instead to principle (ii) and the logic of the
metatheory.
Strengthening the Account
Tarskis definition of logical truth is based on a faulty principle, the
principle that the instances of a true universal generalization are not
simply true, but logically true. Unmodified, this principle is obviously
wrong: any true sentence is, if only trivially, an instance of a true
universal generalization, namely itself, and so the requirement built
into the principle cannot possibly distinguish logical truths from ordi
nary, run-of-the-mill truths. But what is important to see is that the
principle does not gain any plausibility when the generalization is less
trivial, or if we require that the generalization contain only the tradi
tional logical expressions. We still have no reason to expect its
instances to be logically true.
The definition does occasionally get things right, though. Specifi
cally, it gets things right in precisely those cases where all the substan
tive generalizations associated with sentences happen to be false.
Then, the only positive assessments made by Tarskis account will be
directed toward instances of logically true generalizations, and these
assessments will of course be correct. This is not to say that it will be
obvious when we have such a fortuitous application: as I have empha
sized, we have not yet seen any firm evidence that the account does not
overgenerate even in the paradigmatic, first-order case. Still, in these

cases our positive assessments can generate conviction; we may feel


assured that individual sentences that pass Tarskis test really are logi
cally true. And this is what misleads us. For the felt conviction issues
not from the fact that these sentences pass the test, but from the way they
pass it: in these cases, the truth of the relevant generalization follows
logically from the semantic rules of the language, without appeal to
intuitively extralogical facts. From this we can conclude that the gener
alization, and so its instances, really are logically true.
Let me conclude by noting two lessons to be learned from the fact
that Tarskis account succeeds and fails exactly where it does: a lesson
about what would be required to correct the account, and a related
lesson about the futility of trying. The only reliable way to avoid the
problem of overgeneration would be to incorporate some general
guarantee that the truth of substantive, extralogical claimssay, those
of a historical, physical, or mathematical sortcould not influence its
output. If we could do this, the account would no longer be based on
the faulty principle (iii), but instead on some principle along these
lines:
If a universal generalization is true, but does not make a substan
tive claim, then all of its instances are logically true.
This new principle seems basically right. Indeed, it seems right be
cause it is nothing more than a vague restatement of principle (ii):
(ii)

If a universal generalization is logically true, then all of its


instances are logically true as well.

This, of course, brings us to the second lesson. The only hope for
coming up with an improved version of Tarski's account, a version
guaranteed to produce correct results, is in effect to replace principle
(iii) with principle (ii). But once we recognize this, the futility of the
project becomes apparent. For in order to use principle (ii), we first
need an account of what it means for the generalization mentioned in
the antecedent to be logically true, for its truth not to be a historical or
physical or mathematical matter. But if we already had such a charac
terization of logical truth, the remainder of our new, improved
account (that is, the part left over from Tarski's original
definition) would be completely unnecessary. So correcting the de
fect in the account turns out to be precisely equivalent to solving the
original problem de novo. An adequate analysis of logical truth will not
be found by modifying Tarskis reduction principle.

11
Completeness and Soundness

The reduction principle is the linchpin of the model-theoretic account


of consequence. If some version of it were correct, the account would
certainly deserve the esteem in which it is held. But we have seen that
this principle is irremediably flawed, and that, as a result, we have no
guarantee that an application of the account to any particular lan
guage will be extensionally correct. The definition can both over
generate and undergenerate, declaring arguments logically valid that
in fact are not, and declaring them not when they actually are.
In the case of first-order languages, the completeness and
soundness theorems have been taken as providing a proof that a
particular deductive calculus correctly characterizes the logical conse
quence relation for these languages. The soundness theorem is tradi
tionally viewed as showing that the calculus does not overgenerate,
that whenever a sentence S is derivable from a set K of assumptions, S
does indeed follow logically from K. Conversely, the completeness
theorem is thought to show that the calculus does not undergenerate,
that if S follows logically from K, then there is a proof of this fact within
the calculus. The goal of this chapter is to reconcile the morals of the
preceding chapters with the intuitions at work here.
It is clear that the traditional construal of completeness and
soundness can no longer be maintained. If the model-theoretic analy
sis can overgenerate, a soundness theorem by itself does not guarantee
the soundness of the deductive calculus in question. Just so, if the
model theory can undergenerate, a completeness theorem does not,
by itself, guarantee the completeness of the deductive system. The
usual interpretation of these theorems clearly presupposes the extensional adequacy of the model-theoretic account of consequence, a
presumption that is simply unjustified.

We have seen that the model-theoretic account will get things right
in one set of circumstances. If, first of all, none of the substantive
generalizations on which the output of the account depends turns out
to be true, then the definition will not overgenerate. Second, if none of
the valid arguments expressible in the language depend for their
validity on expressions whose interpretations we vary, then the defini
tion will not undergenerate, either. However, a second's thought
shows that the relationship between these principles and the
soundness and completeness theorems is far from straightforward.
What work are the soundness and completeness theorems doing?
Do they in fact guarantee anything at all about the intuitive notions of
logical truth and logical consequence? To answer these questions I will
make a slight detour. I am not the first person to raise questions about
the significance of the completeness theorem for first-order logic. In a
well-known article entitled "Informal Rigour and Completeness
Proofs," Kreisel distinguishes between what he calls intuitive validity
and the model-theoretic notion of truth in all set-theoretic structures.
This distinction leads Kreisel to an alternative view of the significance
of the completeness theorem. Although Kreisel's starting point is
incorrect, for reasons that will become clear, his strategy is one we will
find useful in our own reconciliation.
Kreisel's Observation
The main thrust of Kreisel's article is to emphasize that we can prove
rigorous results about informal notions, a contention with which I
wholeheartedly agree. As a case study, he considers the intuitive no
tions of logical validity (what I have been calling logical truth) and
logical consequence. Kreisel's aim is to show that, in the case of first-order logic, we can rigorously establish that the intuitive notion of
validity, which he abbreviates as Val, is extensionally equivalent to the
set-theoretic definition standardly given.
The definition that Kreisel has in mind (which he denotes by V) is
that a sentence has property V just in case it is true in all models (or
structures, as Kreisel prefers to call them), where the domain of quantifi
cation is a set in the cumulative hierarchy. Kreisel's worry is that this does
not correspond exactly to the notion Val. As he expresses the problem:
The intuitive meaning of Val differs from that of V in one particular: V(α)
(merely) asserts that α is true in all structures in the cumulative hierarchy,
. . . while Val(α) asserts that α is true in all structures. (1969, p. 90)

To drive home the difference between these two notions, Kreisel


considers a sentence α in the language of set theory. Intuitively, it

seems that if Val(α) then α must be true as a statement about the
cumulative hierarchy, that is, where the domain of quantification is
the collection of all sets. But V(α) assures us only that it must be true in
all set-theoretic structures. But the cumulative hierarchy itself is too
big to be among these structures. Because of this, Kreisel rightly
points out, it is not at all clear that sentences true in all set-theoretic
structures will be true in all structures.
Kreisel's main point is that, insofar as V takes into consideration
fewer structures than Val, it cannot be a trivial matter to go from V(α)
to Val(α). His use of a set-theoretic example introduces an additional
point, though. For in this case, it turns out that the intended interpreta
tion of the language is among the structures canvassed by Val, but not
among those canvassed by V. Of course, as I have emphasized several
times, one of Tarski's key requirements is that the intended interpretation
of an expression be in the satisfaction domain associated with that
expression; otherwise we risk declaring sentences logically true that in
fact are false. This requirement is violated when our language is the
language of set theory and our model-theory uses only structures from
within the cumulative hierarchy.
Kreisel claims that in spite of these seeming difficulties, the com
pleteness theorem allows us to establish that V has the same extension
as the intuitive notion of logical validity. His reasoning is as follows. He
first notes that, since the standard deductive rules of first-order logic
are intuitively sound, we know that if a first-order sentence α is derivable,
D(α), then it is logically valid.1 That is,
(1)

∀α(D(α) → Val(α)).

Second, since it is obvious that all set-theoretic structures are struc


tures, truth in all structures (Val) implies truth in all set-theoretic
structures (V). That is,
(2)

∀α(Val(α) → V(α)).

However, the completeness theorem for first-order logic tells us that


any sentence true in all set-theoretic structures is derivable. That is,
(3)

∀α(V(α) → D(α)).

Putting these three together, we see that Val, V, and D are, for first-order
languages, extensionally equivalent:
∀α(Val(α) ↔ V(α) ↔ D(α)).
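Viewed extensionally (my paraphrase, treating D, Val, and V as sets of sentences), the three steps form a simple squeeze:

```latex
D \subseteq \mathit{Val} \subseteq V \subseteq D,
\qquad\text{hence}\qquad
D = \mathit{Val} = V .
```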
For our purposes, there is a serious flaw in Kreisels argument. The
problem has to do with the interpretation of Val. If Val simply means
truth in all structures, then the argument is correct, though its moral is

not exactly what Kreisel implies. But if Val really is the intuitive notion
of logical validity (or logical truth), then step (2) is quite dubious. The
problem is that Kreisel simply identifies, without argument, the intu
itive notion with the model-theoretic notion of truth in all structures.
Needless to say, this is precisely the identification against which I have
been arguing.
Let us reexamine the two steps of Kreisels argument that involve
Val. But to avoid the above conflation, I will reserve Val for the notion
of truth in all structures, and introduce LTr for the intuitive notion of
logical truth or validity. With this disambiguation, step (1) splits into
two possible claims, namely:
(1)

∀α(D(α) → Val(α))

(1')

∀α(D(α) → LTr(α)).

It turns out that both of these are legitimate, though they require
slightly different justifications. (1') holds simply because the deductive
system is intuitively sound, that is, it allows us to derive only logically
true sentences. To recognize the truth of (1), however, we need to
observe that the validity of the rules of our deductive system holds in
all of the interpretations canvassed by Val. In the case of a standard
first-order system of deduction, both of these follow by a routine
examination of the rules on a case-by-case basis. But notice that (1) is
more sensitive than (1') to the details of the deductive system in question.
For example, if our deductive system included the ω-rule, or a
rule allowing us to conclude 'Man(x)' from 'Bachelor(x)', then (1)
would certainly fail, even though (1') might not.
How about step (2)? Here we have the following split:
(2)

∀α(Val(α) → V(α))

(2')

∀α(LTr(α) → V(α)).

Clearly, (2) follows trivially from the fact that every model in the
cumulative hierarchy is a model, the same reason we gave before. But
(2') is quite another matter: it is simply the bald assertion that logical
truths are true in every model in the cumulative hierarchy. But in fact
we do not know that the logical truths of any given first-order language
will be a subset of either V or Val. To suppose that they are is just to
suppose that the model-theoretic account (whether V or Val) does not
undergenerate. If there is an argument for this contention, it must be
something quite specific to the first-order language in question, since
we have seen that it does not hold in general.
Kreisel's argument goes through for the notion Val. What it shows is
that, in the first-order case, truth in all structures is equivalent to truth

in a restricted collection of structures. To drive this point home, let us


generalize the argument a bit. Let M be any class of models and write
Val_M for truth in all structures in M. Thus V is the special case of Val_M
where M consists of all structures whose domain is a set in the cumu
lative hierarchy.
What do we need in order for the argument given above to general
ize? We need to know that the class M is rich enough to serve as a basis
for the proof of the completeness theorem. More fully, let us call a class
rich if it satisfies the condition that any first-order sentence S which is
true in all models in M is derivable. For example, the collection of
countable structures is rich. We can clearly replace V by Val_M
throughout Kreisel's proof, and the argument goes through so long as
M is rich, since then we have Val(α) → Val_M(α) → D(α) → Val(α). Thus,
one moral of Kreisel's argument is that when we apply the model-theoretic
account to first-order languages, it does not matter whether
we use all structures, all set-theoretic structures, or all countable struc
tures. These will all have precisely the same extension.
What Kreisel's argument does not show, however, is that this extension
coincides with the set of logical truths of any given first-order
language, say, the logical truths of the language of elementary arithmetic.
For, in spite of the argument, it is far from clear that Val_M (or Val
itself) does not undergenerate. To see this, we need only recall that
rich classes are in general forced to contain unintended models for our
first-order language. Thus, for example, if our language is the lan
guage of first-order arithmetic, then a rich class will perforce contain
nonstandard models of arithmetic. But what guarantee do we have
that an intuitive logical truth in the language of arithmetic will be true
in these nonstandard models, or that an intuitively valid argument will
preserve truth in them? For example, if some version of the ω-rule is
logically valid, as Tarski argued, then there will indeed be logical
truths which are not true in all models in M, let alone true in all models.
The Problem of Overgeneration
Recall that the goal of this chapter is to determine the significance of
completeness and soundness theorems for the intuitive notions of
logical truth and logical consequence. In particular, what bearing do
they have on the question of whether a given application of the model-theoretic account either overgenerates or undergenerates?
If Kreisel's argument were correct when construed as an argument
about LTr, then it would settle both of these questions; as it is, though,
it does not directly address either. All it shows us is that the three
notions Val, V, and D, are coextensive. This is a significant observation,

to be sure, but not the one we are after. It tells us nothing about how
the intuitive notions of logical truth and logical consequence relate to
their model-theoretic (or proof-theoretic) counterparts.
Still, it does suggest a partial solution. Indeed, as the reader may
already have noticed, Kreisel's argument can be combined with (1') to
settle the overgeneration question, at least in the first-order case.
Recall that (1') is the observation that the deductive system used in the
proof of completeness is intuitively sound, that only genuine logical
truths are derivable in the system.
(1')

∀α(D(α) → LTr(α)).

This observation holds for any first-order language, whether the lan
guage of elementary arithmetic, the language of set theory, or the
simple language of Chapter 5. But Kreisel has shown, using complete
ness, that any first-order sentence that is true in all models is derivable:
∀α(Val(α) → D(α)).
Combining these two, we get the result we need. In the case of first-order
languages the model-theoretic account does not overgenerate:
∀α(Val(α) → LTr(α)).

How does this bear on our observation that the model-theoretic


account overgenerates only when some of the substantive generaliza
tions associated with sentences of the language turn out to be true?
The relationship is roughly this.2 Suppose we have a first-order sen
tence
(4)

S(P, c)

where the displayed P and c are the only constituent expressions not in
the set ℱ of fixed terms. Sentence (4) will be declared logically true by
the model theory only when the following closure is true:
(5)

∀X∀x[ S(X, x) ]

Now the completeness theorem tells us that whenever we have such a


true closure, the original sentence (4) is provable. By the fact that our
deductive system is intuitively sound, this is enough to guarantee that
(4) is a genuine logical truth. This is all we need for the above argu
ment to go through.
But note that there is something more that we can recognize. Since
(4) is provable in our system without any assumptions, we can also
prove (5) in the same system (or in a minimal second-order extension
of it, if P is not degenerate), by generalizing on the parameters P and c.
In other words, the completeness theorem (plus the recognizable

soundness of our deductive system) guarantees that if (5) is true, then


it is itself a logical truth. This is how we can establish that all of the
substantive generalizations that the model-theory associates with sen
tences of the language are indeed false. This is how we can show that
our application is of the fortuitous sort:
[Diagram: the associated generalizations range from the logically false, through the substantive ones (which may be true or false), to the logically true; in the fortuitous applications every substantive generalization turns out to be false.]
Our modification of Kreisel's argument obviously generalizes, yield
ing a useful strategy for showing that a particular model-theoretic
account does not overgenerate. The strategy is simple to state, though
not always possible to implement. Find a set of derivation rules for the
language in question that, first of all, are intuitively valid and, second,
are provably complete with respect to the model-theoretic account.
When this can be done, we are assured that the model theory does not
wrongly declare sentences logically true or arguments logically valid.
The recognizable soundness of the deductive calculus transfers over,
via the completeness theorem, to the semantic account.
Of course, we know the strategy cannot always succeed, because the
model-theoretic account does sometimes overgenerate. For example, I
argued in Chapter 9 that there are second-order sentences which are
not logical truths but which are declared such by the model-theoretic
account. If so, then it follows that there is no sound deductive system
(effective or not!) that is complete with respect to the standard, second-order
model theory. This is partially substantiated by the well-known
result that no such effective system exists, a consequence of Gödel's
incompleteness results.
In Chapter 9, we saw that there is no internal guarantee that an
application of the model-theoretic account will not overgenerate. Even
when it does not, there is no way to recognize this fact from the analysis
itself, from characteristics of the language, or from the expressions
held fixed. What we can now see, though, is that an external guarantee
can sometimes be found, a guarantee derived from the presumed
soundness of our deductive calculus, in tandem with a completeness
theorem showing that the semantic account reaches no further than
the syntactic.
The Problem of Undergeneration
The reason the completeness theorem is so called is that it purports to
establish that a given deductive calculus does not undergenerate, that

it is complete. We have used the theorem, in contrast, to show that an


application of the model-theoretic account of consequence does not overgenerate, in effect to show that our semantic account is, in the first-order case, sound. Is there any way to prove the converse, to show that
our first-order model theory (or an extensionally equivalent deductive
system) does not undergenerate?
The most straightforward answer, unfortunately, is no. If our aim is
to characterize the set of logical truths (or the logical consequence
relation) for an antecedently given first-order language, then there is
no general way, short of fixing all of the expressions in the language, to
guarantee that the model theory captures them all. Indeed, once we
focus on any interesting first-order language, such as the language of
elementary arithmetic, it seems clear that the standard model theory
does undergenerate. It is only our uncritical adoption of the model-theoretic analysis that has obscured this simple point.
Still, it is possible to extract from the model-theoretic account some
substantive observations about the intuitive notions of logical truth
and logical consequence. The trick is to shift attention from the logical
properties of any particular language to the logical properties com
mon to a range of languages.
In Chapter 7, we noted that Tarski's unmodified definition of logical
truth (that is, prior to the use of cross-term restrictions) provides a
necessary condition for what we there called the relativized concept of
logical truth. That is, if a sentence S is logically true, and if this fact
depends only on the meanings of some subset ℱ of its constituent
expressions, then S will remain true however we reinterpret the other
expressions (so long as our reinterpretations do not change the seman
tic categories of those expressions). We took this as showing that
Tarski's original definition would never undergenerate with respect to
the notion of logical truth relativized to the set ℱ of fixed terms.
We can reconstrue this as an observation about the logical truths
common to a collection of languages, those languages canvassed by the
model theory. In the case of Tarski's original account, these are the
languages that arise when our models provide (semantically well-behaved)
interpretations of the expressions not in ℱ. Construed this
way, the observation is that the set of sentences that are logically true in
every such language will be a subset of the set of sentences declared
logically true by the model theory.
It turns out that when we cast our observation in this form, it
becomes completely independent of any details of Tarski's account.
Indeed, suppose we have any collection X = {L_M}_{M ∈ M} of languages that
share the same set of sentences but differ in how these sentences are
interpreted. Note first of all that for any language L_M in this collection,

the logical truths of L_M must clearly be a subset of the truths of L_M.


Modifying our earlier notation in an obvious way, we can express this
as follows:
(6)

LTr(L_M) ⊆ Tr(L_M).

It follows from this simple fact that the logical truths common to the
languages in X must be a subset of the common truths of the languages.
That is:
(7)

⋂_{M∈M} LTr(L_M) ⊆ ⋂_{M∈M} Tr(L_M).

Or, equivalently:
⋂_{M∈M} LTr(L_M) ⊆ Val_M.

Now, the model-theoretic account takes the set appearing on the


right-hand side of (7) to be the set of logical truths of each and every
language in X. While this simple identification is mistaken, what (7)
shows us is that, at least as an account of the common logical truths of the
canvassed languages, the account will not undergenerate. And unlike
our earlier observation, this observation holds even if our semantics
employs cross-term restrictions. Interestingly, it is entirely indepen
dent of how the model theory specifies the collection X of languages: it
does not even matter if expressions retain the same semantic catego
ries as we move from interpretation to interpretation.
This puts us in a position to give a Kreisel-like argument showing
that, in the first-order case, we can characterize exactly the set of
logical truths common to all languages of the form L_M, where M
ranges over any rich collection M of models. Let us write CLTr_M(α) to
indicate that α is one of these common logical truths, that is,
CLTr_M(α)  ⟺  α ∈ ⋂_{M∈M} LTr(L_M).

Our argument will use the following three steps.


(1")

∀α(D(α) → CLTr_M(α))

(2")

∀α(CLTr_M(α) → Val_M(α))

(3")

∀α(Val_M(α) → D(α)).

All of these observations have, in fact, been made earlier in the chap
ter. Step (1") is simply the observation that our deductive system is
sound, independent of which first-order language L_M is under consideration.
Thus, if α is derivable in the system, it must be a common

logical truth of these languages. Step (2"), on the other hand, is a


restatement of (7), which holds for absolutely any collection of lan
guages. Finally, step (3") is a statement of the completeness theorem,
and follows from our assumption that M is a rich collection of models.
Combining these gives us a result analogous to Kreisel's:
(8)

∀α(CLTr_M(α) ↔ Val_M(α) ↔ D(α)).
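As with Kreisel's original argument, the three steps amount to a squeeze among sets of sentences (my paraphrase):

```latex
D \subseteq \mathit{CLTr}_{\mathbf{M}} \subseteq \mathit{Val}_{\mathbf{M}} \subseteq D,
\qquad\text{hence}\qquad
D = \mathit{CLTr}_{\mathbf{M}} = \mathit{Val}_{\mathbf{M}} .
```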

It is not entirely clear how significant this result really is, for all its
elegance. If our concern is to explicate the logical properties of a
specific first-order language, then (8) is of limited interest. Indeed, it
seems likely that the most significant logical truths and logically valid
arguments of a given language will be filtered out by shifting attention
to that portion of its logic common to a rich collection of languages.
From this perspective, we have done little more than redefine the
notions under investigation, and in such a way that the resulting task
has been stripped of many of the intuitions that motivated the pio
neers of modern logic, intuitions clearly at work in Tarskis original
attempt to characterize the consequence relation.
On the other hand, there is a different project in the context of
which (8) is of considerable interest. It would be misleading to think of
model theory as motivated solely by the goal of analyzing logical
properties and relations. A large part of its motivation can be under
stood only in relation to modern algebra. Indeed, a central concern of
the discipline from Tarski and Robinson on has been the systematic
understanding of notions and techniques of abstract algebra.
One of the most striking features of modern algebra is the technique
of simultaneously studying a wide collection of mathematical struc
tures, as when we investigate the properties of abelian groups. A key
insight was that one and the same proof can often be interpreted as
applying to all structures in the specified collection. By isolating the
common truths on which such a proof depends, we can obtain results
of striking generality. As a result, the practice in algebra is to group
structures together by means of a set of core truths called axioms,
and to construct proofs that rely solely on the core truths together with
the logical properties common to any interpretation of these truths.
From this perspective, a key concern is exactly the logical properties
common to a collection of interpreted languages, and so (8) acquires
added significance. It assures us that so long as our collection of
algebraic structures can be characterized by first-order axioms,3 the
consequence relation simultaneously captured by the model theory
and proof theory coincides with the specialized notion of consequence
used by the algebraist when reasoning about a range of structures.
This positive result is in striking contrast to the case where the collec-

tion of structures in question cannot be characterized using first-order


axioms, as in the case of torsion groups, archimedean fields, or finite
division rings. In these cases, the notion of consequence used by the
algebraist clearly outstrips that captured by the notions related in (8).
Recapitulation
In previous chapters, we saw that the model-theoretic account of
logical truth and logical consequence will regularly and predictably go
astray: some applications overgenerate, others undergenerate, and in
some cases it fails both ways at once. In this chapter, I have tried to
reconcile these general observations with the intuition that, at least in
the first-order case, the analysis gets something right, and that the
completeness and soundness theorems play an important role in dem
onstrating that fact.
By modifying an argument of Kreisel's, we saw that for first-order
languages the model-theoretic account does not overgenerate: no
argument declared valid by the model theory will be invalid. The
crucial observation is that the completeness theorem allows us to trans
fer the intuitive soundness of the deductive system over to the model
theory. The theorem assures us that any model-theoretically valid
argument is provable in the deductive system, and so is genuinely valid
if this system is sound. Note that, somewhat ironically, the real guaran
tee of validity is carried by the presumed soundness of the deductive
calculus, and not by the declarations of the model theory itself.
Reassuring as this is, it is also a bit unsatisfying. After all, one thing
we might hope for in a semantic account of consequence is an expla
nation of why valid arguments are valid, an explanation not given to us
by syntactic characterizations of this notion. But we now see that even
in cases where we can demonstrate that the model theory does not
overgenerate, our proof hinges on the presumed soundness of the
syntactic characterization.
Still, a proof is a proof, and we are better off in this case than we are
in the absence of a completeness theorem. In those cases where the
model theory outstrips the deductive calculus, we have no general way
of determining whether it is because the model theory overgenerates
or the deductive calculus undergenerates. Indeed, with second-order
logic, we seem to have both. The model theory declares the continuum
hypothesis (or perhaps its negation) to be a logical consequence of the
pair-set axiom (hardly a plausible assessment). On the other hand, any
effective deductive calculus for the language will, if sound, fall short of
the intuitive consequence relation for the language.4 Here, the genu
ine consequence relation must fall somewhere in between the deduc
tive and model-theoretic accounts.

When we turn from the problem of overgeneration to the problem


of undergeneration, the situation is even less satisfactory. If we main
tain our original interest in the notion of consequence for a fixed
language, the model-theoretic account does undergenerate for all but
the most trivial languages, and so of course there is no way to show that
it does not. The only way to get around this is, in effect, to define away
the problem, by shifting attention to the notion of the logical truths
and logically valid arguments common to the range of languages can
vassed by the model theory. Then, although the model theory can still
overgenerate, it is guaranteed not to undergenerate for very simple
and straightforward reasons.
In cases where we have a completeness theorem, this trick allows us
to view our model theory (and our deductive calculus) as both sound
and complete, relative to this alternative notion of common logical
validity. The thing to remember here is that the role of the complete
ness theorem is to show the soundness of the model theory relative to
this new notion. The completeness of the model theory is simply
built into the definition of the alternative notion. Still, this gives us a
construal of the first-order completeness theorem that sheds some
light on the notion of consequence that is of interest to practitioners of
modern algebra.

12
Conclusion

In the early part of this century, it was not uncommon for philoso
phers and logicians to conflate the notion of logical consequence with
that of derivability in a deductive calculus. For example, Carnap often
promoted the view that languages, both natural and artificial, came
equipped with three sorts of rules. Two of these fell under the general
heading of syntax: the rules of grammatical syntax determined which
strings of symbols were grammatically correct sentences, and the rules
of logical syntax determined which sequences of sentences were logi
cally valid arguments. The third set of rules governed, among other
things, the semantics of the language, but at the time Carnap had little
to say about these additional rules.
According to Carnap's picture, a deductive system for a language, its
logical syntax, was essentially independent of the language's seman
tics. The question of whether one sentence followed logically from
another came down to the question of whether a derivation of the one
from the other could be constructed by means of the conventionally
adopted logical rules, just as the question of whether a given string of
words made up a sentence came down to whether it could be formed
using the conventionally adopted grammatical rules. Of course, nei
ther the logical syntax nor the grammatical syntax could be entirely
divorced from the semantics. Presumably, the semantics would not
declare a string of symbols to be meaningful if the grammatical syntax
declared it ill-formed. Similarly, if the logical syntax declared modus
ponens a valid rule, then the semantics could hardly assign the mean
ing or to the symbol if . . . then. But the view was that the syntactic
rules fixed the logic, and thereby placed constraints on the semantics,
not the other way around.

Nowadays, we see this identification as a confusion. We recognize
that the logical consequence relation is determined not by an independent
deductive regime but by the semantic rules of the language. After
all, if the term "and" expresses the usual truth function, then nothing
more is needed to guarantee the validity of an inference from "A and
B" to "A." Independent rules of logical syntax could at best reiterate
that fact, and at worst contradict it. If the former, the rules would be
idle; if the latter, flatly wrong.
The real harm in identifying consequence with derivability is that it
distracts attention from genuine issues in logic toward artificial ones
that arise from the conflation. For example, according to Carnap's
early picture it makes no sense to ask whether a deductive system, a
language's logical syntax, is sound and complete. Since the deductive
system is what gives rise to the consequence relation for the language,
it automatically gets that relation exactly right. From this perspective, a
more pressing issue might be the question of which deductive system,
among sundry equivalent systems, corresponds to the language's genuine
logical syntax: Does that syntax take the form of an axiomatic
system, or is it instead a system of natural deduction? Is a particular
rule primitive to the system, or is it a derived rule instead?
Clearly, our understanding of deductive techniques has changed
considerably in the intervening decades. This is not to say that such
techniques have been, or ever should be, abandoned. But we now see
them as serving a rather different purpose. A deductive system provides
a way to study a language's consequence relation, to prove results
about it, perhaps even mechanize it. But it does not determine or give
rise to that relation. This is why the question of whether a particular
deductive system for a particular language is sound and complete is
always a sensible, and indeed important, one to ask.
Identifying logical consequence with model-theoretic consequence
is as mistaken as identifying it with derivability. The question of
whether one sentence follows logically from another does not come
down to whether there are interpretations that make the latter true
and the former false; logically valid arguments can fail this test, while
invalid arguments can slip by it. Though the model-theoretic account
may sometimes get the extension exactly right, as may deductive characterizations,
this is not because either of them captures, or comes
close to capturing, the genuine concept.
Tarski's conflation spawns as many confusions, as many distracting
issues, as Carnap's. Take, for example, the so-called problem of the
logical constants. We saw how this alleged problem immediately evaporates
once we recognize exactly why the model-theoretic account is
sometimes right and sometimes wrong. The reason has nothing to do
with any shared characteristic of the expressions held fixed, but rather
with facts about the world. The effort spent trying to find such a
characteristic, trying to maintain the analysis while making sense of its
haphazard behavior, would be more profitably spent on genuine issues
surrounding logical consequence.
Another example, and perhaps a more important one, is the much-debated
question of whether second-order logic is really logic. What
motivates this odd question is the fact that claims like the continuum
hypothesis are declared logically true by the standard model theory,
and yet such claims seem clearly beyond the scope of logic. But once we
recognize this as a case where Tarski's account overgenerates, and,
more generally, once we recognize overgeneration as a natural and
predictable hazard of the model-theoretic technique, the issue takes
on an entirely different light. Every genuine language has its consequence
relation, its sentences that follow logically from others. This is
as obviously true of higher-order languages as it is of languages where
model-theoretic techniques yield more plausible results. And whether
or not we have sure-fire ways to characterize this relation, it seems clear
that the relation is a legitimate concern of logic. To claim otherwise, to
say that the logic of some languages is not logic, is just to abdicate the
discipline's natural charter.
Similar remarks can be made in cases of undergeneration. It is a
mistake to think that the logic of, for example, the language of elementary
number theory is confined to that characterized by the usual
model theory, or that the consequence relation that arises from the
meanings of predicate or function terms is any less significant than the
logic of connectives and quantifiers. Once again, it is only the conflation
of logical consequence with model-theoretic consequence that
inclines us to think otherwise. Once again, there is more logic to be
studied than we might otherwise have thought.
It is always important to ask whether our model theory overshoots
or undershoots the logic of a particular language. And the answer to
this question will frequently be yes. But as with deductive techniques,
this does not mean that model theory should simply be abandoned.
For as we have seen, model-theoretic techniques, when properly understood,
can yield genuine insight into a language's consequence
relation. For example, combined with an intuitively sound deductive
system and a proof of completeness, the model-theoretic account allows
us to precisely specify significant portions of that relation, the
portions common to the range of languages surveyed by the model
theory.
Properly understood, both deductive and model-theoretic techniques
can be put to good use. Both provide tools that can profitably be
used in studying the consequence relation. But it is in no one's interest
to identify the consequence relation with either the model-theoretic or
the proof-theoretic notion. To do so buys only an illusion: the illusion
that the relevant technique is incapable of going astray. In the end, this
illusion can have only one of two results: either we uncritically accept
the technique's faulty declarations or we confine the scope of logic to
domains where the technique happens to work. In either event, we
shortchange the vision that motivated the founding fathers of modern
logic.

Notes

1. Introduction
1. Though the model-theoretic definitions have come to be standard, the
terminology still varies considerably. I have adopted the terminology used
by Chang and Keisler (1973). Models are also sometimes called structures,
valuations, assignments, interpretations, or model structures. Terms for the
relation of truth in a model vary accordingly, with "holds in" and "is satisfied by"
sometimes replacing "is true in."
2. Tarski (1936); all page references in this book are to the English translation
in Tarski (1956). Some writers attribute the model-theoretic definitions
to Tarski's monograph on truth (1933), but this is simply an error.
3. See Hilbert (1929), p. 8. The remark was made in order to motivate the
completeness problem for first-order logic, the problem solved by Gödel
that same year.
4. For a more detailed discussion of the historical relationship between
Tarski's analysis and the model-theoretic definitions, see Etchemendy
(1988).
5. That is, N is the intersection of every set (hence, the smallest) that has the
following two properties: (1) it contains 0; and (2), if it contains a number x,
it also contains s(x) = x + 1. In symbols:
N = ∩{A | 0 ∈ A ∧ ∀x(x ∈ A → s(x) ∈ A)}

2. Representational Semantics
1. Toward the end of his article, Davidson hedges this claim, remarking that
absolute truth goes relative when applied to natural language (1973,
p. 85). The hedge is needed because of indexical sentences: Davidson
allows that these are true only relative to a speaker, time, and place of
utterance. I think the proper move here is not to relativize truth to an
occasion of use, but rather to recognize that the ordinary notion of truth
applies not to sentences but to statements (the actual uses of sentences) or
to propositions (the claims made by such uses). Of course, if the property
of truth applies primarily to statements or propositions, and only derivatively
to sentences, the same must presumably be said of logical truth and
logical consequence. I will set aside such issues in this book, though, since
they are irrelevant to my objections to Tarski's analysis.
2. It may be unfair to say that Davidson implies that relational truth has no
bearing on absolute truth, since at one point he says there is a perfectly
clear sense in which absolute truth is a special case of relative truth (1973,
p. 79). Davidson does not, however, explain what sense he has in mind, and
later, when discussing the concepts illuminated by theories of relational
truth, truth itself is conspicuously absent (cf. p. 79 and p. 83).
3. Of course, even a Davidsonian theory of absolute truth does not alone
give us the absolute values of our sentences. Before we can know
whether the monadic truth predicate is applicable to a given object language
sentence, we must know more than just the appropriate T-sentence,
or the theory from which it falls out. But naturally the goal of a semantic
theory is not to tell us such things.
4. I assume throughout this book that the sample languages, though fragments
of English, are syntactically as well-behaved as artificial languages. In
particular I assume unique readability, though I try to avoid parentheses.
5. Naturally our theory of truth in a row could easily have used a standard
maximal reference column, so long as the vocabulary of atomic sentences
remained finite. We can think of the construction of a reference
column for a given target sentence as providing the class of models for the
smallest truth-functional fragment containing that sentence. Seen in this
way, truth tables embody a general technique for building the minimal
model-theoretic semantics capable of handling a particular sentence. Of
course, they are still incapable of illuminating any semantic properties of
infinite sets of sentences.
6. Kaplan (1975), p. 216. Few mathematical logicians view Tarski's analysis
this way, since they generally come to it via an entirely different tradition,
that of abstract algebra. It would seem quite anomalous to view a structure
satisfying, say, the group axioms as having much of anything to do with
possible worlds.
7. I am setting aside here certain important issues about the notions of
analyticity and necessary truth, in particular issues that arise with sentences,
like "I am here now," that express contingent propositions but
cannot, by virtue of their meaning, be uttered falsely (see Kaplan, 1978).
With languages containing such sentences, the situation is more complicated,
and requires a finer taxonomy than these traditional notions
provide.

3. Tarski on Logical Truth


1. Bolzano (1837); page references are to the translation (1973).
2. Bolzano actually considers logical truth (Allgemeingültigkeit) to be a relation
between propositions (Sätze an sich) and component ideas (Vorstellungen an
sich), not between sentences and expressions. To facilitate comparison
between Tarski and Bolzano, I gloss over this difference. As I note later
(Chapter 3, note 5), this is actually unfair to Bolzano's account, but my
purpose is to illuminate Tarski's, not Bolzano's, analysis.
3. Bolzano (1973), pp. 187ff.
4. Notice, though, that this assumption places a heavy burden on our grammar.
For example, when we replace the expression "snow" with the expression
"grass is pink and snow" we get the false sentence "Grass is pink and
snow is white or grass is pink and snow is not white." So for the assumption
to be correct, the grammar cannot judge "snow" and "grass is pink and snow"
to be of the same grammatical type. Further, such judgments must be
motivated by reasons other than the fact that the substitution of the latter
for the former is capable of rendering a logically true sentence false, since
otherwise we risk a disguised circularity in our definition of logical truth.
In the present case it seems plausible that such a motivation can be found;
for one thing, substitution of the latter for the former often renders a
grammatically proper sentence ungrammatical (e.g., "I hate snow" becomes
"I hate grass is pink and snow"). But occasionally the grammatical motivation
is not nearly so clear, as when Quine, a supporter of a Bolzano-style,
substitutional definition of logical truth, classifies expressions like "or" and
"and" as syncategorematic, in effect placing each of them in a category of
one (see Quine, 1970, especially pp. 27ff and 49ff). I will not discuss this
problem at length, but will assume that if the exchange of two expressions
always, or by and large, preserves grammaticality, then the expressions are
of similar grammatical types. In fact, though, proponents of the substitutional
definition need a much stronger assumption than this when dealing
with natural languages; see Chapter 3, note 13.
5. Here, my characterization of Bolzano's theory in terms of sentences and
expressions, rather than the original propositions and ideas, does the account
some injustice. Bolzano's original definition makes logical truth dependent
not on the expressive resources of any particular language, but rather on
what might be called the conceptual resources of the realm of ideas. I will,
however, continue attributing to Bolzano the simplified, linguistic version
of the theory.
6. "Sentential function" is the term Tarski uses, by analogy with Russell's
notion of a propositional function. The expression now sounds a bit odd,
"open formula" or "open sentence" having become standard.
7. If a language contains variable binding operators, such as quantifiers, and
explicitly displays bound variables, then sentences must be taken to be
sentential functions with no unbound or free variables.
8. Using schematic machinery analogous to that introduced for (1), Tarski's
T-schema would come out as:
(T) '. . .' is true (in L) if and only if . . .
Tarski generally used 'X' where I have ''. . .'' and 'p' where I have '. . .',
requiring that 'X' be replaced with a name of the sentence replacing 'p'.
Thus, Tarski's actual statement of the schema ran as follows:
(T') X is true (in L) iff p.
Tarski's (T') is, by itself, less perspicuous than (T), but (T) has the drawback
that its initial symbol (i.e., '. . .') might be mistaken for a quotation
name of a succession of three dots (though only by a philosopher). In fact, I
have used '. . .' as an independent schematic device (like X) whose
intended relation to the other schematic placeholder (the three dots on the
right) is set forth in the instantiation conditions. My policy is to make the
actual expression of schemas as perspicuous as possible, though this may
involve employing certain symbols (e.g., the single quotation marks in (T)
and (1)) as mere orthographic components of a larger schematic device
('. . .' and '. . . n . . .', respectively).
9. See Chapter 3, note 8, for a statement of the T-schema. It is what provides
the connection between the right-hand side of (1) and the right-hand side
of (2), so long as the replacement for '. . . n . . .' is a sentence of L.
10. This is not a standard account of sequences. (Finite sequences are generally
taken to be ordered n-tuples, and infinite sequences to be functions
whose domain is the set of natural numbers.) Tarski adopts the standard
notion of a sequence, assuming an appropriate ordering of the variables.
Allowing sequences to be functions directly from variables simplifies things
considerably, in particular when we turn to sentential functions with variables
of different types.
11. It should be emphasized that the expressions f('x') and f('y') are complex
names (akin to "Tom's father" and "Sam's father"), and that they do
not contain variables (as do "x's father" and "y's father"). The expression
'x' occurring in f('x') names an object (the variable x), just as the
expression 'Tom' occurring in "Tom's father" names an object (the person
Tom).
12. Linguistic entities can, of course, stand in the relation of satisfaction to
certain sentential functions; for example, "was president" does indeed
satisfy "x is a predicate." But the relation that would emerge from (3.2)
would preclude the right sorts of linguistic entities even here. Thus, on
analogy with (3.2) we would have to say that "was president" satisfies "x is
a predicate"; "was president" is, however, the name of a predicate, not
itself a predicate.
13. This has been considered an important advantage of the substitutional
account, but in fact the advantage is illusory. In natural languages, examples
abound in which expressions that seem to be of the same grammatical
category differ semantically. Thus, perhaps the clearest grammatical category
in English is that of noun phrases, but this includes such expressions
as "George Washington," "July 4, 1776," "every president," and "no president."
The radical diversity here shows that even the genuine substitutionalists
(Bolzano and Quine) must assume some implicit semantic criterion
of substitutability. If the noun phrase "every president" were taken to
be substitutable for the noun phrase "George Washington," it would wreak
havoc on the substitutional test: "Every president had a beard or every
president did not have a beard" would then be a false substitution instance
of "George Washington had a beard or George Washington did not have a
beard."
14. This remark is intended as a simple observation, not a rejection of a
particular philosophical tradition. We can certainly devise languages in
which all expressions name objects of various types and in which the
concatenation of any two of these expresses, say, function application (or
perhaps, taking a cue from Wittgenstein, depicts some concrete relation).
But it is clear that ordinary English, and hence fragments of ordinary
English like the object language we are considering, do not in fact operate
in this way. For one thing, if they did so function we would have no need
for the various complex techniques of nominalizing verb phrases in order
to place them in subject position.
15. It also, we should note, commits us to the view that properties are a type of
object, and hence capable of being named by expressions of the metalanguage.
16. There are also alternatives available with names. We could, for example,
take names to denote collections of properties, following Richard
Montague. I will not explore these possibilities.
17. For example, if a language contains only the basic expressions "Snow is
white," "Grass is green," and "and," then it will in fact have no logical truths
whatsoever. There will, however, be logically valid arguments, for example
the inference from "Snow is white and grass is green" to "Grass is green."
18. Thus, if we added "if . . . then" to the language of note 17, we would not
encounter the first problem. However, the valid argument mentioned
there does not depend in any way on the expression "if . . . then," and so we
might expect it to come out valid even if this expression were excluded
from
But the conditional would not then be logically true.
19. There are actually standard counterexamples to this. Thus, some would
claim that all sequences satisfy "It is not the case that Ralph believes x is a
spy," though "Ralph believes the shortest spy is a spy" is true. I will not
consider these so-called opaque contexts, except to note that the notion
of satisfaction implicit in such judgments is immediately ruled inadequate
by schema (1). Whether this should be held against the judgments or the
schema I leave to the reader to decide; my inclination would be to hold it
against the latter.

4. Interpretational Semantics
1. I should mention that Tarski uses the term "model" in his article, though
not in the same way I have used it here. Stated in my terminology, Tarski's
use is the following: a d-sequence is a model of a set K of sentences just in
case it d-satisfies every member of K. This corresponds, as Tarski points
out, to a standard use of "model" in mathematics; if a d-sequence provides
an interpretation of a set of axioms on which they all come out true, then it is
commonly said to be a model of those axioms.



5. Interpreting Quantifiers

1. Henceforth, I will take predicates to be interpreted by sets rather than
properties, as is usually done.
2. We commonly define truth in a model by employing the auxiliary notion of
the satisfaction of a formula by a sequence in a model. Here a slight confusion
might arise, for on the original Tarskian conception, models are themselves
sequences (d-sequences) and truth in a model is itself satisfaction
(d-satisfaction). John Kemeny was the first to employ such auxiliary sequences
and the notion of satisfaction-in-a-model (note: not by a model).
See Kemeny (1956). His technique has since become standard.
3. In a language that explicitly displays bound variables, the satisfaction
clause would run as follows (with E an existential quantifier variable and
'x' a name variable): Sequence f satisfies ExM iff for some E/x-variant f' of f,
f' satisfies M. We take f' to be an E/x-variant of f just in case f'('x') is a
member of f('E') and f'(v) = f(v) for all v ≠ 'x'.
4. The set-theoretic paradoxes force a certain idealization here. On the intended
interpretation, "something" quantifies over a class too large to be a
genuine set: the class containing everything. Thus, in any traditional set
theory the satisfaction domain we have described must actually omit the
intended interpretation of our expression. For purposes of giving a semantics
for natural languages, these facts may be viewed as quirks of the
mathematical theory of sets, however important they are to that theory.
The designers of natural language were, after all, unaware of the set-theoretic
paradoxes.
5. When our semantics is applied to standard symbolic languages, the two
alternatives emerge in the following way. On the one hand we can take the
specification of a universe set as providing the appropriate interpretation
for the variable term ∃ as it appears in sentences of the form ∃xM. Alternatively,
we can think of ∃ as fixed and take all sentences of this sort as
abbreviations for their relativization to an implicit predicate U, where the
relativization (∃xM)* of ∃xM is ∃x(Ux ∧ M*). According to this view, we are
providing an interpretation of the variable expression U, while the interpretation
of ∃ is held fixed.
6. Of course, we could avoid that by treating identity as variable, but this in
turn gives counterintuitive results, declaring invalid such arguments as:
Fa, a = b, so Fb.

6. Modality and Consequence


1. The continuum hypothesis is the claim that any infinite set whose cardinality
is less than that of the reals has the same cardinality as the natural
numbers. The pair-set axiom says that for any objects x and y, there is a set
whose only members are x and y. Well-known results of Gödel and Cohen
show that there are models of the standard axioms of set theory in which
the first-order statement of the continuum hypothesis is true and others in
which it is false. This shows that the continuum hypothesis is not a model-theoretic
consequence of those axioms. The question of whether it also
shows that it is not a logical consequence of the axioms depends, of course,
on the relation between model-theoretic consequence and logical consequence.
But quite apart from these results, even the most Platonistic
set-theoretician would not claim that the continuum hypothesis (or its
negation) is a logical consequence of the extremely weak pair-set axiom.
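For reference, the two claims can be put symbolically as follows (the symbolization is a gloss added for clarity, not a formulation drawn from the text):
Pair-set axiom: ∀x ∀y ∃z ∀w (w ∈ z ↔ (w = x ∨ w = y))
Continuum hypothesis: for every infinite set X, if Card(X) < Card(ℝ), then Card(X) = Card(ℕ).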
2. Tarski (1956), p. 411, my translation. For the German text of this passage,
see Chapter 6, note 7.
3. Ibid., pp. 412–413, my translation.
4. Ibid., p. 413, my translation.
5. Ibid., p. 417, my translation and emphasis. For the German text of this
passage, see Chapter 6, note 8.
6. Indeed, this same argument is given by Bolzano in support of his simpler
substitutional account. See Bolzano (1973), pp. 205–206. Quine, in the
paragraph immediately following the passage quoted on page 88, also
seems to succumb to the fallacy, or at any rate to entice his readers to
commit it. After emphasizing the important epistemic characteristics of
logical truth and logical consequence, he presents his substitutional definition
of these notions, vaguely suggesting that the epistemic features cited
somehow follow from the definition. For example, he notes that if a
sentence is logically true, we may replace any of its constituent expressions
(except those in ℱ) without fear of falsity. In general, though, there is no
way to tell that a sentence satisfies Quine's definition without antecedently
knowing that its substitution instances are true. Thus, if we do not already
know the truth values of the relevant instances, then all we can be sure of is
that the sentence will not both have false substitution instances and satisfy
Quine's definition of logical truth. Clearly, this assurance can hardly calm
any fears of falsity we may have had to begin with.
7. Tarski (1956), p. 411, my translation and emphasis. The German text
runs: "Diese Tatsache spricht, wie mir scheint, für sich selbst: sie zeigt, daß
der formalisierte Folgerungsbegriff . . . sich mit dem üblichen keineswegs
deckt. Inhaltlich scheint es doch sicher zu sein, daß der allgemeine Satz A
aus der Gesamtheit aller speziellen Sätze A₀, A₁, . . . Aₙ, . . . im üblichen
Sinne folgt: falls nur alle diese Sätze wahr sind, so muß auch der Satz A
wahr sein" (1936, pp. 2–3).
8. Tarski (1956), p. 417, my translation and emphasis. "Wie mir scheint, muß
jemand, der den Inhalt der eben angeführten Definition begreift, gestehen,
daß sie dem üblichen Sprachgebrauch recht gut angepasst ist; das
erleuchtet noch in stärkerem Grade aus verschiedenen ihren Konsequenzen.
So kann man insbesondere auf Grund dieser Definition beweisen, daß
jede Folgerung aus lauter wahren Aussagen wahr sein muß" (1936, p. 9).
9. Tarski (1956), p. 414, my translation. "Kann es niemals vorkommen, daß
die Klasse K aus lauter wahren Sätzen besteht, zugleich aber die Aussage S
falsch ist" (1936, p. 6).
10. Tarski (1956), p. 414. "Kann diese Beziehung durch empirisches Wissen
. . . in keiner Weise beeinflußt werden" (1936, p. 6).
11. Tarski (1956), p. 415. "So muß die Aussage S' wahr sein, falls nur alle
Aussagen der Klasse K' wahr sind" (1936, p. 7).



7. The Reduction Principle

1. In languages containing substitutional quantifiers, truth is defined by a
straightforward induction on sentences: we assume each type of variable is
associated with an appropriate substitution class, and then the sentence
Uv[S] is declared true iff S(v/e) is true for each expression e in the appropriate
substitution class. The definition presupposes that truth is well-defined
for sentences not containing the substitutional quantifier. For a
detailed discussion of substitutional quantification, see Kripke (1976).
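To illustrate with a hypothetical case (the substitution class here is invented for the example, not taken from the text): if the substitution class associated with v contains just the names 'a' and 'b', then Uv[Fv] is declared true just in case Fa and Fb are both true. Notice that the clause never asks what, if anything, those names denote; that is what distinguishes substitutional from objectual quantification.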
2. For a discussion of objectual quantifiers that bind nonstandard variables
(e.g., predicate variables) see Boolos (1975). Boolos gives a treatment of
satisfaction similar to the one I develop in Chapter 3.
3. For simplicity, I will use the term "universal generalization" to apply to any
sentence that begins with a string of universal quantifiers, even though that
string may have length zero. For purely heuristic purposes, I indicate the
universal closure of a sentence (that is, a sentential function with no free
variables) by enclosing it in brackets. The reader is free to imagine a
vacuous universal quantifier standing in front of these brackets.

8. Substantive Generalizations
1. The matrix of the closure ∀v₁ . . . ∀vₙ[S'] is the sentential function S'.
2. Of course, until we clarify what sorts of individuals we count as part of the
universe, it is hard to say what kind of fact the size of the universe is. The
size of the physical universe (say, the number of elementary particles) is
presumably a contingent, physical fact. The size of the set-theoretic universe
is presumably a noncontingent, set-theoretic fact. But neither of
these are issues to be settled by logic alone; both are substantive, extralogical
facts. The point I will make does not depend on which way we go
here.
3. For a definition of satisfaction for sentential functions containing quantifier
variables, see Chapter 5, note 3.
4. Note that we can here ignore the cross-term restrictions used in the standard
semantics, since our sentences contain only the identity predicate,
whose interpretation we are holding fixed.
5. For simplicity, I am assuming that the range of the variable E consists of
arbitrary subcollections of the universe. If the range consists of sets, then
the relevant question is about the size of these, not the size of the universe
as a whole. Similar points can be made, though, whichever way we go.
6. If we expand the language to include other cardinality quantifiers (for
example, "there exist uncountably many" or "there exist inaccessibly
many"), all of the same points can be made. But then the outcome will
depend on whether there are sets with uncountable (or inaccessible) cardinalities,
rather than just some infinite cardinality. Many mathematicians
who accept the existence of infinite sets still question these stronger assumptions.
7. In fact, the truth value of (6) is not as clear as it might seem. Indeed, if the
satisfaction domain for the relation variable consists of all sets of ordered
pairs, and the satisfaction domain for the individual variables consists of all
objects (including sets), then (6) is actually true according to standard set
theories. (This is due to the set/class distinction imposed on us by the
set-theoretic paradoxes.) In which case, the present account would still
mistakenly declare σ₂ logically true (and the rest of the σₙ as well). This is a
bit ironic, since the usual set-theoretic assumptions are what we earlier
relied on to get a proper assessment of ¬σ∞; here, they would result in an
improper assessment of σₙ. To get the right assessment while keeping the
set-theoretic construal of (6), we would again have to vary the interpretation
of ∃.
8. If something is at least as tall as everything else, then we say it is a minimal
element of the taller than relation. A relation can have more than one
minimal element; for example, if everything were precisely the same
height, each individual would be a minimal element of both the taller than
and the shorter than relation.
9. Once again, I should emphasize that my appeal to the finitist's position is
simply meant to dramatize the problem with Tarski's account. The problem
does not depend on any endorsement of the position, or even on the
assumption that the axiom of infinity, and the existence of noncommutative
division rings, are contingent truths. Even if our views about mathematical
objects lead us to conclude that these are necessary truths, which I
happen to believe, they are surely not logical truths. (If they were, then so
too would be σ₂, σ₃, . . . .) Our judgment of the logical status of such
sentences as (7) is surely not dependent on our belief in the axiom of
infinity, a fact brought out nicely by the finitist's position.
10. The pair-set axiom says that for any x and y, there is a set whose only
members are x and y. Since 'x' and 'y' can be instantiated to a single object a,
this axiom guarantees the existence of the singleton set {a}.
11. There are many ways of arriving at such sentences. For example, let N(X)
and R(X) be second-order formulas that are satisfied by a set iff it is
isomorphic to the natural numbers or the real numbers, respectively. Since
the relation Card(X) < Card(Y) is also definable in the second-order language,
the closure
∀X∀Y∀Z[N(X) ∧ R(Y) ∧ Card(X) < Card(Z) → Card(Y) ≤ Card(Z)]
is equivalent to the continuum hypothesis, and any instance of it will be
declared logically true if and only if the continuum hypothesis is true.
(Once again, as with (8) and (8'), it does not matter here if we impose the
standard cross-term restrictions.)

9. The Myth of the Logical Constant


1. For example, using N, R, and Card as in Chapter 8, note 11, one of the
following generalizations will be true, depending on which way the continuum
hypothesis goes:
∀D ∀E=℘(D) [¬∃X,Y,Z(N(X) ∧ R(Z) ∧ Card(X) < Card(Y) < Card(Z))]
∀D ∀E=℘(D) [∃Z(R(Z)) → ∃X,Y,Z(N(X) ∧ R(Z) ∧ Card(X) < Card(Y) < Card(Z))]
Here, D ranges over interpretations (domains) for the first-order quantifiers
and E ranges over interpretations (domains) for the second-order
quantifiers. The usual cross-term restriction is imposed by requiring that
the latter be the powerset of the former. When we move to so-called
generalized structures this restriction is loosened, and both of the resulting
generalizations, though still substantive claims, come out false.

11. Completeness and Soundness


1. Note that the appeal here to intuitive soundness is not a reference to the
soundness theorem, ∀a(D(a) → V(a)), which would not give Kreisel what he
needs. Rather, it is an observation about the intuitive correctness of the
first-order deductive calculus in question.
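The role this observation plays can be displayed schematically. Writing Val(a) for the intuitive validity of a (a label added here for exposition, not notation used in the text), Kreisel's squeezing argument combines three claims: ∀a(D(a) → Val(a)), the intuitive soundness of the calculus; ∀a(Val(a) → V(a)), since any model-theoretic counterexample would itself be a genuine counterexample; and ∀a(V(a) → D(a)), the completeness theorem. Together these force the three notions into extensional agreement.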
2. I say "roughly" because the following argument ignores complications
arising from cross-term restrictions. The argument can be extended to
cover that case as well, but only at the cost of obscuring the main point.
3. More precisely, what we need is a set K of axioms such that (1) each of our
algebraic structures satisfies K, and (2) every structure M from our rich
collection satisfying K is one of the algebraic structures in question.
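For a concrete illustration (one chosen here, not drawn from the surrounding text): if the algebraic structures in question are the groups, K can be the usual group axioms; condition (1) holds because every group satisfies them, and condition (2) holds because any structure of the appropriate signature that satisfies them is a group. In short, the requirement is that the algebraic structures be exactly the structures in the rich collection that satisfy K.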
4. The argument here is basically the same as that given by Tarski for the
first-order language of arithmetic, using Gödel's incompleteness results.

Bibliography

Aristotle. 1941. Prior Analytics. Trans. Richard McKeon, in The Basic Works of
Aristotle. New York: Random House.
Bell, J. L., and M. Machover. 1977. A Course in Mathematical Logic. Amsterdam:
North-Holland.
Bernays, Paul. 1922. Review of Behmann, 1922. Jahrbuch über die Fortschritte
der Mathematik 48: 1119–1120.
Bolzano, Bernard. 1837. Wissenschaftslehre. Sulzbach.
———1973. Theory of Science. Trans. Burnham Terrell. Dordrecht: D. Reidel.
Boolos, George. 1975. "On Second Order Logic." Journal of Philosophy 72:
509–527.
Chang, C. C., and H. Jerome Keisler. 1973. Model Theory. Amsterdam: North-Holland.
Copi, Irving. 1972. Introduction to Logic. New York: Macmillan.
Davidson, Donald. 1973. "In Defense of Convention T." In Hugues Leblanc,
ed., Truth, Syntax and Modality. Amsterdam: North-Holland.
Etchemendy, John. 1983. "The Doctrine of Logic as Form." Linguistics and
Philosophy 6: 319–334.
———1988. "Tarski on Truth and Logical Consequence." Journal of Symbolic
Logic 53: 51–79.
Gödel, Kurt. 1929. Über die Vollständigkeit des Logikkalküls. Diss., University of
Vienna. Reprinted, with translation, in Gödel, Collected Works. Volume I.
Ed. Solomon Feferman, et al. Oxford: Oxford University Press, 1986.
Hilbert, David. 1929. "Probleme der Grundlegung der Mathematik." Mathematische
Annalen 102: 1–9.
———and Wilhelm Ackermann. 1928. Grundzüge der theoretischen Logik. Berlin:
Springer. Second edition (1938) translated as Principles of Mathematical
Logic. New York: Chelsea, 1950.
Kaplan, David. 1975. "What is Russell's Theory of Descriptions?" Reprinted in
D. Davidson and G. Harman, eds., The Logic of Grammar. Encino, California:
Dickenson.
———1978. "On the Logic of Demonstratives." Journal of Philosophical Logic 8:
81–98.
Kemeny, John G. 1956. "A New Approach to Semantics." Journal of Symbolic
Logic 21: 1–27, 149–161.
Kreisel, Georg. 1969. "Informal Rigour and Completeness Proofs." Reprinted
in J. Hintikka, ed., The Philosophy of Mathematics. London: Oxford University
Press.
Kripke, Saul. 1976. "Is There a Problem about Substitutional Quantification?"
In G. Evans and J. McDowell, eds., Truth and Meaning. Oxford: Oxford
University Press.
Mates, Benson. 1965. Elementary Logic. New York: Oxford University Press.
Padoa, Alessandro. 1901. "Essai d'une théorie algébrique des nombres entiers,
précédé d'une introduction logique à une théorie déductive quelconque."
Bibliothèque du Congrès International de Philosophie, Paris, 1900. Paris:
Armand Colin. Translated as "Logical Introduction to Any Deductive
Theory." In Jean van Heijenoort, ed., From Frege to Gödel. Cambridge,
Mass.: Harvard University Press.
Quine, W. V. O. 1961. "Two Dogmas of Empiricism." In Quine, From a Logical
Point of View. Cambridge, Mass.: Harvard University Press.
———1970. Philosophy of Logic. Englewood Cliffs, N.J.: Prentice-Hall.
———1972. Methods of Logic. New York: Holt, Rinehart and Winston.
Tarski, Alfred. 1933. Pojęcie prawdy w językach nauk dedukcyjnych. Prace Towarzystwa
Naukowego Warszawskiego, Wydział III matematyczno-fizycznych,
no. 34. Warsaw. Translated into German, with postscript, as
"Der Wahrheitsbegriff in den formalisierten Sprachen." Studia Philosophica
1 (1935): 261–405. German version translated into English as "The
Concept of Truth in Formalized Languages." In Tarski, 1956.
———1936. "Über den Begriff der logischen Folgerung." Actes du Congrès
International de Philosophie Scientifique 7: 1–11. Translated into English as
"On the Concept of Logical Consequence." In Tarski, 1956.
———1953. Undecidable Theories. Written with A. Mostowski and R. M. Robinson.
Amsterdam: North-Holland.
———1956. Logic, Semantics, Metamathematics. Oxford: Clarendon Press.

Index

Ackermann, Wilhelm, 7, 171


Analyticity, 25, 78, 101-106, 108, 126
A prioricity, 82, 88-89, 106, 108
Argument form, 49
Aristotle, 81-82, 171
Bell, J. L., 82, 171
Bernays, Paul, 7, 171
Bolzano, Bernard, 7, 27-30, 49, 167n6,
171. See also Substitutional account of
logical truth/consequence
Boolos, George, 168n2, 171
Carnap, Rudolf, 139-140, 156-157
Carnap's observation, 139
Chang, C. C., 161n1, 171
Church, Alonzo, 5
Church's thesis, 5-6
Closure principle, 98; and Tarski's account,
126-127, 130, 131, 139-141, 143
Cohen, Paul, 166n1
Completeness theorem, 3-4, 6, 85, 144-155, 158
Computability, 5
Continuum hypothesis, 82, 123-124, 132, 158
Copi, Irving, 82, 171
Cross-term restrictions, 68; inconsistency
of, with Tarski's account, 69-79; and
failure of Tarski's account, 110, 119,
134-135, 152

Davidson, Donald, 12-13, 171


Davidson's puzzle, 12-13, 17-20
Deductive accounts of consequence, 2-3,
5-6, 8-9, 83-85, 156-157
D-satisfaction, 54
D-sequence, 53
Extensional adequacy, 3-4, 6, 8-9, 11,
80-81, 83, 85, 108, 130-135, 144-155
Fixed terms, 28, 30, 32. See also Logical
constants
Gödel, Kurt, 5, 7, 161n3, 166-167n1,
171. See also Completeness theorem;
Incompleteness theorems
Hilbert, David, 6, 7, 171
Identity predicate, treating as nonlogical
constant, 117-118
Incompleteness theorems, 84, 100, 150,
170n4
Infinity, axiom of, 114-117
Instantiation principle, 98
Intended interpretation, 56, 146, 166n4
Interpretational semantics, 51-56, 65-68;
vs. representational semantics, 51, 57-64, 66, 77-79. See also Cross-term
restrictions


Kaplan, David, 23, 162n7, 171


Keisler, H. J., 161n1, 171
Kemeny, John, 166n2, 172
Kreisel, Georg, 11, 145-155, 172
Kripke, Saul, 168n1, 172
Kripke-Platek set theory, 115

Quantification: and interpretational semantics,
65-79; and logical truth, 95-124;
substitutional, 96, 168n1; objectual, 96, 168n2
Quine, W. V. O., 88, 163n4, 164-165n13,
167n6, 172

Logical constants, 28, 80, 100, 109-110,


125-130, 157-158
Logical truth vs. logical consequence, 11,
47

Recursion theory, 5
Reduction principle, 98-99; first modification
of, 101-106; second modification
of, 110-124
Representational semantics, 10, 20-26;
and logical properties, 25; vs. interpretational
semantics, 51, 57-64, 66, 77-79
Robinson, Abraham, 153
Robinson, R. M., 172
Russell, Bertrand, 163n6

Machover, M., 82, 171


Mates, Benson, 82, 171
Meaning postulates, 71-72, 77. See also
Cross-term restrictions
Model-theoretic account of logical truth/
consequence, 1, 55
Model-theoretic semantics, 51. See also
Interpretational semantics; Representational
semantics
Montague, Richard, 165n16
Natural numbers, inductive definition of,
9
Necessity, 24-26, 78, 81-94, 106, 108
Nix, 40-41
ω-incompleteness, 83-84, 100
ω-rule, 84, 100, 133
Opaque contexts, 165n19
Overgeneration, 8, 130-135, 144-145,
148-150, 154-155, 158
Padoa, Alessandro, 7, 172
Pair-set axiom, 82, 122
Persistence, 30-32, 36-38, 48-50; and
cross-term restrictions, 69-70
Possible models, 119-120
Possible worlds, 12, 23, 25, 78
Principle (i). See Instantiation principle
Principle (ii). See Closure principle
Principle (iii). See Reduction principle
Principle (iii'). See Reduction principle,
first modification of
Principle (iii''). See Reduction principle,
second modification of
Propositions vs. sentences, 161-162n1,
162n7, 162-163n2, 163n5

Satisfaction, 8, 27, 33-47, 50, 165n19


Satisfaction-preserving argument form, 49
Second-order logic, 6, 123-124, 132, 158;
generalized structures for, 169-170n1
Sentential functions, 32; distinguished, 32
Soundness theorem, 3-4, 6, 11, 144-155
Substantive generalizations, 107-124,
129-135, 142-143, 150
Substitutional account of logical truth/
consequence, 28-30, 89, 96, 163n4,
164-165n13; failure of persistence, 30-32,
39-41, 48-49, 69-70
Tarski's account of logical truth/consequence, 45-49
Tarski's fallacy, 85-94
Truth-preserving argument, 48
Truth, 8; absolute vs. relative, 12-22;
tables, 14-17, 22
T-schema, 33-34, 163-164n8
T-sentences, 12-13, 162n3
Turing, Alan, 5
Undergeneration, 8, 132-135, 144-145,
150-155, 158
Universal closure, 96
Variable terms, 28; vs. variables, 32
Wedderburn's theorem, 121, 130
Zermelo-Fraenkel set theory, 114
