Abstracts submitted for the conference "New Thinking about Scientific Realism"
Contents
A [1]: General Scientific Realism .......... 2
1) Morteza Sedaghat - A Practicalist Defense of Scientific Realism .......... 2
2) J. Wolff - A new target for scientific realism debates .......... 3
3) Samuel Schindler - Kuhnian theory choice, convergence, and base rates .......... 3
4) Raphael Scholl - Realism from a causal point of view: Snow, Koch and von Pettenkofer on Cholera .......... 4
5) Mark Newman - Scientific Realism, the Pessimistic Meta-Induction, and Our Sense of Understanding .......... 4
6) Axel Gelfert - Experimental Realism and Desiderata of Manipulative Success .......... 6
7) Jack Ritchie - I could be wrong but it just depends what you mean: explaining the inconclusiveness of the realism-anti-realism debate .......... 7
8) Dean Peters - Observability, perception and the extended mind .......... 7
9) Adam Toon - Empiricism for cyborgs .......... 9
10) Curtis Forbes - An Existential Approach to Scientific Realism .......... 9
11) Andrew Nicholson - Are there any new directions for scientific realism? .......... 10
B [2]: Truth, Progress, Success and Scientific Realism .......... 11
12) Michael Shaffer - Farewell to the Realism/Anti-realism Debate: Practical Realism and Scientific Progress .......... 11
13) Juan Manuel Vila Pérez - A Critique of Scientific Pluralism: The Case For QM .......... 12
14) Danielle Macbeth - Revolution and Realism? .......... 13
15) Nora Berenstain - Scientific Realism and the Commitment to Modality, Mathematics, and Metaphysical Dependence .......... 14
16) John Collier - Information Can Preserve Structure across Scientific Revolutions .......... 15
17) Juha Saatsi - Pessimistic induction and realist recipes: a reassessment .......... 16
18) Mario Alai - Deployment vs. discriminatory realism .......... 16
19) Gauvain Leconte - Predictive success, partial truth and skeptical realism .......... 18
20) Sreekumar Jayadevan - Does History of Science Underdetermine the Scientific Realism Debate? A Metaphilosophical Perspective .......... 19
21) Hennie Lötter - Thinking anew about truth in scientific realism .......... 20
C [3]: Selective Realisms .......... 22
22) Xavi Lanao - Towards a Structuralist Ontology: an Account of Individual Objects .......... 22
23) David William Harker - Whiggish history or the benefit of hindsight? .......... 23
24) Christian Carman & José Díez - Launching Ptolemy to the Scientific Realism Debate: Did Ptolemy Make Novel and Successful Predictions? .......... 24
25) Timothy Lyons - Epistemic Selectivity, Historical Testability, and the Non-Epistemic Tenets of Scientific Realism .......... 25
26) Peter Vickers - A Disjunction Problem for Selective Scientific Realism .......... 26
27) Raphael Kunstler - Semirealist's dilemma .......... 27
28) Elena Castellani - Structural Continuity and Realism .......... 28
29) Tom Pashby - Entities, Experiments and Events: Structural Realism Reconsidered .......... 29
30) Angelo Cei - The Epistemic Structural Realist Program. Some interference. .......... 30
31) Kevin Coffey - Is Underdetermination a Problem for Structural Realism? .......... 31
32) Michael Vlerick - A biological case against entity realism .......... 32
33) Rune Nyrup - Perspectival realism: where's the perspective in that? .......... 33
D [4]: The Semantic View and Scientific Realism .......... 34
34) Alex Wilson - Voluntarism and Psillos' Causal-Descriptive Theory of Reference .......... 34
35) Alistair Isaac - The Locus of the Realism Question for the Semantic View .......... 35
36) Francesca Pero - The Role of Epistemic Stances within the Semantic View .......... 36
E [5]: Scientific Realism and the Social Sciences .......... 37
37) David Spurret - Physicalism as an empirical hypothesis .......... 37
F [6]: Anti-Realism .......... 38
38) Moti Mizrahi - The Problem of Unconceived Objections and Scientific Antirealism .......... 38
39) Emma Ruttkamp-Bloem - The Possibility of an Epistemic Realism .......... 40
40) Yafeng Shan - What entities exist .......... 40
41) Daniel Kodaj - From conventionalism to phenomenalism .......... 42
42) Fabio Sterpetti - Optimality models and scientific realism .......... 43
Page 2 of 46
the false positive and false negative rates (i.e., the rate of a theory being successful if false, and the rate of a theory not being successful if true, respectively) are very low. And setting the values for the base rates appears elusive. If
probabilities are construed objectively, then it looks as though we have no way of finding out about them. If, on the
other hand, probabilities are construed subjectively, then both the realist and antirealist can set the priors as they
please. A rational debate about realism can then no longer be had (Magnus and Callender 2004).
This paper will argue that the Kuhnian picture of theory choice suggests a strengthened defense of scientific
realism. On the Kuhnian picture of theory choice, it is normally the case that a theory possesses some virtues but
not others. The following amplified No-Miracles-type argument (NMtA) then suggests itself: it would be an
unlikely coincidence if a theory were to possess all five standard virtues and yet not be true.
When formalizing such a NMtA, error rates now need to be fixed for each of the theoretical virtues, giving the
NMtA more leverage than the traditional NMA. Furthermore, it will be shown that there are principled and non-
arbitrary grounds for setting the error rates at particular levels, whilst the principle of charity towards the
antirealist is observed. Setting the error rates in this way will then (non-arbitrarily) determine the base rates. The
base rate neglect charge is defeated. Interestingly, it turns out that the Kuhnian picture of theory choice allows the realist to concede that the base rate of true theories is rather low and still have things go her way.
Given the principled reasons for setting the error rates, the antirealist can no longer simply insist that the base rate must be lower; she must challenge the fixing of the error rates by argument. Magnus and Callender's skepticism about the resolvability of the realism debate is thus rebutted.
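The base-rate worry, and the extra leverage the amplified argument is meant to provide, can be illustrated with a quick Bayesian calculation. The numerical rates below are illustrative assumptions of mine, not values defended in the abstract:

```python
# Illustrative sketch of the base-rate point in the No-Miracles Argument.
# All rates below are hypothetical choices for illustration only.

def posterior(base_rate, tpr, fpr):
    """P(theory true | evidence), by Bayes' theorem.

    base_rate: prior proportion of true theories
    tpr: P(evidence | true theory)   (true positive rate)
    fpr: P(evidence | false theory)  (false positive rate)
    """
    p_evidence = tpr * base_rate + fpr * (1 - base_rate)
    return tpr * base_rate / p_evidence

# Traditional NMA with a single test ("empirical success"): even a low
# false positive rate yields only a modest posterior when the base rate
# of true theories is low.
single = posterior(base_rate=0.05, tpr=0.9, fpr=0.1)

# Amplified NMtA: five virtues treated as independent tests, so the
# joint likelihoods multiply. The posterior rises sharply even though
# the realist has conceded the same low base rate.
amplified = posterior(base_rate=0.05, tpr=0.9 ** 5, fpr=0.1 ** 5)

print(f"P(true | success alone)    = {single:.3f}")
print(f"P(true | all five virtues) = {amplified:.6f}")
```

On these toy numbers the single-test posterior stays below one half, while conjoining five virtues pushes it close to one; that is the formal sense in which fixing error rates per virtue gives the NMtA more leverage than the traditional NMA.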
4) Raphael Scholl - Realism from a causal point of view: Snow, Koch and von Pettenkofer on Cholera
In current debates about scientific realism, much-deserved attention is paid to the problem of unconceived alternatives, which P. Kyle Stanford has developed in a rich series of detailed and well-chosen case studies.
However, it has not yet been explored how the problem of unconceived alternatives presents itself in sciences
which can be broadly described as causal and mechanistic (for instance, molecular biology). There are reasons to
think that the traditional problem as formulated by Stanford does not present itself: in causal inference, the space of possible hypotheses tends to be exhausted by the contradictories "P is a cause of Q" and "P is not a cause of Q" (a point emphasized by Peter Lipton in his debate with Bas van Fraassen about the argument from the bad lot). While it may be difficult to determine which of these is true, there is no obvious room for unconceived alternatives.
Nevertheless, there is ample room for debate about causal claims. First, we may question whether causal relevance
or salience has been successfully demonstrated (for example if confounding is suspected). Second, we may ask
what causal co-factors C are necessary for a given cause P to exert its effect on Q, and how often these co-factors
are in fact realized. Third, we may debate the existence and relevance of alternative causal pathways promoting or
preventing the occurrence of Q. Fourth, we may define event types at too coarse-grained or too fine-grained levels
of description. Fifth and finally, there may exist unknown potential causes whose causal relevance we have not yet
explored. If this characterization is correct, we would expect it to affect the dynamics of actual scientific debates in
the causal and mechanistic sciences. In a detailed case study of John Snow's, Robert Koch's and Max Joseph von Pettenkofer's work on cholera in the 19th century, I will show that the categories outlined above illuminate large
parts of the actual debates about the causation and mechanism of cholera. I conclude that there are important
parts of science where unconceived alternatives as traditionally conceived are not the primary problem for
scientific realism.
5) Mark Newman - Scientific Realism, the Pessimistic Meta-Induction, and Our Sense of Understanding
In his (2005, 2006, 2011, 2014) Michael Devitt argues that an adequate version of the Pessimistic Meta-Induction must show not only that we have frequently got things wrong in our unobservable posits, but also that, despite methodological improvements, we have not been getting things increasingly right. His view is that we now have much
more sophisticated and rigorous scientific methods than in previous centuries, so appeals to historical errors such
as Phlogiston, Caloric, and the Luminiferous Ether are irrelevant to current optimism about our theories. This
amounts to the claim that the PMI fails as an argument against scientific realism unless we have evidence against
our highly reliable current scientific methods.
J.D. Trout has provided the seed of just such undermining evidence. In his series of papers (2002, 2005, 2007) he
argues that the sense of understanding, which often reflects our feeling of having grasped the underlying causal
nature of some part of the world, has contributed to a long historical train of scientific errors. The sense of
understanding, he argues, is highly unreliable, and yet is at least part of the reason scientists accept theories (he
cites Ptolemy, Galen, and the alchemists). This alone does not provide enough evidence to undermine Devitt's claim, but Mark Newman (2010) has argued that the error of taking the intelligibility of a theory too seriously led directly to the most noted PMI cases: the acceptance of Phlogiston, Caloric, and the Luminiferous Ether. Unless scientists are nowadays ignoring intelligibility as a part of their selection criteria for theories (a dubious claim), then when combined with Trout's thesis, Newman's work shows, contra Devitt, that we do currently have reasons
to think contemporary scientific methodology is suspect. Their work therefore provides a new reason for
thinking the PMI plausible after all.
One might think we ought to recommend scientists cease using their sense of understanding when evaluating
theories. I don't pursue that line, for their sense of understanding may still have epistemic benefit. Instead, I
believe a different account of understanding can be used by the realist to respond to the PMI, and I provide a
theory along those lines. In contrast to the internalist sense of understanding about which Trout sows seeds of
doubt, this is an externalist account, one which avoids the subjective and unreliable phenomenology of
understanding, instead opting for objective, testable evaluative criteria. I argue that when certain understanding
criteria are satisfied by a theory, and combined with independent background evidence for underlying physical
principles of that theory, they are found to track with successful theories in the history of science, and fail to track
those ultimately false theories which fall on the scrap heap of science. This account of understanding therefore
provides a reason to reject the PMI: when scientists have selected theories that they understood in this technical
sense, and they had independent evidence for the underlying physical principles of the theory, they have selected
correctly. I think it plausible to believe our current scientific methods incorporate just such a condition of
understanding on adoption of new theories.
I offer the following externalist account of understanding, which I call the Inferential Model of Scientific
Understanding (IMU):
(IMU): S understands scientific theory T iff S can reliably use principles Pn constitutive of T to make goal-conducive
inferences for each step in a problem-solving cycle which reliably results in solutions to qualitative problems relevant
to that theory.
This definition of understanding a theory demands a fair amount from the scientist, and I think when it is satisfied
we have good reason to be initially optimistic about the theory in question. I argue that past failed theories never
enabled scientists to fully satisfy this condition, though our current theories do so. The key to satisfying this
definition is a scientist's ability to use underlying physical principles to make correct inferences regarding each part
of a specific problem-cycle relevant to the theory. The problem-solving cycle requires satisfaction of four steps: (i)
correctly describing the problem by constructing a mental model; (ii) selecting correct background theoretical
principles relevant to the problem; (iii) applying those background principles to a specific problem case; (iv)
planning the problem-solving sequence from start to finish.
Theories that are intelligible in this way, I argue, possess the property of being reliable indicators of truth when
they also have independent background evidence for those physical constitutive principles. To defend this claim I
make the following argument:
To possess the resources for a scientist to satisfy (IMU) a theory must incorporate correct underlying principles
(principles incorporating properties which explain the behavior of the properties of the problem) which are used
to make correct qualitative inferences about the problem at hand. For instance, we have independent evidence for
the principle of conservation of mechanical energy. On the other hand, although Phlogiston, Caloric, and the Luminiferous Ether had explanatory underlying principles like this (principles which explained why their particles had the properties they did), these principles were not independently confirmed. Contemporary theories we
consider approximately true do have such independently confirmed principles. Thus, we have reason to think this
externalist account of understanding tracks with correct theories and can be used to defend realism against the
PMI.
I close by considering why we need an externalist account of understanding at all: if the internalist sense of understanding can similarly point to constitutive principles which have to possess independent evidence, why bother with externalist understanding? I answer that the sense of understanding, unlike my externalist account,
does not direct us to those physical principles which must be independently confirmed in order to secure theory
justification.
References
Devitt, Michael (2005) "Scientific Realism". In The Oxford Handbook of Contemporary Philosophy, Frank Jackson and Michael Smith, eds. Oxford: Oxford University Press: 767-91.
______. (2006) "The Pessimistic Meta-Induction: A Response to Jacob Busch". Sats: Nordic Journal of Philosophy 7: 127-35.
______. (2011) "Are Unconceived Alternatives a Problem for Scientific Realism?" Journal for General Philosophy of Science 42: 285-93.
______. (2014) "Realism/Anti-Realism". In The Routledge Companion to Philosophy of Science, Martin Curd and Stathis Psillos, eds. New York: Routledge.
Newman, Mark (2010) "Beyond Structural Realism: pluralist criteria for theory evaluation". Synthese 174: 413-443.
Trout, J.D. (2002) "Scientific Explanation and the Sense of Understanding". Philosophy of Science 69: 213-233.
______. (2005) "Paying the Price for a Theory of Explanation". Philosophy of Science 72: 198-208.
______. (2007) "The Psychology of Scientific Explanation". Philosophy Compass 2: 564-91.
7) Jack Ritchie - I could be wrong but it just depends what you mean: explaining the inconclusiveness of the realism-anti-realism debate
We can identify in broad terms two ways of characterising the realism-anti-realism debate in the philosophy of
science. One, made famous by van Fraassen, concerns whether the aim of science is truth or something less
ambitious like empirical adequacy. The other concerns whether we have good reasons to believe in the claims of a
scientific theory that go beyond what has been or can be observed. The first is concerned with a descriptive matter: what is the goal of science? The second deals with a normative matter: what ought we to believe? In this
paper I argue that the second debate, at least in its contemporary form, has run its course.
Realism-anti-realism debates of this second kind focus on two main arguments: the No Miracles Argument and
the Pessimistic Meta-Induction (or sophisticated variations of it, like Stanford's Problem of Unconceived Alternatives).
Realists typically claim we ought to believe our scientific theories because they are empirically successful and the
best explanation of that empirical success is that they are true or approximately true. Anti-realists seek to
undermine this argument by appealing to the history of science. They point to cases of empirically successful
theories in which key theoretical terms such as "ether" failed to refer, thereby undermining claims that such theories
were true or even approximately true. The realist in turn will say that closer examination of these cases provides
evidence of a more sophisticated kind of referential continuity which vindicates realism; and so on. It is this
dialectic which is doomed to be forever inconclusive. The problems with the standard debate can be grouped into
two kinds: epistemological and semantic.
Epistemological problems
All parties to the debate should be fallibilists. They should admit that we might be wrong in unknown ways about
what we believe about the world. Given this is so, what is sometimes called the "prospective challenge" for realism
falls away. If the anti-realist expects that the realists should be able to identify elements within a theory which they
know will be retained whatever the future development of science, then that is a hopeless demand for clairvoyance
and at odds with fallibilism. If all that is meant by the prospective challenge is that realists should be able to make
plausible but fallible claims about which parts of their theories they consider to be best supported by the evidence
and most likely true, then that challenge can easily be met. Scientists and others often have a good idea of which
parts of a theory are most secure. However, they would also admit as good fallibilists that they may be and
probably are wrong in some of the details. So providing case studies in which scientists' best guesses about which aspects of a theory would be retained turned out wrong proves nothing unless you reject fallibilism.
Semantic problems
A presupposition of the debate is that in order to make sense of claims or denials of approximate truth we need to
have a well-defined reference relation (or if you are a structural realist perhaps a well-defined semantic similarity
relation). We need something like this, the thought goes, to be able to make some sense of the idea that current
scientists are talking about roughly the same sort of things as past scientists; and we need to do that in order for it
to be even plausible that past theories are approximately true. A great deal of labour then has gone into the project
of coming up with the correct theory of reference. I suggest there is a common way of understanding this project
which is fundamentally misconceived: it is a mistake to think of theories of reference as providing a set of
naturalistically specifiable conditions which relate words to some aspect of the world. I illustrate the folly of this
approach by appeal to some well-known paradoxes found in the writings of Stephen Stich and Huw Price.
What we are in fact given by various theories of reference or semantic continuity are conditions for interpreting
the past theory in the terms of our current theory. Once we recognise that we are interpreting in this way we can
see that claims and counterclaims about continuity are mediated by non-factual judgements of reasonableness. We
must judge what we think is the most reasonable or the most charitable interpretation of past science. I contend
using the examples of phlogiston and the ether that the facts of these cases underdetermine realist and anti-realist
interpretations. Both are possible and both may be judged reasonable.
The upshot of this is that once the bad epistemology and metasemantics are cleared away from the realism-anti-
realism debate, there is nothing substantive to be argued about. There are realist ways of articulating the history of
science which emphasise continuities; and anti-realist ways which emphasise discontinuities. What we know of
the history of science allows either story.
I conclude by briefly showing how the permissibility of either a realist or an anti-realist interpretation of the
history of science plays a role in van Fraassen's arguments for Constructive Empiricism, particularly in his
characterisation of what Jean Perrin was doing in his famous Brownian motion experiments. I close by arguing that if there is an interesting realism-anti-realism debate, it is of the kind van Fraassen directs us to, and in that debate the history of science will play a quite different role.
8) Dean Peters - Observability, perception and the extended mind
In this paper, I will sketch a general account of perception modelled on the account of cognition provided by the
extended mind thesis (Clark and Chalmers, 1998). This account potentially has many applications, but I focus on
one, namely the notion of observability in the scientific realism debate. Scientific realism states that scientific
theories can make (approximately) true claims about not only the observable world, but also about unobservable
entities and processes. A major competing view is constructive empiricism (van Fraassen, 1980), which states that
we have no warrant for believing claims about unobservables (i.e. those mediated by scientific instruments) and that the goal of science is thus empirical adequacy. I will argue that selective scepticism in respect of
unobservables is untenable, given the theory of perception sketched out.
Churchland (1985) is a forerunner of this approach, arguing that which perceptual capacities we actually happen
to possess is too contingent a matter to be epistemically significant. If humans were, for instance, all born with
microscopes affixed to their left eye, our natural conception of what counts as observable would differ. Van
Fraassen (1980, 1985) replies that what counts as an observable phenomenon is a function of what the epistemic
community is (that "observable" is "observable-to-us"). To the constructive empiricist, Churchland's argument illicitly
presupposes that his hypothetical humanoids are already part of our epistemic community.
The correct response to this, I will argue, is to emphasise that belief in the outputs of our native perceptual capacities also requires justification. Psillos (1996) gestures in this direction when he argues that van Fraassen's
arguments against inference to the best explanation apply just as well to the ampliative inference from empirical
success to empirical adequacy as they do to the inference from success to truth. The deeper point is that inference
to the best explanation is required to warrant any claims about the external world, that is, to warrant treating our
senses as perceptual capacities. Consider the standard sceptical worry, that I, the observer, am a brain in a vat, and
that the objects I am apparently observing do not exist. This worry is undermined by the fact that certain
features of objects are best explained by their actual existence. Russell (1912) emphasises the spatiotemporal
coherence of objects. A cat, for instance, is first observed in one part of the room, then later in another, without
appearing at each intermediate point. Dennett (1992) emphasises that we actively seek out new perceptual
contact with objects. So to simulate the existence of an object would require preparing for any possible interaction
the brain might wish to have with it, resulting in a combinatorial explosion in the number of simulated states
required. Thus, although it is logically possible that our perceptions of objects are illusory, the vat-keeper would
have a far easier time simply providing actual objects!
So, there is no obvious way to argue for the existence of even ordinary observable objects without inference to the
best explanation. This sort of consideration could be wielded as a Psillos-style tu quoque against the constructive
empiricist, attacking at the level of observability rather than empirical adequacy. More interesting, however, is to
offer a positive general account of perceptual capacities, drawing on the extended mind thesis. Clark and
Chalmers argue that a process should be counted as mental if and only if it is functionally integrated into ones
central cognitive processes, i.e. is reliably accessible, has a high bandwidth connection to the centre, etc. Many
(but not all) brain processes meet this standard, and some external processes do as well. For instance, if an
Alzheimer's sufferer uses a notebook to record important facts, Clark and Chalmers claim that the contents of the
notebook should under certain circumstances be considered part of his memory.
Analogously, we might say that something counts as a perceptual capacity if and only if it is functionally integrated
into our other cognitive (epistemic) processes. Importantly, the key markers of functional integration
significantly overlap with those features that lead us to infer the existence of objects. Following Russell's
argument, the output of a perceptual capacity should be coherent, both internally and with respect to the output of
other capacities. We would doubt that we were really detecting a cat if our visual image of it lacked spatiotemporal
coherence, or if we could hear it but were consistently unable to observe it visually. Following Dennett's argument,
a perceptual capacity should be a rich source of information across a wide variety of circumstances, and its output
should depend on where it is directed.
These criteria (coherence, bandwidth, reliability and directability) are not exhaustive. Nevertheless, different
sensors satisfy them to different degrees. Thus, the biological sensors possessed by humans will not necessarily
possess all these features to a greater extent than all artificial sensors. For instance, observations obtained via even
a simple light microscope are significantly richer in information than the output of the vestibular apparatus
(although the latter is usually more reliably available). Thus, to the extent that the outputs of such instruments
satisfy the stated criteria, the objects that they purportedly reveal should be counted as observable.
Van Fraassen might object that, unlike our native capacities, our artificial perceptual capacities are acquired.
However, depriving a newborn animal of vision for a time can render it permanently blind (Wiesel and Hubel,
1964), apparently demonstrating that visual capacity is in fact acquired. Moreover, it is implausible that full-
fledged perceptual capacities would be hardwired, as this would require wiring information to be genetically
encoded. Given faculties of learning or neural plasticity, environmental exposure ensures that perceptual organs
become functionally integrated. Of course, our native perceptual capacities are no doubt acquired by specialised
learning faculties. But it remains to be shown that the acquisition of non-native capacities by means of general-
purpose learning faculties differs in kind (as opposed to simply speed) from this more specialised process. I
conclude that there is no principled epistemic distinction between native and artificial capacities, and that the observable/unobservable distinction is therefore untenable.
make, and what kind of science will I practice if I choose to be a scientific realist? I have a similar hope for my
project to the one that Chakravartty has for his: while not a defence or condemnation of realism on its own, I hope
that knowing the historical and possible future effects of scientific realism on social, political, and scientific
practices will allow people to make a more informed choice when deciding whether to accept or reject it for themselves.
Analysis and Assessment
My analysis is threefold: first, I assess the consequences of realist attitudes in scientific practice through historical
examples, focusing on the contrast between metaphysically inclined and antimetaphysically inclined researchers
in late 19th century electrodynamics (Weber and Helmholtz, respectively). This part of my analysis, following
Robin Hendry (1995 and 2001), is meant to challenge some of Arthur Fine's claims about how choosing a realist
or an anti-realist attitude makes little difference to scientific practice. I depart from many commentators (e.g.
Hendry 1995 and Psillos 2000) in not arguing that realism is the best philosophical framework for scientific
research tout court, but I do argue that there are specific circumstances (e.g. the measuring of natural constants,
property magnitudes, and theoretical parameters) in which a realist outlook seems more fruitful than its
alternatives, even if something similar can be said for its anti-realist rivals in other circumstances (e.g. an anti-
realist empiricist approach seems, historically speaking, to be much more useful when exploring novel phenomena
in the laboratory and developing new theoretical frameworks). Second, I assess the ways that realist attitudes can
resolve certain philosophical issues relating to the sciences, e.g. with respect to scientific revolutions and the
epistemology of the sciences. This part of my analysis is meant to mirror van Fraassen's contention in The
Empirical Stance that an anti-metaphysical attitude can best resolve philosophical issues concerning the nature
and implications of radical theoretical change in science. I argue, on the basis of the now familiar "optimistic
induction" and other work by Chakravartty, Psillos, and others, that scientific realists have several very
perceptive solutions to the various philosophical problems brought up by scientific revolutions. With a consistent
metaphysical backing for the view it would seem that scientific realism, for the most part, faces few philosophical
challenges that appear to be better resolved by one of its alternatives. In my third and final section I assess the
prospects of realist attitudes in developing science and social policy. I argue that the efforts of realists to make
sense of scientific revolutions have generated perspectives on the sciences (e.g. Chakravartty's distinction between
detection and auxiliary properties) that implicitly claim to be capable of making novel predictions about the future
of scientific inquiry. If such perspectives prove consistent with the history of science (and as it stands it seems
like they are), this would give us some ability to predict the course of future scientific change, and that would be
quite useful for various forms of science policy-making. At the same time, I note, overly conservative and
hegemonic views have often been supported by misguided realisms, so care must be taken in allowing realism too
much sway in social policy-making about the sciences.
11) Andrew Nicholson Are there any new directions for scientific realism?
Can the debate between scientific realism and anti-realism be satisfactorily resolved? Over the past thirty years
several authors (e.g. Fine (1986a, 1986b, 1991), Blackburn (2002) and Stein (1989)) have argued for a negative
answer to this question. Instead of continuing to attempt to argue for the endorsement of either
realism or anti-realism, philosophers of science, we are told, should embrace a quietism (Fine, 1986) about such
matters. Although views of this sort are united in this claim, they differ over the issue of just what is wrong with
the realism/anti-realism debate. On the one hand, there are those, like Fine, who claim that there is a principled
distinction to be made between realist and anti-realist interpretations of science. However, the problem is that
this distinction is uninteresting or unimportant (Fine (1986a; 1986b)) or does not admit of a neutral basis for
adjudication (cf. Wylie (1986), Chakravartty (2011)). On the other hand, there are those, like Blackburn and Stein,
who claim that there is no principled distinction here at all. Clearly, the latter view is the more extreme, and the
threat which it poses to the project of exploring what new directions exist for scientific realism is consequently
more severe.
In this paper, I take up the challenge of defending the idea that there is a principled distinction to be had between
realist and anti-realist interpretations of science. Central to this task will be an examination of what I take to be
the most well-developed argument for the contrary thesis, that presented by Blackburn (2002). The central
argument in the explicit case made by Blackburn is to the effect that no coherent, principled distinction can be
made between the realist's notion of belief and the constructive empiricist's notion of immersion (what
Blackburn refers to as 'animation'). I argue that although Blackburn's explicit focus is on the debate between the
scientific realist and the constructive empiricist about the appropriate attitude to take to the theoretical claims of
scientific discourse, his arguments readily generalise to the debate between the scientific realist and anti-realist.
In brief, the anti-realist, in order to adequately accommodate our, at least, instrumental reliance on science must
either endorse a reinterpretation of scientific discourse or endorse the adoption of an epistemic attitude towards
such discourse which falls short of belief (cf. Stanford (2006)). However, it is the latter option which is preferable,
and accommodation of our full instrumental reliance necessitates the endorsement of the adoption of the attitude
of acceptance in such a way that we are animated (in Blackburn's sense) by the relevant theory. Hence the
coherence of the distinction between realist and anti-realist interpretations of science depends upon the
coherence of the distinction between belief and animation, and so Blackburn's argument applies directly to this
more general case.
As a result of this first part of the discussion, I frame the central puzzle which must be resolved if we are to meet
Blackburn's challenge: we must establish a notion of animation which simultaneously satisfies the following four
desiderata: (i) the relevant notion of animation must differ from the notion of belief (where this latter concept is
rendered in a way which is intuitively acceptable and compatible with scientific realism); (ii) it should be capable
of grounding an anti-realist argument to the effect that we should not proceed past animation to belief; (iii) the
notion must be recognisably anti-realist; and (iv) it should accommodate our full instrumental reliance on
science.
In the second part of the paper, I present and assess three approaches one might adopt in establishing the
appropriate distinction between belief and animation. The first approach is based on Stanford's discussion in the
last chapter of his (2006), in particular on his attempt to develop a principled and coherent restricted theoretical
instrumentalism. The second approach aims to give the appropriate account of animation in terms of
contemporary accounts of pretense or make-believe. This approach draws heavily on the philosophical projects
of Walton (1990) and Yablo (1998) as well as recent work by Toon (2012). The third approach attempts to
account for animation by appealing to second-order cognitive attitudes. I argue that the first strategy does not
satisfactorily deal with Blackburn's challenge since it either falls foul of desideratum (i) or desideratum (iv),
above. I then proceed to argue that whilst the second and third approaches are deficient as they stand, with
suitable amendment and combination they lay a potential foundation for a more satisfactory approach.
In the third and final section of the paper, I provide a preliminary sketch of this more satisfactory alternative.
References
Blackburn, S. (2002) Realism: Deconstructing the Debate. Ratio 15, 111-133.
Fine, A. (1986a). The Natural Ontological Attitude in The Shaky Game: Einstein, Realism, and the Quantum
Theory. Chicago: University of Chicago Press.
---------. (1986b) Unnatural Attitudes: Realist and Instrumentalist Attachments to Science. Mind 95, 149-
179.
---------. (1991) Piecemeal Realism. Philosophical Studies 61, 79-96
Stanford, K. (2006). Exceeding Our Grasp: Science, History and the Problem of Unconceived Alternatives. Oxford:
Oxford University Press.
Stein, H. (1989). Yes, but... Some Skeptical Remarks on Realism and Anti-Realism. Dialectica 43, 47-65.
Toon, A. (2012). Models as make-believe: Imagination, fiction and scientific representation. Palgrave Macmillan.
Walton, K. (1990). Mimesis and Make-Believe. Cambridge, Mass.: Harvard University Press.
Wylie, A. (1986). Arguments for Scientific Realism: The Ascending Spiral. American Philosophical Quarterly 23,
287-297.
Yablo, S. (1998). Does Ontology Rest on a Mistake? Proceedings of the Aristotelian Society, Supplementary Volume
72, 229-261.
12) Michael Shaffer Farewell to the Realism/Anti-realism Debate: Practical Realism and Scientific Progress.
Traditionally scientific realism and scientific anti-realism have been regarded as deeply opposed views of the aims
of the sciences. On the one hand, scientific realists of all sorts hold that science aims (in part or in whole) to
discover true (or perhaps merely approximately true) theories. On the other hand, anti-realists deny that science
aims to discover true theories or even approximately true theories. Many anti-realists also contend that the aim of
the sciences is to produce theories that are 'practically useful' in some important sense of that expression. One
prominent reason that anti-realists adopt this stance towards scientific theories is that they believe that there are
good reasons to hold that all (or even just most) scientific theories are (or must be) qualified by idealizations (see
Cartwright 1983 for the most thoroughly worked-out version of this view). So, according to this important line of
anti-realist thinking, anti-realists claim that scientific theories are not, strictly speaking, true. Thus the
idealization argument against scientific realism constitutes a powerful attack on scientific realism (see Shaffer
2012). It is a direct threat to scientific realism because this argument attacks the feasibility of satisfying the
realist's conception of the aim of scientific theorizing. Moreover, this argument is also supposed to support the
contention made by many anti-realists that the real aim of the sciences is only to produce practically useful
theories. This is typically because idealizing inherently involves making simplifications that are motivated by
practical concerns like computability. In other words, according to such anti-realists, we construct idealized (and
hence false) theories because they are practically useful to us and their usefulness is a result of their being simpler
and hence more computationally tractable.
However, the idealization argument crucially depends on the assumption that if scientific theories are qualified by
idealizations, then they are not true (or even approximately true). But, this is by no means an uncontroversial
assumption. For example, recently Shaffer (2012) has argued at great length that idealized scientific theories
ought to be regimented as special sorts of counterfactuals. The antecedents of these counterfactuals are idealizing
assumptions and the consequents are theoretical claims about the behaviors of physical systems. What is then
most important for the issues to be discussed here is that idealizing counterfactuals have perfectly ordinary and
well-understood truth conditions. Even more importantly, many such claims are simply true. So, the key
assumption behind the idealization argument against scientific realism is false. Scientific theories can be true even
if they are qualified by idealizing assumptions, and the anti-realist's contention that idealized theories must be
adopted for merely practical reasons is erroneous.
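Shaffer's regimentation can be sketched schematically as follows; the notation here is an illustrative
reconstruction for the reader's orientation, not necessarily Shaffer's own:

```latex
% Schematic form of an idealized theory regimented as a counterfactual.
% \boxright (from the stmaryrd package) is the Lewis-style counterfactual
% conditional: "if it were the case that ..., it would be the case that ...".
\[
  (I_1 \wedge I_2 \wedge \cdots \wedge I_n) \;\boxright\; T(S)
\]
% I_1, ..., I_n : idealizing assumptions (the antecedent)
% T(S)          : theoretical claim about the behavior of physical system S
```

Because such a counterfactual has ordinary, well-understood truth conditions, it can be true even though its
antecedent idealizations are never actually realized; this is the sense in which idealized theories can be simply
true.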
What this novel response to the idealization argument against scientific realism further suggests about the more
general realism/anti-realism debate is that this long running debate is predicated on a simple but deeply
important misunderstanding. The nature of this error concerns the compatibility of scientific realism and
scientific anti-realism in terms of the aims that they prescribe for scientific theorizing. In brief, the debate is
confused because scientific realism and anti-realism need not be regarded as incompatible, and these two views
of the aims of the sciences simply aren't opposed in the way that the parties to the debate have traditionally
assumed. All that is required to see this is the recognition that there need not be only one aim of scientific
theorizing. If we give up both realist monism and anti-realist monism about the aims of the sciences, we are free to
adopt the view that the sciences aim to produce true (or approximately true) theories and the view that the
sciences also aim to produce theories that are practically useful because they are idealized. We can adopt a hybrid
view of the aims of scientific theorizing that captures the most basic insights of both the realist and the anti-realist.
Let us call this view 'practical realism'; the core insight of this hybrid view is that literal truth and practical
usefulness must be balanced in solving scientific problems. So, while science aims at truth, it also involves the use
of practically motivated idealizations that qualify theories. The adoption of practical realism is a crucially
important step in moving beyond the seemingly intractable stalemate that afflicts the debate concerning realism
and anti-realism. In adopting practical realism we can see that the realism/anti-realism debate simply dissolves.
As a result, the adoption of practical realism allows us to get to the real work of articulating a more realistic
view of theorizing in the sciences, one that acknowledges a complex view of the aims of the sciences. So practical
realism involves the key recognition of the dual aims of scientific theorizing and thus allows us to explore how
this dualistic notion of the aims of science impacts important issues in the philosophy of science like explanation
and confirmation. More specifically, practical realism raises all sorts of interesting questions about scientific
representation, degrees of idealization, scientific progress, etc. In this particular paper a novel concept of scientific
progress consonant with practical realism will be explicated. This notion of scientific progress will be framed in
terms of Shaffer's (2012) contextualist theory of explanation, and some of its most important implications will be
explored; more specifically, the concepts of partial explanation and partial understanding involved in this notion
of progress will be investigated.
Cartwright, N. (1983). How the Laws of Physics Lie. New York: Oxford University Press.
Shaffer, M. (2012). Counterfactuals and Scientific Realism. New York: Palgrave Macmillan.
13) Juan Manuel Vila Pérez A Critique of Scientific Pluralism: The Case for QM
Scientifically speaking, Quantum Mechanics (QM) is the most successful theory ever made. Philosophically
speaking, however, it is the most controversial. Its basic principles seem to contravene our deepest intuitions
about reality, which are most patently exhibited in the metaphysical commitments of Classical Mechanics (CM).
In the last century many attempts to reconcile CM and QM have taken place, like Bohr's Kantian defense of the
priority of classical concepts (Bohr, 1927) or Bohm's search for a classical limit through the quantum potential R
(Bohm, 1952). However, most interpretations suffer from one of two serious difficulties: either they are thought
to be too restrictive and incapable of appreciating the revolutionary features of QM, or else they are thought to be
too implausible given the strange ontological commitments required by the interpretation.
Scientific Pluralism (SP) has become an attractive middle ground between these two poles. A pluralist stance
respects the idiosyncratic features of each theory, while at the same time restricting their ontology to what is
required by the mathematical formalism.
An important historical example of SP in Quantum Physics can be found in Werner Heisenberg's book Physics and
Philosophy (1958). According to Heisenberg, the history of physics is a succession of theories, where each theory is
a closed system [abgeschlossenes System]. A closed system is a system of axioms and definitions 'which can be
expressed consistently by a mathematical scheme' (Heisenberg 1958, p. 92). In each system, the concepts are
represented by symbols which in turn are related by a set of equations, and the resulting theory is thought to be
an 'eternal structure of nature, depending neither on a particular space nor on particular time' (ibid., p. 93). Given
this systemic closure, each theory generates its own concept of reality, whose validity is not restricted by other
theories.
After Heisenberg, many sophisticated versions of SP have recently been proposed (Krause 2000, Longino 2002,
Chang 2007). Most of these versions engage critically with Heisenberg's notion of a closed system. However, they
all share a common assumption which stems directly from Heisenberg's treatment of physical theories and has
barely been discussed. I will call this assumption the Comprehension Thesis (CT). According to the
Comprehension Thesis (CT), to understand or comprehend something is to relate a multiplicity of elements
through a finite number of non-arbitrary relations. The local version of this thesis is that each theory must be
internally comprehended. This is typically achieved through the fixation of the referents of some of the theory's
symbols. The symbols are then related to non-symbolic items through what I call the Principle of Referential
Persistence (PRP): a symbol σ persistently refers to an item of the world E iff σ refers to E in every occurrence of
σ. When each symbol becomes attached to its referent, the resulting articulation constitutes the ontology of each
theory.
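PRP can be rendered a little more explicitly as a quantified biconditional; the formalization below is an
illustrative reconstruction, and the predicate names (Occ, Ref) are mine, not the author's:

```latex
% Principle of Referential Persistence (PRP), illustrative rendering:
% sigma persistently refers to worldly item E iff sigma refers to E
% at every occurrence o of sigma.
\[
  \mathrm{PersRef}(\sigma, E)
  \;\Longleftrightarrow\;
  \forall o \,\bigl( \mathrm{Occ}(\sigma, o) \rightarrow \mathrm{Ref}(\sigma, o, E) \bigr)
\]
```

On this rendering it is easier to see why PRP resists merely local application: if the same symbol occurs in two
theories, persistence forces a single referent across both, which is what drives the paper's worry about a
reductive Theory of Everything.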
Although defenders of SP typically subscribe to CT and PRP, the Pluralist denies any global application of CT, since
this would imply an inter-theoretical reduction of the many languages, methods and metaphysical commitments
into one total theory (or ToE), and the rejection of such a theory is precisely the starting point of any scientific
pluralism.
The aim of this paper is to show how this selected use of CT is unwarranted. Since SP lacks any alternative
conception of comprehension for global cases, any restriction of CT to the local case shows itself to be arbitrary.
As the argument develops, it will be suggested that the main reason for this internal weakness is that SP upholds,
along with Scientific Realism, a representational conception of scientific theories according to which a theory is a
description of the physical reality. This commitment is obviously manifested in the maintenance of PRP. It will be
argued that the central problem with Scientific Pluralism is that its restriction of CT is incompatible with its own
representational conception of scientific theories.
So the Pluralist must choose: either she abandons CT altogether or she fully applies it. If she chooses the former
option, it is impossible to distinguish scientific pluralism from an instrumentalist account of scientific theories,
since the problem of comprehending those theories as being about something would be completely obliterated. If,
on the other hand, she chooses the latter alternative, then she is confronted with a reductive account of physical
theories, since her holding of PRP makes it impossible to avoid a Theory of Everything. As a conclusion, I suggest
that the only viable way to preserve scientific realism and avoid a reductive account of global comprehension is to
abandon PRP in favor of a more dynamic account of the way in which scientific theories relate to the physical
world.
millennium, the positional system of Arabic numeration together with its algorithms for performing calculations
had been developed as a form of paper-and-pencil reasoning to rival calculating on a counting board. And Viète
in the sixteenth century devised a symbolic notation suitable for algebraic manipulations. But this notation, like
Arabic numeration, was taken to have only an instrumental value. Until the work of Descartes, the basic
intellectual orientation remained that of the ancient Greeks; it was Descartes who learned to read the symbolism
of arithmetic and algebra as a fully meaningful language, albeit one of a radically new sort. And he did so through a
metamorphosis in his most basic understanding of space and spatial objects. Hitherto conceived as the relative
locations of objects, space was now, through a kind of figure/ground gestalt switch, to be conceived as an
antecedently given whole within which objects might (but need not) be placed, each independent of all the others.
Mathematics, from being about objects, was now to be conceived as a science of relations among arbitrary
quantities as expressed in equations. These equations made possible in turn the modern notion of a law of nature,
in particular, Newton's laws of motion.
Early modern Newtonian science offered a view of reality that was to replace our now-seemingly naïve everyday,
sensory view of things. That naïve view is wrong, a mere appearance of things to creatures like us; the view
afforded by the exact sciences is right, a picture of how things actually are. The kinetic theory of gases provides a
nice illustration of the idea: what appears to us as the heat, pressure, and expansion of a gas really is nothing more
than the increasing or decreasing motions of tiny particles. Kant then adds a further twist to this: our knowledge of
mathematical and physical reality is ineluctably shaped by the forms of our sensibility and understanding. We
cannot, even through our most successful scientific theories, know things as they are in themselves. Then, in the
nineteenth century, mathematical practice was again transformed to become, as it remains today, a practice of
deductive reasoning from concepts. And this transformation, by contrast with that of the seventeenth century,
constitutes a rebirth of mathematics as a whole. The aspirations of early modernity have finally been fully realized:
mathematics is revealed to be, has become, a purely rational enterprise. Although reason in its first appearance,
say, in ancient Greek mathematical practice, cannot constitute a power of knowing (as, for example, perception is
a power of knowing: we can, in some instances, just see how things are), reason can, over the course of history,
through radical transformations in our forms of mathematical practice, become such a power. Astonishing though
it must at first seem, deductive reasoning from concepts can extend one's knowledge in contemporary
mathematical practice.
This new, purely rational and conceptual mathematics has enabled in turn a new form of fundamental physics that
does not merely use mathematics as, say, Newton's physics does, but instead simply is mathematics. There is no
physical correlate. And because this mathematics is purely rational, because it has been purged of all sensory
content, it is correct to say that the aspects of reality it reveals (in special and general relativity and in quantum
mechanics) are maximally objective, the same for all rational beings. This, then, is a new form of scientific realism,
one that is structuralist without being quite what is generally meant by structural realism. What it shows is that far
from being incompatible with scientific realism, revolutions in the practice of science can be constitutive of
scientific realism.
15) Nora Berenstain Scientific Realism and the Commitment to Modality, Mathematics, and Metaphysical
Dependence
I show why the scientific realist must be committed to an objective, metaphysically robust account of the modal
structure of the physical world. I argue against Humean regularity theory on the grounds that it is incompatible
with
scientific realism and fails to be naturalistically motivated. I specifically address the Mill-Ramsey-Lewis view,
which states that laws of nature are those regularities that feature as axioms or theorems in the best deductive
system describing our universe. This view, also known as sophisticated Humeanism, is broadly incompatible with
scientific realism as it can offer no explanation of the success of inductive inference. The Humean about laws of
nature denies the existence of natural necessity. Since the Humean cannot explain why the regularities in our
world continue to hold from moment to moment, she cannot explain why inductive inference should be a
successful method of investigation. This does not sit well with scientific realism. One of the
driving motivations behind scientific realism is the thought that there must be an explanation for the success of
science. The use of induction is a cornerstone of scientific theorizing and investigation. If the Humean cannot offer
an explanation of the success of induction, neither can she offer an explanation of the success of science. Thus the
scientific realist must embrace a robust view of physical modality that involves natural necessity.
Causality, equilibrium, laws of nature, and probability are four notions that feature prominently in scientific
explanation, and each one is prima facie a modal notion. These modal properties are necessary to make sense of
our best scientific theories, and scientific realists cannot do without them. Structural realists in the vein of
Ladyman and Ross [2007] sometimes suggest that this modal structure is primitive. I offer a new account in which
the modal structure of the physical world is metaphysically dependent upon mathematical structure.
The argument proceeds by way of analogy between the no-miracles argument for scientific realism about
unobservable entities and the indispensability argument for realism about mathematical entities. The no-miracles
argument shows that we must be committed to unobservable or theoretical entities if we are to account for
science's ability to explain and predict novel phenomena. Colyvan's [2001] indispensability argument and cases
supporting it (such as Baker [2005]) show that facts about mathematical structures and relations can also explain
and predict features of the physical world. Just as no-miracles is taken to be an argument not just for theoretical
entities themselves but for a dependence relation (usually causal) between the theoretical entities and the
observable phenomena they explain, the indispensability argument must similarly be taken to be an argument for
a dependence relation between explanans and explanandum. This relation between explanans and explanandum is
what I take to be the relation of metaphysical dependence.
I argue that the modal structure of the physical world is derived from mathematical structure. The modal
properties of a physical system, such as limits on what values its physical quantities can take, derive from the
underlying mathematical structure that the system instantiates or approximates. Rather than taking mathematics
to merely usefully model physical systems, we should take mathematical structures to determine the modal
properties of these systems. In other words, the modal structure of a physical system is metaphysically dependent
on the system's underlying mathematical structure.
The metaphysical-dependence view of physical modality paves the way for a unified account of modal and
mathematical epistemology. It illuminates the nature of necessity in the natural world. And it accounts for the
incredibly successful applications of mathematics to empirical phenomena when it comes to explanation and novel
prediction. Further, once we understand that modal physical properties are grounded in properties of
mathematical structures, the enormous usefulness of mathematics in the natural sciences no longer borders on the
mysterious. Thus, the view provides a natural answer to the applicability problem. I show this account of the
applicability of mathematics to empirical science to be superior to those put forth by Pincock [2004] and Bueno
and Colyvan [2011].
16) John Collier Information Can Preserve Structure across Scientific Revolutions
Thomas Kuhn and Paul Feyerabend introduced the issue of semantic incommensurability across the major
theoretical changes that we call scientific revolutions, though Hanson originated the problem. Feyerabend
recognized that the
problem of semantic comparability arose because of problems in empiricism itself. I argue that the problem arises
from two widely held assumptions. The first is Peirce's criterion of meaning according to which any difference in
meaning must make a difference to possible experience. This is a sort of positivism, but it is not verificationist. The
second assumption is the verificationist view that the meaning of any statement is given by the conditions under
which it can be taken to be verified. Together these assumptions entail the infamous Quine-Duhem Thesis that any
two theories have extensions that are equally compatible with the evidence. This leads less directly to Kuhn's
Incommensurability Thesis, that two theories can be both incompatible and semantically incommensurate,
notoriously across major "scientific revolutions", undermining the idea of cumulative progress in science. Kuhn's
own position is notoriously ambiguous, supporting both antirealism and what might be called sequential idealism.
In "Second Thoughts on Paradigms" he clarified his ideas and located the problem in incompatible classifications
that have no clear common ground. The problem for the progressive realist, then, is to find some way to establish a
common ground for comparison of classes.
One of the more promising attempts at resolution is the Structuralist Approach to Theories, in which theories are
model theoretic structures isomorphic to parts of the world. It divides theories into the core theory, a set of models
with the laws dropped out but retaining the classes, and a set of observational models without any of the
apparatus of the theory. Unfortunately the approach to intertheoretic reduction advocated was shown fairly early
to permit incommensurability because of the indeterminacy of isomorphism across models.
Further restrictions are required. I will argue that a resolution using the theory of Information Flow developed by
Jon Barwise and Jerry Seligman can provide the extra restrictions, allowing even incommensurate theories to
share evidence. A consequence of this perspective is that the meaning issue is a red herring. Another is the
rejection of verificationism, which forces the meaning issue to the fore, as noted above. Kuhn argued in his later
work that the problem was due to incommensurable classifications across different theoretical contexts, with no
common context to provide a common semantic ground. If we assume that both an earlier theory and a later one,
or two competing theories in general, share a common basis of observational instances, then the problem is that
the two theories classify the common instances differently. Barwise and Seligman's approach to information flow
assumes that we have two classifications of tokens (instances) which bear a relation that has the characteristics of
what they call an infomorphism. If an infomorphism holds, then we can talk of an information flow from one
classification to the other (though the reverse need not be true). An infomorphism is a pair f = (f↑, f↓) of
functions between two classifications A and C: one, f↑, from the set of objects (types) used to classify A to the set
of objects used to classify C, and the other, f↓, from the tokens of C to the tokens of A, such that the following
biconditional holds for all tokens c of C and all types α of A: f↓(c) ⊨_A α if and only if c ⊨_C f↑(α). The
biconditional is called the fundamental property of infomorphisms. Here ⊨_A is the relation by which the types in
A classify the instances (tokens) of A. The problem of information flow across theoretical models through their
empirical
instances, I will argue, is exactly the problem of semantically comparing the theories, based on Kuhn's idea that the problem is one of classification. The fact that the two theories to be compared have the same empirical instances (though perhaps under different classifications and thus names) helps considerably, but it does not solve the problem of intertheoretic semantic comparison.
I will set some desiderata for completing the requirements for an infomorphism across theoretical models, and I
will argue that they can be satisfied. I will further argue that unless they are satisfied, incommensurability is a very
real phenomenon that cannot be resolved by strictly empiricist means. My approach is in the spirit of signs in the
semiotics of C. S. Peirce.
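[Editorial illustration, not part of the abstract: the fundamental property of an infomorphism described above can be checked by brute force over a pair of finite classifications. The toy data structures and names below are hypothetical, chosen only to make the biconditional concrete.]

```python
# Sketch of Barwise & Seligman's infomorphism condition over toy
# finite classifications. A classification is a dict with 'tokens',
# 'types', and 'rel' (a set of (token, type) pairs recording which
# tokens are classified under which types).

def is_infomorphism(A, C, f_up, f_down):
    """Check the fundamental property of infomorphisms:
    f_down(c) is of type a in A  iff  c is of type f_up(a) in C,
    for every token c of C and every type a of A.

    f_up:   maps each type of A to a type of C.
    f_down: maps each token of C to a token of A.
    """
    return all(
        ((f_down[c], a) in A['rel']) == ((c, f_up[a]) in C['rel'])
        for c in C['tokens']
        for a in A['types']
    )

# Toy example: the "same" observations classified under two
# different type vocabularies (two theoretical contexts).
A = {'tokens': {'obs1', 'obs2'},
     'types': {'hot', 'cold'},
     'rel': {('obs1', 'hot'), ('obs2', 'cold')}}

C = {'tokens': {'e1', 'e2'},
     'types': {'high-energy', 'low-energy'},
     'rel': {('e1', 'high-energy'), ('e2', 'low-energy')}}

f_up = {'hot': 'high-energy', 'cold': 'low-energy'}
f_down = {'e1': 'obs1', 'e2': 'obs2'}

print(is_infomorphism(A, C, f_up, f_down))  # True for this toy pair
```

If either map misaligns tokens and types (say, f_down sends 'e1' to 'obs2'), the biconditional fails for some pair and the check returns False, which is the sense in which the infomorphism condition constrains how two classifications may share information.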
19) Gauvain Leconte Predictive success, partial truth and skeptical realism
A common defense of scientific realism since Musgrave's and Leplin's works (Musgrave 1988; Leplin 1997) is to argue that the truth of our scientific theories is the best explanation of their predictive success, i.e. their capacity to make novel predictions. Yet, in order to resist Laudan's pessimistic induction (Laudan 1981), this defense must also explain why past theories enjoyed predictive success but are now considered false. Most scientific realists, such as Worrall (Worrall 1989a, 120), Kitcher (Kitcher 1993, 140), Psillos (Psillos 1999, 108) and Chakravartty (Chakravartty 2007, 45), reply that these past theories were neither completely true nor completely false but approximately or partially true, and that only their true aspects are still part of our current scientific theories.
Yet, for this reply to be effective, realists must give an independent criterion or procedure for sorting out the true parts of a theory from the others; otherwise the notion of partial truth is just an ad hoc evasion of the pessimistic induction. Furthermore, the different versions of scientific realism, such as Worrall's structural realism, Psillos's orthodox scientific realism, or Chakravartty's semirealism, differ precisely on which parts of theories must be considered ontologically committing. The elaboration of such a procedure is therefore crucial not only to defend scientific realism, but also to choose among the many versions of this position.
This procedure is often based on an indispensability argument: if the truth of a theory is the best explanation of its predictive success, the parts of a theory which are indispensable to its predictive success must be true. While many recent debates and criticisms of scientific realism have focused on the possibility of discriminating between false and approximately true theories on the basis of their predictive success (Stanford 2000; Held 2011; Cevolani and Tambolo 2013), little has been said about the possibility of using predictive success as a criterion to identify, inside partially true theories, those parts worthy of belief.
In my paper, I present two objections to the indispensability argument and to this use of predictive success. These objections come from an examination of the derivation of novel predictions from a theory and rely on the study of a paradigmatic case: the prediction, from Fresnel's wave theory of light, of a white spot at the center of the shadow of a circular object (Worrall 1989b; Psillos 1995).
The first objection is that the indispensability argument is too liberal. The prediction of the white spot, as presented by Fresnel in one of the appendices to his Mémoire of 1819 (Fresnel 1866, 365–372), involves several false assumptions and idealizations with no physical meaning, such as the hypothesis of an aperture of infinite radius or of the absolute velocity of the particles of ether. Many authors have underlined the role of fictions in explanation and explanatory models (Batterman 2001; Cartwright 2004; Suárez 2008; Mäki 2011; Bokulich 2012). The white spot example suggests that fictions are equally important in deriving novel predictions and building predictive models. Thus, if hypotheses which are clearly idealizations or false simplifications are necessary to the derivation of novel predictions, then false parts of theories are also indispensable to their predictive success.
This objection may be avoided by drawing a distinction between the central and auxiliary hypotheses of a theory. However, I show that if central hypotheses are defined only as those which were maintained in later theories, this distinction is guilty of post hoc rationalization and is of little use in determining which aspects of our current theories are true. But if we use another distinction between central and auxiliary hypotheses, we are led back to the initial problem of distinguishing between the different aspects of a theory, and we fall into a vicious circle.
The second objection is that it is often possible to predict the same phenomena by different predictive processes, which do not use the same parts of a given theory. The prediction of the white spot was first derived by Poisson from Fresnel's integrals; but Fresnel, in his appendix, offers "a simpler solution [to the problem of the shadow of a circular object] without using the integrals I have used in the preceding Mémoire to compute the other phenomena of diffraction" (Fresnel 1866, 365). In other words, there are different ways to predict a phenomenon within the same theory, and the set of elements indispensable to predictive success is not uniquely determined.
I show that one cannot argue that we are ontologically committed only to the common elements used to make the two predictions of the same phenomenon, because these elements were not retained in later theories such as Maxwell's electromagnetic theory. Moreover, these elements constitute a very small part of the overall theory: a realist position based on them would be so thin that it would be even more deflationist than Saatsi's minimal explanatory realism (Saatsi 2005).
In the last part of the paper, I give other examples, from various fields of the empirical sciences, of predictions of the same phenomenon which use different parts of the same theory. I claim that this fact is compatible with (and makes a good case for) skeptical realism. For the skeptical realist, scientific success inclines us to believe that a theory is partially true, but we cannot discriminate between the parts of this theory which are indeed true and those which are merely useful fictions. In other words, predictive success may tell us that it is very probable that some parts of our scientific theories are true, but, like a Mafioso arrested by the police, it remains silent on the names of these parts.
References
Batterman, Robert. 2001. The Devil in the Details. Oxford University Press.
Bokulich, Alisa. 2012. "Distinguishing Explanatory from Nonexplanatory Fictions." Philosophy of Science 79 (5): 725–37.
Cartwright, Nancy. 2004. "From Causation to Explanation and Back." In The Future for Philosophy, 230–45. Oxford University Press.
Cevolani, Gustavo, and Luca Tambolo. 2013. "Truth May Not Explain Predictive Success, but Truthlikeness Does." Studies in History and Philosophy of Science Part A.
Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge University Press.
Fresnel, Augustin Jean. 1866. Œuvres complètes d'Augustin Fresnel. Edited by Henri Hureau de Sénarmont and Émile Verdet. Paris: Imprimerie Impériale.
Held, C. 2011. "Truth Does Not Explain Predictive Success." Analysis 71 (2): 232–34.
Kitcher, Philip. 1993. The Advancement of Science: Science without Legend, Objectivity without Illusions. Oxford University Press.
Laudan, Larry. 1981. "A Confutation of Convergent Realism." Philosophy of Science 48 (1): 19–49.
Leplin, Jarrett. 1997. A Novel Defense of Scientific Realism. Oxford University Press.
Mäki, Uskali. 2011. "The Truth of False Idealizations in Modeling." In Models, Simulations, and Representations. Routledge.
Musgrave, Alan. 1988. "The Ultimate Argument for Scientific Realism." In Relativism and Realism in Science, 229–52. Springer.
Psillos, Stathis. 1995. "Is Structural Realism the Best of Both Worlds?" Dialectica 49 (1): 15–46.
Psillos, Stathis. 1999. Scientific Realism: How Science Tracks Truth. Routledge.
Saatsi, Juha. 2005. "Reconsidering the Fresnel–Maxwell Theory Shift: How the Realist Can Have Her Cake and Eat It Too." Studies in History and Philosophy of Science Part A 36 (3): 509–38.
Stanford, P. Kyle. 2000. "An Antirealist Explanation of the Success of Science." Philosophy of Science 67 (2): 266–84.
Suárez, Mauricio (ed.). 2008. Fictions in Science: Philosophical Essays on Modeling and Idealization. Routledge.
Worrall, John. 1989a. "Structural Realism: The Best of Both Worlds?" Dialectica 43 (1–2): 99–124.
Worrall, John. 1989b. "Fresnel, Poisson and the White Spot: The Role of Successful Predictions in the Acceptance of Scientific Theories." In The Uses of Experiment: Studies in the Natural Sciences, edited by David Gooding, Trevor Pinch, and Simon Schaffer, 135–57. Cambridge: Cambridge University Press.
20) Sreekumar Jayadevan Does History of Science Underdetermine the Scientific Realism Debate? A
Metaphilosophical Perspective
It is often argued that historical evidence intellectually compels us to organize our philosophical temperament against scientific realism. This issue has been one of the subject matters of the scientific realism debate for more than two decades. There has been an outpouring of historical studies in recent years as different factions have developed their own explanations of what is retained across theory-change. I evaluate the development of the scientific realism debate over the last two decades from a metaphilosophical perspective. I argue, along the lines of Juha Saatsi (2011), that current explorations in the history of science are not enough to vindicate any single position. I then build upon this claim to argue that there is a sense in which we may declare that the history of science underdetermines the scientific realism debate. This is because, in the debate, philosophical positions like structural realism and scientific realism couch historical case studies in narratives of their own. These narratives appear to lend support to the respective positions in the debate. In order to show this, I first explore the ways in which Stathis Psillos (1999) and James Ladyman (2011) interpret the Stahl-Lavoisier episode of eighteenth-century chemistry. Second, I investigate the attempts of Anjan Chakravartty (2007) and John Worrall (1989) to develop their own unique readings of the Fresnel-Maxwell episode in nineteenth-century optics. I show from the analysis of these two episodes that:
The all-inclusive nature of interpretations of certain notions ingrained in the debate (e.g. 'structure') often affects the evenness of philosophical positions. Notions like abstract/concrete structure and entities, which are the backbones of many realist and quasi-realist positions, get loosely interpreted over the history of science.
Scientific realism (the version defended by Psillos) works well mostly as a position about current science, consisting of mature and predictively successful theories. Scientific realism does not possess the resources to pinpoint the truth-bearing constituents of past theories, even with Psillos's naturalistic approach of leaving the task to current practicing scientists.
Disparate historical episodes do not uniformly support any single philosophical position in the debate. That is, history of science as a whole does not reflect a uniform epistemic attitude.
Therefore, the task at hand does not end here. The lack of sufficient historical evidence in favor of any position compels us to think against a uniform epistemic attitude across science. We may entertain the view that different philosophical ideals apply in different historical phases, a hint found in the works of David Papineau (1996), Juha Saatsi (2011) and Uskali Mäki (2005). I introduce the notion of an epistemic indicator, which is a domain-specific virtue giving warrant to belief in individual philosophical positions. Examples include notions like intervention and detection in the case of early nineteenth-century physics (favoring entity realism and semirealism) and abstract structural retention in nineteenth-century optics (favoring epistemic structural realism). These domain-specific virtues are epistemic indicators for particular historical phases. The epistemic indicator drives different epistemic attitudes conducive to different phases of science. Revealing these epistemic indicators in historical cases is the key to shaping a non-uniform, pluralistic epistemic attitude to past and present science. Thus, I conclude by arguing that a metaphilosophical angle on the scientific realism debate directs us to be pluralists in our philosophical temperament about scientific knowledge. This pluralism can be asserted by combing through the history of science, where unique epistemic indicators are at play lending support to different positions.
There are four parts to this paper. In part one, I outline the historical trajectory of metaphilosophical perspectives in the debate, starting from the work of Alison Wylie (1986). I also argue why the genre of metaphilosophy within general philosophy of science is pivotal in evaluating the overall progress of the debate in the last two decades. In part two, I elaborate some of the recent attempts by thinkers who interpret specific phases of the history of science in favor of their positions (especially Psillos, Ladyman, Worrall and Chakravartty). In part three, the inadequacy of historical evidence in lending support to individual positions, as well as the all-inclusive nature of interpretations of philosophical notions, are scrutinized. Here, I elaborate the reasons why we should think that the history of science underdetermines the scientific realism debate. In part four, I introduce the notion of epistemic indicators, with which a pluralistic epistemic attitude toward scientific knowledge can be endorsed.
Selected Bibliography
Chakravartty, Anjan. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge: Cambridge University Press.
Ladyman, James. (2011) 'Structural Realism versus Standard Scientific Realism: The Case of Phlogiston and Dephlogisticated Air.' Synthese 180(2): 87–101.
Mäki, Uskali. (2005) 'Reglobalizing Realism by Going Local, or (How) Should Our Formulations of Scientific Realism Be Informed about the Sciences?' Erkenntnis 63: 231–251.
Papineau, David. (1996) 'Introduction.' In Papineau, D. (ed.), The Philosophy of Science, chapter 1. London: Oxford University Press, 1–21.
Psillos, Stathis. (1999) Scientific Realism: How Science Tracks Truth. New York: Routledge.
Saatsi, Juha. (2011) 'Scientific Realism and Historical Evidence: Shortcomings of the Current State of Debate.' EPSA Proceedings, Vol. 1. Springer, 329–340.
Worrall, John. (1989) 'Structural Realism: The Best of Both Worlds?' Dialectica 43: 99–124.
Wylie, Alison. (1986) 'Arguments for Scientific Realism: The Ascending Spiral.' American Philosophical Quarterly 23: 287–298.
and the plausibility of his interpretations thereof. However, the theory of what truth is seems to have been least successfully revised in the light of Laudan's critique, although his critique of the theory of truth in use by scientific realists merely stated that "the notion of approximate truth is presently too vague to permit one to judge whether a theory consisting entirely of approximately true laws would be empirically successful" (Laudan 1981: 47).
This lack of fundamental revision is surprising in light of the central role of the concept of truth in the claims made by scientific realists. On the no-miracles account of realism, truth guarantees and explains the success of science, or alternatively, the success of science can only be understood in the light of its truth. My central claim is that [i] the current general scientific realist understanding of truth is incomplete and obscures core features of how science functions as part of how humans cognitively engage with the world, and [ii] this view of truth is out of sync with a number of newer and older theories of truth developed recently, e.g. in the works of Davidson, Sher, Lynch, and Nozick. In the proposed paper I will present a new theory of truth more appropriate and congenial to the general model of science found in scientific realism.
An alternative theory of truth must be developed in very specific ways. Both Crispin Wright and Michael Lynch emphasize that a theory of truth must use "truisms" (Lynch) or "platitudes" (Wright) about truth as fundamental building blocks to guide the development of a theory. Although such a starting point is important, more must be done. The strategy of reflective equilibrium, used by the political philosopher John Rawls to design his high-impact theory of justice, seems particularly apt in this case. The general intuitions [I prefer this word to "truisms" or "platitudes"] about truth, and the best philosophical explanations thereof, must be worked into a new theory, and then a to-and-fro dialogue between intuitions and theory must trim and prioritise elements in the theory to ensure maximum descriptive and explanatory power over those features of human life deeply influenced by the concept of truth. In addition, the theory must be tested by means of various key examples from the history of science to examine its explanatory value and problem-solving ability.
In developing my theory of truth, I use tools and building blocks emerging from recent philosophical work that can enable us to design a more satisfying theory of truth with greater explanatory power. Briefly, these building blocks include the following. The first building block is to be clear about what truth is about. For Donald Davidson, a theory of truth deals with the utterances of language speakers, i.e. what they say and write [Davidson 1990: 309]. Gila Sher judges the main issue of truth to be our disposition to question whether things are as our thoughts say they are (26). For Davidson, the point of having a concept like truth is that it forms an essential part of the scheme we all necessarily employ for understanding, criticizing, explaining, and predicting thought and action [Davidson 1990: 282]. In this context truth functions as a normative concept that serves as a fundamental standard of thought (Sher).
Michael Lynch suggests more building blocks when he sets out three truisms about truth that ought to guide our theorising, i.e. objectivity [the belief that p is true if, and only if, with respect to the belief that p, things are as they are believed to be] (70); truth as a norm of belief [it is prima facie correct to believe that p if and only if the proposition that p is true]; and truth as the end of inquiry [other things being equal, true beliefs are a worthy goal of inquiry]. Another core requirement of a theory of truth is that it must tell us how truth is manifested in the different domains of our cognitive life (19). Sher argues that these aspects of our cognitive lives should also guide the development of a theory of truth: [i] the complexity of the world; [ii] humans' ambitious project of theoretical knowledge of the world; [iii] the severe limitations of human cognitive capacities; and [iv] the considerable intricacies of human cognitive capacities. She argues for a theory that can accommodate the fact that we use a variety of routes to reach the world cognitively, and thus claims that there are multiple routes of correspondence between true cognitions and reality [6].
Sher's defence of a composite correspondence theory of truth, one that can explain the substantial correspondence (of one kind or another) between correct cognition and reality, shows the direction one might want to go. In addition, her neo-Quinean model of knowledge, which re-interprets and balances the analytic-synthetic distinction and gives new content to the centre-periphery distinction, might become a fundamental building block as well.
Lynch hints at another building block for a theory of truth when he calls truth a functional property of sentences that supervenes on a distinct kind of properties and thus is multiply realizable (2009: 69). For him this means that atomic propositions are true when they have the distinct further property that plays the truth role, i.e. manifests truth, for the domain of inquiry to which they belong (77). He accommodates truth as a concept with a single meaning through the idea of a single property being manifested in this way, and views truth as plural in the sense that different properties may manifest truth in distinct domains of inquiry (78).
Robert Nozick offers building blocks in favour of a more epistemic theory of truth. His theory implies that truth is a property correct at a specific time but open to later revision. Truth is thus tentative, and Nozick accordingly rejects a timeless idea of truth "such that a fully specified proposition would have a fixed and unvarying truth value" [Nozick 2001: 27]. Nozick does not presume that any proposition is wholly true, i.e. either a flat-out success or a total failure, as he judges beliefs to have differing degrees of accuracy [Nozick 2001: 47].
In my paper I present a new theory of truth by engaging philosophers like Davidson, Sher, Lynch and Nozick, and scientific realists like Psillos, Chakravartty, and Devitt. This comprehensive theory integrates core insights to explain in much finer detail how truth functions as a dominating factor in science.
The aim of SBT is to provide a general framework for structuralists with which to define individual objects within
the boundaries of OSR that helps to clarify certain obscurities of the structuralist ontology. First, SBT can help
dispel some worries about the conceptual coherence of OSR. By analyzing structures in terms of a well-understood
metaphysical notion, immanent universals, SBT provides a clear notion of object that may help to elucidate how to
understand the structuralist framework as a whole. Further, by elucidating the ontological dependence relation between objects and structures, SBT clarifies why objects as relata are usually considered by OSRists to be mere heuristic tools that are ontologically irrelevant. Finally, SBT can shed some light on the explanatory challenge
by providing an account of objects that is able to ground our explanatory practices. By providing a conceptually
useful criterion with which to individuate objects, SBT allows OSR to appeal to objects and causal relations as
heuristic devices used for explanatory purposes. Although objects are not ontologically fundamental entities, they
do correspond to real aspects of reality: the mereological sum of relations they are composed of. SBT captures the
ontological insights of OSR in a conceptual framework constituted by well-understood and precise metaphysical
concepts, thereby facilitating the dialogue between OSR and other ontological frameworks and making the radical
metaphysical revision of OSR more palatable.
taken seriously, but I'll argue that it gestures towards the kinds of scientific realism that are viable rather than presenting a decisive obstacle to scientific realism in all its forms.
Today's scientific realists are fallibilists. Fallibilism is consistent with the idea that science is self-correcting and that our theories are becoming more truth-like. A realist might therefore claim only that science becomes more truth-like as we unearth and remove errors, although now the role of success in defending realism becomes more obscure. More positive realist theses are also worth consideration, however. Fallibilism about scientific knowledge can motivate the idea that scientific success is best understood comparatively: theories are not successful or unsuccessful simpliciter, but more or less successful than available alternatives. Coupling a comparative notion of success with the selective realist strategy suggests we should be particularly interested in those novel insights that have induced scientific progress. If comparative empirical success is achieved through the introduction of more truth-like ideas, then we might anticipate the retention of those ideas, or at least of something closely approximating them. There are further worries to be confronted here, but the problem of whiggish history is one I'll argue we can overcome. Ultimately, the problem of whiggish history need not extend to a complete prohibition on current understanding, although there are important lessons to be learned about how history can inform the realism debate and what kinds of selective realist theses might emerge.
24) Christian Carman & José Díez Launching Ptolemy to the Scientific Realism Debate: Did Ptolemy Make Novel and Successful Predictions?
Scientific realism (SR) about unobservables claims that the non-observational content of our successful/justified empirical theories is true, or approximately true. As is well known, its best argument is a kind of abduction or inference to the best explanation, the so-called No-Miracles Argument (NMA): if a theory is successful in observational predictions that it makes using non-observational content/posits, then, were such non-observational content not to correspond at least approximately to the world out there, its predictive success would be unexplainable/incomprehensible/miraculous. In short: SR provides the best explanation for the empirical success of predictively successful theories. Empiricists like van Fraassen may argue that the NMA is question-begging, or simply has false premises, for there is another (at least equally good) explanation of empirical success, namely empirical adequacy. Yet most realists feel comfortable replying that empirical adequacy provides no explanation at all, or at best an explanation inferior to (approximate) truth.
This comfortable position entered into crisis when Laudan brought the pessimistic meta-induction (back) into the debate. Laudan recalls that the history of science offers us many cases of predictively successful yet (according to him) totally false theories, and provides a long list of such cases. Laudan's confutation was contested in different ways, among them by arguing that his list contains many cases in which the theory in point was not really a piece of mature science and/or that it was cooked up to make the successful predictions. But not all cases could be so contested, and realists acknowledged that in at least two important cases, the caloric and ether theories, we have successful and novel predictions made with a theoretical apparatus that posits non-observable entities (caloric fluid, mechanical ether) that, according to the next, superseding theories, do not exist at all, not even approximately. Realists accept that they must accommodate such cases, and the dominant way of doing so is to go selective: when a theory makes a novel, successful prediction, the part of its non-observational content responsible for that prediction need not always be the whole non-observational content; many times it is only part of the non-observational content that is essential for the novel prediction, and it is only the approximate truth of this part that explains the observational success.
Selective Scientific Realism (SSR) may be summarized thus: really successful (i.e. novelly predictive) theories have a part of their non-observational content, the part responsible for their successful predictions, that is (a) approximately true, and (b) approximately preserved by the posterior theories which, if more successful, are more truth-like. SSR(a) explains synchronic empirical success and SSR(b) explains diachronic preservation (and increase) of empirical success.
Since we don't have independent, non-observational direct access to the world with which to test SSR(a), the claim that is relevant for testing SSR as a meta-empirical thesis is SSR(b). And selective realists claim that the history of science confirms SSR(b). According to them, the historical cases that count as confutations or anomalies for plain, non-qualified realism are actually confirming instances of its more sophisticated, selective reformulation SSR. Although the caloric and ether theories are false, they are not totally false: they have a non-observational part that is responsible for the relevant novel successful predictions and that (is approximately true and) has actually been approximately retained by their historical successors. History then confirms SSR(b), the only testable part of SSR, and SSR is an empirical thesis that, though fallible, is historically well confirmed.
Confronted with an alleged case of a theory that made novel, successful predictions but such that, the opponent argues, its non-observable content is not retained by the superseding theory, the selective realist must find a part of the theory in point that is both (i) sufficient for the relevant prediction and (ii) approximately retained by the superseding theory. As an empirical thesis, SSR may have anomalies, and the way it must fix them is always by making this divide et impera move. According to some, SSR has successfully fixed the caloric and ether anomalies; according to others it has not, or not yet, or not fully. The debate continues, and other eventual anomalies are presented and discussed, for instance the phlogiston case, initially dismissed as a pseudo-case but later acknowledged by some as a genuinely troublesome case and faced in a similar SSR manner.
Our goal here is to launch a new case into this debate: Ptolemaic astronomy. This was another item on Laudan's list, though quickly dismissed as not really making novel predictions. We argue that this is not so. The "no novel predictions" tag put on Ptolemy's astronomy is a consequence of the mere epicycle+deferent reading of the theory, a myth that, like all myths, is extremely popular but false. We find this case particularly useful because it is easier to identify here the parts responsible for the predictions. In other cases, such as the caloric or ether ones, much of the discussion and disagreement between realists and their opponents concerns whether some non-observational part of the theory was or was not really necessary for the relevant prediction. Was the hypothesis of a mechanical substance with orthogonal vibration necessary for deriving Fresnel's laws, from which the white spot prediction follows? Realists say no (to the mechanical substance), opponents say yes. Was the material fluidity of caloric essential for the derivation of the speed of sound? Realists say no, opponents say yes. Likewise in other cases. We find Ptolemy's case especially useful in this regard, for here the contents responsible for the predictions are relatively easy to identify.
We find this case not only especially useful but also especially interesting, for here the SSR strategy of trying to find in the superseding theory the approximate retention of the prediction-responsible parts seems prima facie particularly difficult, if not unpromising. But a detailed discussion of, and conclusion about, each case for the SSR debate goes beyond the limits of this paper. Our goal here is more limited: just to launch these predictions into the scientific realism arena, showing that Ptolemy's case deserves attention in this debate.
We will present and discuss what we think are the best candidates in Ptolemy's astronomy for successful novel
predictions:
(1) The parallax/distance of the Moon at syzygies
(2) The phases of inner planets at first conjunction and outer planets at conjunction
(3) That Mercury and Venus are the only planets between the Sun and the Earth
(4) The growth of brightness during retrograde motion for Mars
(5) Mars is not eclipsed by the Earth
The conclusion is that, except perhaps for the first, these cases represent a challenge that the selective realist must face.
25) Timothy Lyons Epistemic Selectivity, Historical Testability, and the Non-Epistemic Tenets of Scientific Realism.
In Part One of this paper, I survey a set of live-option meta-hypotheses that contemporary scientific realists claim we can justifiably believe. More carefully, scientific realists offer an empirical meta-hypothesis about scientific theories that, they claim, we can justifiably believe. In its unrefined formulation, that hypothesis is "successful scientific theories are approximately true." The justification for the second correlate of this meta-hypothesis, approximate truth, is that it constitutes the only, or at least the best, explanation of the first correlate, success. Prompted (e.g. by Laudan) to address empirical, historical challenges against that unrefined version of the realist meta-hypothesis, realists have modified the correlates of that meta-hypothesis (and, accordingly, the elements of their explanatory argument). In particular, realists have introduced into their meta-hypothesis various criteria meant to pick out particular constituents of scientific theories. Here I endeavor to identify the most charitable formulations of the recent criteria advanced by realists and, hence, the most recently formulated meta-hypotheses that today's realists claim we can justifiably believe.
Upon doing so, I argue that each resulting live-option meta-hypothesis falls into one or more of the following categories: (1) the criterion packed into the second correlate of the meta-hypothesis fails to pick out those constituents that are genuinely responsible, and so deserving of credit, for success, and hence, crucially, the criterion fails to offer the selective realist any kind of explanation of that success; or (2) the criterion merely immunizes epistemic realism, resulting in a meta-hypothesis that is neither testable nor able to inform us of just which theoretical constituents are picked out by the meta-hypothesis that realists claim we can justifiably believe; or (3) the criterion fails to pick out constituents that reach to a level deeper than the empirical data, thereby failing to license commitment to a meta-hypothesis that goes beyond meta-hypotheses happily embraced by anti-realists; or finally, (4) although the criterion is relevant, testable, and adequately realist, the meta-hypothesis containing the criterion as a correlate is in significant conflict with available historical data.
In Part Two, I focus on those realist meta-hypotheses falling into category (4). In doing so, I offer a novel account of the nature of the historical argument against epistemic realism. I contend that the form and content of this novel argument render untenable even a fallible, conjectural variant of epistemic realism. Also, mindful that some historians are inclined to deny the legitimacy of testing philosophical hypotheses against the history of science, I show that, when these realist meta-hypotheses are properly understood, for instance as hypotheses about scientific texts, their testability is no more problematic than the testability of scientific hypotheses. And inquiry should not be banned in the former case any more than it should in the latter.
In Parts Three and Four, I direct these arguments toward a positive conclusion. Despite the fact that some otherwise relevant, testable, and adequately realist meta-hypotheses conflict with historical data (those category (4) meta-hypotheses), the examination and testing of such meta-hypotheses is nonetheless profoundly informative toward the development of a proper and, in fact, still realist conception of science. In more detail, one can invoke the selective realist's primary insight at a higher level: as above, selective scientific realists acknowledge that scientific theoretical systems contain various individual constituents, and they endeavor to formulate an appropriately refined/restrictive meta-hypothesis that we can justifiably believe; yet just as scientific theoretical systems consist of individual theoretical constituents, scientific realism itself consists of a set of individual constituents or tenets. The epistemic tenet that we can justifiably believe the selective realist meta-hypothesis (for instance, that those theoretical constituents genuinely contributing to successful novel predictions are approximately true) is only one constituent of scientific realism. I argue that one can fruitfully bracket belief, or at least our obsession with what we, or scientists, can justifiably believe, and so bracket that epistemic tenet of scientific realism.
After doing so, in Part Four, I identify and isolate a set of non-epistemic yet nonetheless fundamental constituents or tenets of scientific realism whose articulations, I contend, have received insufficient attention, especially when compared to the epistemic tenet around which the debate has pivoted. For instance, while realists emphasize that the primary aim of science is truth, just which subclass of true claims science seeks remains inadequately explicated. Also, although realists embrace inference to the best explanation as the primary mode of inference by which science endeavors to attain truth, realists (as well as non-realists) readily admit that we lack a proper understanding of just what this kind of inference amounts to. My proposal here is that, despite the historical threat against the epistemic tenet of scientific realism, careful attention to the testable meta-hypotheses of selective realism, and to the historical data put forward against those meta-hypotheses, affords us a far more informed articulation of these other non-epistemic yet nonetheless wholly realist theses, e.g. regarding the kind of truth science seeks and the mode of inference employed to attain it. And, bracketing belief as I am proposing, these refined articulations can be deployed, in the way scientific hypotheses can be deployed, as tools for further inquiry. That is, with these more informed articulations comes a better understanding of the nature of past scientific practice, one that may even afford contemporary scientists themselves liberation from some of the myths (e.g. whiggism) to which they may inadvertently maintain a commitment. Hence, although I challenge the view that selective realism succeeds in picking out a meta-hypothesis we can justifiably believe, I seek to show how testing such empirical meta-hypotheses against the history of science can be immensely valuable, perhaps even toward the advancement of scientific inquiry itself.
The problem, briefly, is as follows. Given some assumption A involved in the derivation of a prediction, a weaker (entailed) assumption is always AvB, for any arbitrary B. Now if B is chosen/engineered so that it can be used to make the derivational step go through, then AvB is an assumption weaker than A which is sufficient for the derivation to work. Allowing such disjunctions within the realist's definition of 'weaker' would lead to some very odd results for realist commitment. Take the case of the plummeting lift: the realist's commitments would have to include some peculiar disjunction such as 'The load was too heavy OR somebody cut the cables OR the cables were old and weak OR ...'. This is a statement, entailed by 'The load was too heavy', which is sufficient to reach the successful prediction that the lift will indeed plummet. But then a concern arises that the realist's commitments can never fail, since whatever caused the lift to plummet can be tacked onto the end as an extra disjunct. In addition, many anti-realists would be happy to make this sort of commitment: of all the different disjuncts, one of them might be (approximately) true, but the problem is that we can't know which one, since all of them lead to the same successful prediction.
In theories of causal explanation we find a very similar disjunction problem. Sometimes explanations include overly specific claims, and one reaches a better explanation by introducing more abstract (weaker) claims. However, this goes wrong if one allows the move from claim A to claim AvB for arbitrary B. To get around the problem, Strevens (2008) introduces a cohesion requirement, which acts as a 'brake on abstraction, halting it before it crosses the line into disjunction' (p. 103). Whether this cohesion requirement can help the realist with her own disjunction problem will be investigated in this paper.
References
Saatsi, J. (2005): "Reconsidering the Fresnel-Maxwell Case Study", Studies in History and Philosophy of Science 36(3): 509-538.
Strevens, M. (2008): Depth. Cambridge, MA: Harvard University Press.
Vickers, P. (2013): "A Confrontation of Convergent Realism", Philosophy of Science 80(2): 189-211.
position. The notion of 'detection property' plays a central role in SMR. In order to clarify this notion, let us introduce a preliminary distinction between experimental and unobservable properties. An experimental property is a property of an experimental setting: it is an observable property, and it can be used as an epistemic justification of other claims. A detector has experimental properties: it is observable and gives us good reasons to believe some propositions. An unobservable property is a property of an unobservable object. In order to be a satisfying selective-strategy position, a theory should identify experimental properties that enable scientists to conclude that certain unobservable properties exist. Experimental properties should be the basis of realist knowledge, and unobservable properties should be its object. Now, in the characterization of SMR, detection properties seem to have two functions. On the one hand, they are seen as experimental properties. On the other hand, they are regarded as the very content of scientific theories, as realist properties. The whole project of SMR is grounded on the two following identifications: some of the effects of unobservable objects are experimental changes; the causes of experimental changes are unobservable objects. Identifying these two requires solving the epistemic-circle problem. The semirealist's dilemma is the impossibility of simultaneously satisfying the three requirements of a selective realism strategy. Either detection properties are construed as properties of the detectors (as experimental properties) or as properties of the detected objects (as unobservable properties). This ambiguity leads to two very different interpretations of SMR: if detection properties are the detectors' properties, they are a good epistemic justification, but only for an instrumentalist content; if they are the objects' properties, then they have a genuine realist content, but they are not epistemologically grounded.
predecessor to a successor physical theory, the level at which the phenomena are considered may remain unchanged (the inter-theory relations are within the same level: intra-level relations), or there may be a variation of the domain of application (usually by a change of the value range of the scale), so that the inter-theory relations are between different levels (inter-level relations).
It is a central claim of the paper that whether the domain remains the same or changes in the passage from one physical theory to another is a question of crucial relevance when evaluating the significance of structural continuity (as a kind of intertheoretic relation) for a structural realist stance. In particular, it is maintained that how successive theories are to be understood is not independent of whether the inter-theory passage is domain-preserving or not. In current discussions of structural continuity, theory change is generally taken to indicate the substitution of a preceding theoretical description with another, more successful one. The idea is that the first theory is eliminated when the second one is proposed. However, as argued in the paper, this is not what happens in most of the cases of intertheoretic relations typically considered representative for discussing structural continuity: in particular, the cases where the relation between a predecessor theory T and a successor theory T′ is such that, in a certain part of T′'s domain of application, the results of T′ are well approximated by those of the predecessor theory T.
The paper is articulated into three Parts. The first Part goes back to the historical source of structural realism by examining how the much-debated Fresnel-Maxwell shift is originally discussed by Poincaré in his 1902 classic text, Science and Hypothesis. The claim is that the actual issue, in this discussion, is not so much what is retained in the change from a predecessor to a successor theory as what is true in the case where there are different descriptions of the same physics. The following Parts are devoted to examining the structural continuity condition in intra-level and inter-level cases, respectively. The conclusion is that (under certain assumptions) structural continuity is a viable condition for a structural realist stance when the inter-theory relations are within the same level, while this is generally not the case when the inter-theory relations are between different levels.
References
H. Poincaré [1902: La Science et l'Hypothèse] (1905), Science and Hypothesis, London: Walter Scott Publishing.
J. Saatsi (2012), "Scientific Realism and Historical Evidence: Shortcomings of the Current State of Debate", in Henk W. de Regt (ed.), EPSA Philosophy of Science: Amsterdam 2009, Springer, 329-340.
I. Votsis (2011), "Structural Realism: Continuity and its Limits", in A. Bokulich and P. Bokulich (eds.), Scientific Structuralism, Dordrecht: Kluwer.
J. Worrall (1989), "Structural Realism: The Best of Both Worlds?", Dialectica 43: 99-124.
29) Tom Pashby Entities, Experiments and Events: Structural Realism Reconsidered
The recent structuralist turn of scientific realism was introduced to smooth out the ontological discontinuities of theory change. However, it has led some to strong metaphysical theses aimed at eliminating non-structural elements from our ontology entirely. There is a tension here, I claim, with the idea that the foundation of our empirical knowledge comes from the particular outcomes of experiments, and that ultimately what the scientific realist should be committed to is the existence of experimentally accessible entities like electrons and positrons. I contend that the structural realist owes an account of how a commitment to theoretical structure suffices to justify ontological claims about theoretical entities made in specific situations, such as in the laboratory.
This problem can be clearly seen by considering elements of the philosophy of Wilfrid Sellars. In "Philosophy and the Scientific Image of Man," Sellars addresses the conflict between two systematic modes of thinking about the objective world: the manifest image and the scientific image. Only the scientific image involves the postulation of imperceptible entities, and principles pertaining to them, to explain the behaviors of perceptible things. However, according to James Ladyman and Steven French's Ontic Structural Realist, the scientific image (properly understood) posits structures, not self-subsisting individual entities. Whereas Bas van Fraassen's Constructive Empiricist sought to defang the scientific image by eschewing the ambitions of scientific theories to represent more than the observable phenomena, the Ontic Structural Realist advocates the elimination of non-structural objects and thus apparently seeks to eliminate the manifest image.
I contend that to do so would be a mistake. First, as Sellars maintains, the manifest image is open to revision, rather than elimination. Second, the scientific image has the manifest image as a foundation and, in Sellars' words, "pre-supposes" the manifest image. If Sellars is right, then the scientific image cannot get off the ground without a robust commitment to the perceptible objects of the manifest image. In other words, the very foundation of scientific realism is a straightforward realism about ordinary objects and events as concrete particulars. This idea is captured nicely by Ian Hacking's Entity Realist, who takes the instrumental success of experimental practices and manipulations (when explained in terms of the existence and properties of unobservable entities) to justify belief in theoretical entities. It is precisely this belief in entities as individuals
30) Angelo Cei The Epistemic Structural Realist Program. Some interference.
The paper is intended to assess the prospects of Epistemic Structural Realism (ESR) as a sound realist response to antirealist worries raised by deep historical changes in science. This aim is achieved by contrasting various forms of ESR with a case of theoretical change in the history of physics. In particular, I will devote my attention to the explanation of the Zeeman effect offered in Lorentz's Theory of Electrons and how it looks from the perspective of relativistic electrodynamics. The various positions will be contrasted with this case and the prospects of ESR evaluated in this context.
Deep changes in theoretical frameworks constitute a major challenge for realist positions on science. The family of antirealist arguments that exploits this historical fact goes under the heading of the pessimistic meta-induction (PMI). The argument questions the fundamental idea that an abductive inference from success to truth is legitimate and is the best explanation of the success of science. It does so by drawing on the harshness of historical lessons: past dismissed theories were, after all, instances of successful science, but they are now taken as false. On the one hand, there is a wide range of realist attacks on PMI. On the other hand, several theories in the history of physics exhibit commonalities captured by mathematical structures. Worrall (1989) turned one of these cases into a proposal for a highly debated version of realism. He insisted that we are justified in believing in the equations of our best physical theories. These theoretical features are in fact immune from the theoretical changes that are the focus of the antirealist's concern. The case in point was the retention of Fresnel's equations in Maxwell's electromagnetism. Worrall's picture conceded something to the antirealist: Fresnel's ether is gone, and no trace of it remains in modern science.
Nonetheless, we do have knowledge: it is knowledge of structure, though, and not knowledge of entities. Hence we ought to embrace Epistemic Structural Realism (ESR). This view features a variety of alternatives that range from the adoption of the Ramsey sentence to updated versions of Russellian structural knowledge (see Votsis, 2005).
In this work, I confine myself to the context of physics and present ESR with the following dilemma, developed through a case study: either ESR has nothing particularly structuralist to offer in defence of realism, where 'structural' refers to certain kinds of relations that allegedly survive the change; or a defence based merely on structural features might not be sufficient to support a form of realism. This result will emerge through the analysis of two exemplar versions of ESR (in this sense we can talk of a program) and of various criticisms available in the literature on the topic.
The contrasting case for this analysis is the study of the prediction of the normal Zeeman effect (NZE). The NZE is notoriously a phenomenon of alteration of the frequency of light due to the effect of a magnetic field on its source. Depending on the intensity of the magnetic field, the alteration of the spectrum of light varies considerably, and a family of diverse effects may occur. The model adopted for the prediction in Lorentz's Theory of Electrons (1916) explains the Zeeman effect as a precession in the period of oscillation of a radiating charge. The radiating charge is an electron whose acceleration explains the emission of light. The alteration of the period of oscillation of the electron, due to the magnetic force exerted by the field, determines an alteration in the frequency of the light. The core features of this explanatory model are the Lorentz force and a model of the electron as an extended body exhibiting harmonic motion. The harmonic motion and the Lorentz force can feature in a relativistic explanation as well, but the relativistic version of the model prescribes a point charge. A point charge is in turn incompatible with the original classical explanation. Furthermore, a variety of physical magnitudes involved in the prediction undergo a significant shift from the classical to the relativistic context. In this context I test the Epistemic Structural Realist program.
I argue that this case, despite its prima facie favourability to the structuralist cause, puts considerable stress on it. After having set the physics stage, I go on to articulate this argument, analysing the presuppositions that lie behind the various versions of ESR and disambiguating the various conceptions of structure that are left unaddressed in the literature. The contrast with the case study will show that a particular development of the position seems to offer the best prospects.
This is, however, not mere rebranded instrumentalism (Morrison 2011). It is still possible to explain why particular models are useful for certain purposes, for instance why large collections of discrete particles would approximate the behaviour of a continuous medium. But this explanation would itself rely on one or more discrete models, subject to the same anti-representationalist story told above. For any particular model, we can explain its relation to the world independently of that model. But there can be no global account of which features of the world successful models correspond to, independently of all models.
Two points about the account sketched above. First, I present it only as the most promising development of Giere's work insofar as it provides an interesting alternative to selective realism; I do not defend its plausibility beyond this. Second, while it captures some features of perspectivism, it cannot support the strong analogy to colour perspectives that Giere stresses. We can still talk of how the world appears from the perspective of some model, but the sharp distinction between representational models and merely instrumentally justified theoretical perspectives is blurred, and indeed unnecessary, on this account.
References:
Chakravartty, Anjan (2010): "Perspectivism, inconsistent models, and contrastive explanation", Stud. Hist. Phil. Sci. 41: 405-412.
- (2013): "Dispositions for Scientific Realism", in: Greco & Groff (eds.), Powers and Capacities in Philosophy: The New Aristotelianism, Routledge.
Giere, Ronald (2006): Scientific Perspectivism, University of Chicago Press.
- (2006b): "Perspectival Pluralism", in: Kellert, Longino, & Waters (eds.), Scientific Pluralism, University of Minnesota Press.
- (2009): "Scientific Perspectivism: behind the stage door", Stud. Hist. Phil. Sci. 40: 221-223.
Lipton, Peter (2007): "The World of Science", Science 316: 834.
Morrison, Margaret (2011): "One phenomenon, many models: Inconsistency and Complementarity", Stud. Hist. Phil. Sci. 42: 342-351.
Price, Huw (2013): Expressivism, Pragmatism and Representationalism. Cambridge UP.
Teller, Paul (2001): "Twilight of the Perfect Model Model", Erkenntnis 55: 393-415.
Votsis, Ioannis (2012): "Putting Realism in Perspective", Philosophica 84: 85-122.
Moreover, one would not be rationally justified in believing in the existence of theoretical entities unless one believed some part of a theory that tells us what, say, a proton is and what it does. In the case of Worrall, Psillos contends that the required distinction between theory and structure is unsound. To derive the experimental consequences required to confirm the theory (or any part of it), theoretical interpretation of the theory's mathematical structure is necessary. To safeguard his version of scientific realism from the challenge of the pessimistic meta-induction, Psillos develops a theory of reference that he claims picks out natural kinds from trivial posits in scientific theories. He calls this the causal-descriptive theory of reference, since it incorporates features from both causal and descriptive theories of reference. Psillos contends that not all descriptions of a theoretical posit should be taken to be referential. Rather, he claims that only those descriptions that are necessary for differentiating kinds from one another are referential. Natural kinds are those objects in the world that are mind-independent. Thus, those descriptions that are genuinely referential are called 'kind-constitutive' because they distinguish natural kinds from non-natural kinds.
Psillos uses the transition from the luminiferous ether to the electromagnetic field as an exemplar to illustrate how his theory of reference picks out natural kinds and preserves reference across theory change. He claims that the former's constitutive elements (such as its alleged elastic-solid structure) were not carried over into the description of the latter term's constitution. Hence, Psillos concludes that those constitutive elements were non-natural, while the kinematic and dynamical elements, which were carried over into the description of the electromagnetic field, are kind-constitutive. Moreover, when identifying natural kind terms, Psillos appeals to the advocates of a successful theory to see which terms they took to be necessary for explaining the success of their theory.
However, P. Kyle Stanford (2003) and Hasok Chang (2003) have shown that Psillos' treatment of his historical examples is flawed. Psillos claims that the proponents of the luminiferous ether and of caloric dismissed the constitutive properties of these posits as heuristic. However, the scientists Psillos cites in support of his causal-descriptive theory differed significantly from him in what they considered to be kind-constitutive descriptions of these entities. Stanford describes how some scientists considered the constitutive properties of the luminiferous ether to be kind-constitutive properties. Chang illustrates how the term 'caloric' fits the description of a natural kind term by Psillos' own standards.
While Stanford and Chang challenge Psillos on historical grounds, I challenge him on philosophical grounds. I contend that if Psillos can dismiss the constitutive elements of the luminiferous ether and caloric as heuristic, then entity realists and structural realists can regard descriptions of theoretical entities as heuristics too. The entity realists and structural realists cannot be shown to be mistaken; rather, they are merely more cautious than Psillos. I conclude that Psillos' defense of scientific realism, and of semantic realism in particular, fails because his voluntarism undermines the objectivity of his philosophy.
35) Alistair Isaac The Locus of the Realism Question for the Semantic View
The realism question, the question of how our best theories relate to the world, has traditionally been addressed within the semantic view (SV) through an analysis of the relationship between theory models and data models. This is because the data model has commonly been accepted as the window through which science approaches the world. I argue that the locus of the realism question for the founders of SV was not in the theory of data models, but in the theory of psychological judgments, particularly judgments of similarity. Reviving this position today has the advantage of suggesting an avenue of reconciliation between the formal strand of SV and recent work on modelling which self-consciously presents itself as offering an informal alternative to SV.
Background: The formal program in SV has turned to increasingly weak mathematical analyses of the relation between theory and data models, e.g. isomorphism (van Fraassen, 1980), homomorphism (Mundy, 1989), and partial homomorphism (Bueno, French, and Ladyman, 2002). The lack of a general theory of this relationship (and the perceived inadequacy of a single mathematical formalism for providing one) has motivated some to reject the formal program of SV as misconceived (Godfrey-Smith, 2006), and has left even those more sympathetic to the formal approach with serious reservations (Frigg, 2006). An alternative program has focused on modelling practice in a more informal way, with the pertinent relation between model and world analysed as one of similarity (Giere, 1988; Weisberg, 2013). This informal approach to model realism often portrays itself as rectifying an error at the foundation of SV.
Revisiting the Original Program: I examine two founders of SV and argue that, properly construed, their programs locate the realism question within the foundations of psychology. Patrick Suppes is canonically recognized as the founder of SV. However, Mary Hesse independently proposed essentially the same conceptual shift as Suppes; juxtaposing Suppes and Hesse reveals interesting commonalities in their research programs. In particular, both figures (i) call for models to serve as a central focus for philosophy of science; and (ii) provide insight into scientific method via comparisons of models which do not detour through the theory-world relation. In the case of Suppes, the project was to mathematically compare models of different theories in order to illustrate synchronic foundational relations between different parts of science; in the case of Hesse, the project was to compare models of successive stages in a field's development in order to illustrate the logic of diachronic reasoning in science.
Suppes (1960) influentially brought the question of data models to the attention of philosophers of science.
Furthermore, it is clear that Suppes considers the question of how data models relate to theory models a crucial
one for the foundations of statistics. However, Suppes also argues for an increasingly detailed hierarchy of
models of experiment. In particular, as more and more aspects of the experimental setup are formalized,
eventually a model of the experimenter herself will need to be included. Insofar as the world is approached ever more closely through this progressive formalization of the experiment, it is in the limit of this process, i.e. the foundations of psychology, where the realism debate must be fought. Statistics alone will never provide a complete analysis of empirical adequacy, since its starting point, the data, is always infected by the judgments of the experimenter.
The ultimate moral here is also that of Hesse (1961). She demonstrates that failures of fit between model and
world ("negative analogies") are irrelevant for scientific progress. Rather, it is the "neutral analogies," features of
the model for which fit with the world is unknown, which drive theory change. While only models that exhibit
both positive and neutral analogies are apt for a realism debate, it is not the business of science to generate such
models. Crucially, models with negative analogies can play a productive role in scientific reasoning so long as the
scientist keeps track of internal relations between positive, negative, and neutral analogies. Thus, the ultimate
locus for assessing the theory-world relationship is in the scientist's judgments about the status of different
aspects of the model.
A Synthesis: Both Suppes and Hesse locate the theory-world relation relevant for understanding scientific
method in the mind of the scientist. If, as in Suppes' original program, philosophical questions are to be analysed
in terms of model relations, the relevant models are psychological models of the scientist's ideas (or mental
representations) of the theory and of the world, and the pertinent relation between them is her assessment of
their similarity. For Suppes, these psychological models complete that model of the experiment that most closely
approaches the world; for Hesse, they connect stages in a theory's development, illuminating scientific progress.
The relation of fit between theory and world here is assessed in the judgment of the scientist, just as in the post-
Giere modelling literature; on this view, however, the original SV program does not rest on a mistaken
understanding of models, nor is its formal component misguided. Rather, the formal analysis of the relation
between data and theory models continues to be important in the foundations of statistics, but its importance as
a response to the realism question rests on the role of data and theory models in the reasoning of scientists
themselves. This reasoning may also involve the manipulation and comparison of mental representations that,
while they may be studied formally, may not precisely mirror the mathematical structures of theory as presented
in textbooks or analysed by statisticians.
36) Francesca Pero The Role of Epistemic Stances within the Semantic View
The semantic view arose in the Sixties as an analysis of the structure of scientific theories. In fifty years it has
both replaced the Syntactic View and established itself as the orthodox view of scientific theories. In this paper an
assessment of the reasons for the success of the semantic view is provided. The guideline for the assessment is
obtained by merging the general stances presented by van Fraassen (1987) and Shapiro (1983) on, respectively,
the task of philosophy of science and how such a task should be accomplished. As it turns out, the role played
within the semantic analysis of theories by epistemic stances, whether realist or antirealist, should be severely
reconsidered.
Van Fraassen repeatedly presents the question concerning the structure of theories as the foundational question
par excellence of philosophy of science (1980; 1987; 1989). Van Fraassen also suggests that such a question should
be kept distinct from issues concerning theories as objects of epistemic attitudes (such as realism and anti-
realism). As for Shapiro, he claims that philosophy of science cannot be understood as isolated from the practice
of science.
Therefore, any inquiry concerning science should count scientific practice as a crucial element for its
development. Assuming the tenability of these two stances, in this paper I argue in favor of the following claims
as reasons for the success of the semantic view as an analysis of scientific theories: the semantic view (i) provides
a realistic answer to the right question: what is a scientific theory?; (ii) provides such an answer in the correct
manner, i.e., remaining epistemologically neutral; (iii) acknowledges scientific practice as crucial for dealing with
(i) and (ii).
With few notable exceptions (Worrall, 1984; Cartwright et al., 1995; Morrison, 2007; Halvorson, 2012), the
semantic view is widely accepted as the orthodox view on the structure of scientific theories. It is not possible to
identify the semantic approach with a univocal view, since its formulations are many (Suppes, 1967; van
Fraassen, 1970; Giere, 1988; Suppe, 1989; da Costa and French, 1990; van Fraassen, 2008). However, as a
program of analysis whose origin is to be traced back to Beth (1949), although officially identified with the
work of Suppes (1960), the semantic view fixes two tasks to be fulfilled. The first task is to provide a formal
analysis of theories. The second task is to provide an analysis of theories as regarded in the actual practice of
science. Both the tasks are accomplished by introducing the notion of models, generally conceived as Tarskian
set-theoretical structures (see Tarski, 1953).
Notwithstanding the wide literature on the different formulations of the semantic view and on its potential
consistency with either realist or anti-realist stances, a systematic analysis both of its significance and of the
reasons for its orthodoxy status is yet to be provided. The aim of this paper is to provide such an analysis and, in
order to do that, I mean to deploy van Fraassen's and Shapiro's insights concerning philosophy of science.
Van Fraassen claims that as the task of any philosophy of X is to make sense of X, so philosophy of science is an
attempt to make sense of science and, elliptically, of scientific theories (1980, p. 663). This task, van Fraassen
adds, is carried out by tackling two questions. The first question concerns what a theory is per se (the "internal
question"). This is the question par excellence for the philosophy of science insofar as answering it is preliminary
to, and independent of, tackling issues concerning the epistemic attitude to be endorsed towards the content of a
theory. These issues fall under the second question, which concerns theories as objects of epistemic
attitudes (the "external question").
Van Fraassen's remarks can be consistently supplemented with Shapiro's view on how a philosophical analysis
should be carried out. Shapiro advocates the necessity for any philosophy of X not to be isolated from the
practice of X (1983, p. 525). He explicitly refers to scientific explanation, noting that reducing explanation to
a mere description of a target system does not suffice to justify in virtue of what the abstract description relates
to the object described. Without such a justification it is impossible to account for the explanatory success of a
theory. Only by referring to the practice of theory construction can we account for how science contributes to
knowledge.
The semantic view evidently deals with the foundational question of philosophy of science. As the
syntactic view did, the semantic view aims at providing a picture of scientific theories. However, unlike the
syntactic view, the semantic view succeeds in providing a realistic picture of theories. The syntactic view has
been driven in its formulation by the (anti-realist) Positivistic credo, according to which a programmatic goal for
the analysis of theories is to provide only a rational reconstruction of the latter, i.e., a reconstruction which omits
the scientists' actions and focuses only on their result (i.e., theories; see Carnap, 1955, p. 42). The semantic view,
on the other hand, preserving its neutrality with respect to any preexistent school of thought, whether realist or
anti-realist, succeeds in providing a realistic image of scientific theories, which is obtained by focusing on "how
science really works" (Suppe, 2000, p. 114).
As a final point, I mean to show that the suggested guideline for justifying the success of the semantic view can
also be employed as a demarcation criterion for establishing the tenability of the available formulations of the
semantic view. Using the guideline as a demarcation criterion, I show that the partial-structure approach (da
Costa and French, 1990; Bueno and French, 2011) fails at being semantic for two reasons. Firstly, it violates the
epistemic neutrality presupposed by the semantic view. Secondly, the partial-structures approach falls short of
integrating the image of theories which it provides with the actual scientific practice.
empirical, a property that is not shared with other domains of inquiry. These points are connected. I argue that it
does justice to a body of historical scientific achievements in several fields (including chemistry and biology, as
well as physics itself) to say that science discovered that there was one science with unrestricted empirical scope,
and this science was, approximately, physics. (I say approximately because some parts of what is institutionally
called physics are recognized to be special sciences.) This discovery was contingent to the extent that alternative
possibilities (in which all sciences were special, or where some science other than physics was not special) have
been taken seriously in the history of science, and abandoned because of scientific discoveries.
If this is correct, it explains the plausibility of some versions of what has come to be known as the via negativa
account of the physical (Spurrett and Papineau 1999). This proposal defuses the dilemma by stipulating that
physics is a causally complete science, and excludes (for example) fundamental mental properties. By being
causally complete, physics thus understood is appropriate for motivating arguments for supervenience, reduction,
identity (and other solutions to the is problem). And by excluding, for example, the fundamentally mental, such
physicalisms are not trivial insofar as future scientific discovery could in principle falsify them. Considerations
from common sense, and the history of science, support excluding fundamental mental properties from the one
science of unrestricted empirical scope (physics), as they do excluding biological properties, chemical properties
and a variety of others.
The form of physicalism I describe here also suggests a way of developing the via negativa response to the alleged
dilemma. It is science itself that identifies (and occasionally revises) the catalogue of special sciences to be taken
seriously, where special is understood in contrast to physics. This catalogue motivates naturalist (and non-
arbitrary) variants of the via negativa (for specific sciences), and helps explain why some particular historical
episodes are distinctively important for making physicalism plausible. Finally, as an empirical thesis of a certain
kind, a response to van Fraassen is natural. Physicalism is a thesis motivated by the history and state of science. It
is good fallibilism to recognise that future evidence may motivate revisions, but bad philosophy to abandon a view
motivated by so much evidence simply because we are not infallible.
References
Spurrett, D. & Papineau, D. (1999) "A note on the completeness of 'physics'", Analysis, 59:25-29.
Van Fraassen, B. (2002) The Empirical Stance, Yale University Press.
F [6]: Anti-Realism
38) Moti Mizrahi The Problem of Unconceived Objections and Scientific Antirealism
According to Stanford (2006, 20), the history of scientific inquiry itself offers a straightforward rationale for
thinking that there typically are alternatives to our best theories equally well confirmed by the evidence, even
when we are unable to conceive of them at the time. Based on this Problem of Unconceived Alternatives (PUA),
Stanford advances an inductive argument he calls the New Induction on the History of Science, which Magnus
(2010, 807) reconstructs as follows:
A New Induction on the History of Science
NI-1 The historical record reveals that past scientists typically failed to conceive of alternatives to
their favorite, then-successful theories.
NI-2 So, present scientists fail to conceive of alternatives to their favorite, now-successful
theories.
NI-3 Therefore, we should not believe our present scientific theories.
If Stanford's New Induction on the History of Science were cogent, then it would show that what Psillos (2006,
135) calls the epistemic thesis of scientific realism, i.e., that mature and predictively successful scientific
theories are well-confirmed and approximately true (cf. Psillos 1999, xix), is not worthy of belief.
Now, Mizrahi (2013) argues that there is a problem parallel to the PUA that applies to Western analytic
philosophy. As Mizrahi (2013) writes:
In much the same way that the history of scientific inquiry itself offers a straightforward rationale for thinking
that there typically are alternatives to our best theories equally well confirmed by the evidence, even when we
are unable to conceive of them at the time (Stanford 2006, p. 20), the history of philosophical inquiry offers a
straightforward rationale for thinking that there typically are serious objections to our best philosophical theories,
even when we are unable to conceive of them at the time. In other words, the historical record shows that
philosophers have
typically failed to conceive of serious objections to their well-defended philosophical theories. As the historical
record also shows, however, other philosophers subsequently conceived of serious objections to those well-
defended philosophical theories (original emphasis).
If Stanford's PUA provides the basis for a New Induction on the History of Science, then Mizrahi's Problem of
Unconceived Objections (PUO) provides the basis for a New Induction on the History of Philosophy.
Mizrahi's New Induction on the History of Philosophy, then, runs as follows (Mizrahi 2013):
A New Induction on the History of Philosophy
NIP-1 The historical record reveals that past analytic philosophers typically failed to conceive of serious
objections to their favorite, then-defensible theories.
NIP-2 So, present analytic philosophers fail to conceive of serious objections to their favorite, now-defensible
theories.
NIP-3 Therefore, we should not believe our present philosophical theories.
Accordingly, since an alternative scientific theory T2 that accounts for the phenomena just as well as T1
amounts to a serious objection against T1 (Mizrahi 2013), and since empirically viable alternatives to then-well-
confirmed scientific theories that turned out to be equally confirmed by the evidence amount to serious
objections to those theories, Stanford's PUA is actually a PUO for scientific theories. The parallels between
Stanford's New Induction on the History of Science and Mizrahi's New Induction on the History of Philosophy
can be seen as follows:
NI-1 [/NIP-1] The historical record reveals that past scientists [/philosophers] typically failed to conceive of
alternatives [/serious objections] to their favorite, then-successful theories.
NI-2 [/NIP-2] So, present scientists [/philosophers] fail to conceive of alternatives [/serious objections] to their
favorite, now-successful theories.
NI-3 [/NIP-3] Therefore, we should not believe our present scientific [/philosophical] theories.
Given Mizrahi's (2013) New Induction on the History of Philosophy, I argue that scientific antirealists who
endorse Stanford's PUA and his New Induction on the History of Science face the following problem: if
Stanford's New Induction on the History of Science is a cogent argument for scientific antirealism, then
Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical antirealism. If that
is the case, however, then it follows that scientific antirealism is not worthy of belief, since scientific antirealism
is a philosophical theory. More explicitly:
(1) Stanford's New Induction on the History of Science is a cogent argument for scientific
antirealism. [Assumption for reductio]
(2) If Stanford's New Induction on the History of Science is a cogent argument for scientific antirealism, then
Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical antirealism.
[Premise]
(3) Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical
antirealism. [from (1) & (2) by modus ponens]
(4) If Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical
antirealism, then, if scientific antirealism is a philosophical theory, we should not believe it. [Premise]
(5) If scientific antirealism is a philosophical theory, we should not believe it. [from (3) & (4) by modus ponens]
(6) Scientific antirealism is a philosophical theory. [Premise]
(7) We should not believe scientific antirealism. [from (5) & (6) by modus ponens]
(8) Stanford's New Induction on the History of Science is a cogent argument for scientific antirealism, but
we should not believe scientific antirealism. [from (1) & (7) by conjunction]
Of course, scientific antirealists who endorse Stanford's New Induction on the History of Science cannot
accept (8), since (8) says that they should not believe the conclusion of a cogent argument for their own
position.
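The derivation above is a simple chain of propositional inferences, so it can be checked mechanically. As an illustrative sketch, here it is in Lean 4; the proposition names are my own labels for the argument's premises, not the author's:

```lean
-- Propositional sketch of steps (1)-(8); names below are hypothetical labels.
variable (NICogent : Prop)   -- Stanford's New Induction is a cogent argument
variable (NIPCogent : Prop)  -- Mizrahi's New Induction (philosophy) is cogent
variable (SAPhil : Prop)     -- scientific antirealism is a philosophical theory
variable (BelieveSA : Prop)  -- we should believe scientific antirealism

example
    (h1 : NICogent)                          -- (1) assumption for reductio
    (h2 : NICogent → NIPCogent)              -- (2) premise
    (h4 : NIPCogent → SAPhil → ¬BelieveSA)   -- (4) premise
    (h6 : SAPhil)                            -- (6) premise
    : NICogent ∧ ¬BelieveSA :=               -- (8) the problematic conjunction
  ⟨h1, h4 (h2 h1) h6⟩  -- two modus ponens steps, then conjunction introduction
```

The proof term mirrors the text: `h2 h1` is step (3), `h4 (h2 h1)` is step (5), and applying it to `h6` yields step (7).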
References
Magnus, P. D. (2010). Inductions, red herrings, and the best explanation for the mixed record of science. British
Journal for the Philosophy of Science, 61, 803-819.
Mizrahi, M. (2013). The problem of unconceived objections. Argumentation. DOI 10.1007/s10503-013-9305-z.
Psillos, S. (1999). Scientific Realism: How Science Tracks Truth. London: Routledge.
Psillos, S. (2006). Thinking about the ultimate argument for realism. In C. Cheyne and J. Worrall (eds.),
Rationality & Reality: Essays in Honour of Alan Musgrave (pp. 133-156). Dordrecht: Springer.
Stanford, P. K. (2006). Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. New
York: Oxford University Press.
method of science (here in line with Peirce's (1955) warning in Pragmatism in Retrospect that there is no evidence that
everything will converge to a given result). This method crystallises in what is learned through interaction between theories
and experiments as the result of testing and subsequent revision, and effects a kind of meta-continuity of science's processes,
which is much more meaningful than a notion of (static) continuity in terms of accumulation in the trivial sense that theory
Tn+1 retains all that theory Tn can explain and predict.
In the final section of the paper, the potential of naturalised realism to satisfy the demand recently made by writers such as
Ruetsche (2011) and Chang (2012) that realism must be pragmatic and pluralist is explored. It is concluded that the only way
in which to be a realist, given the true nature and current content of science, is not to be one!
Bibliography
Chang, H. (2012) Is Water H2O? Evidence, Pluralism and Realism. Dordrecht, Springer.
Hintikka, J. (1975) The Intentions of Intentionality and Other New Models for Modalities. Dordrecht, D. Reidel.
Niiniluoto, I. (1999) Critical Scientific Realism. Oxford, Oxford University Press.
Peirce, C.S. (1955). How to make our Ideas clear. In Justus Buchler (Ed.), Philosophical Writings of Peirce. New York: Dover
Publications.
Peirce, C.S. (1955). Pragmatism in Retrospect. In Justus Buchler (Ed.), Philosophical Writings of Peirce. New York: Dover
Publications.
Ruetsche, L. (2011). Interpreting Quantum Theories. Oxford, Oxford University Press.
These puzzles differ in dialectical value. (P1)–(P4) are fanciful scenarios that are not directly relevant to any
serious physical theory, but these puzzles are nonetheless useful because they make the conventionalist point
very vivid. (P5) and (P6), on the other hand, are directly relevant to the interpretation of real physics, but they
are extremely technical. My strategy will be to use the easier puzzles to get the basic idea across, then I'll
turn to (P5) and (P6).
The gist of my argument is that these puzzles arise if and only if we reify spacetime into an object which can be
looked at from outside, either literally from another spatiotemporal vantage point, as in (P2) and (P4), or by
extending a metric concept related to intraworld spatial experience to states of affairs that are not accessible
in the context of intraworld spatial experience, as in (P1), (P3), (P5), and (P6). To portray (P1)–(P6) as
variations on a single underlying theme, I introduce the idea of internal and external metrics:
The internal metric of world W:
IM^W_P(x, S) = a number that expresses the magnitude of x's spatiotemporal property P (in some standard
unit) as measured by a normal observer in situation S (where S is a complex state of affairs that describes
the environment of x and the observer).
The external metric of world W:
EM^W_P(x, S) = a number that expresses x's spatiotemporal property P (in some standard unit) as x is in
situation S.
I argue that (P1)–(P6) are all based on the presupposition that for some spatiotemporal property P, there are
possible worlds with the same internal P-metric as the actual internal P-metric but with an external metric
that is deviant in the following sense:
Deviant external metrics:
EM^W_P is deviant (i.e., EM^W_P involves deviant excess structure) =df
(1) For some x, S and S*:
(i) S* is a proper part of S
(ii) IM^W_P(x, S) = k · IM^W_P(x, S*) (k is a real number)
(iii) EM^W_P(x, S) ≠ k · EM^W_P(x, S*)
or (2) For some S, S is in the domain of EM^W_P and S isn't in the domain of IM^W_P.
For example, in (P3), we have an internal metric according to which two intervals can appear to have the same
length while being, according to the external metric, of different lengths (a type (1) deviant metric). In (P2), we
have an internal metric whose domain does not include the situation of measuring an object on the surface of
the mirror from outside the mirror (a type (2) deviant metric).
The paper shows that all the puzzles fit this schema, and I relate the schema to technical discussions of (P5)
(Malament 1977) and (P6) (Ben-Menahem 2006: 85–127). I argue that the conventionalist puzzles only arise
if we presuppose that there are external metrics, and I claim that if we deny the possibility of external metrics,
then we are led to phenomenalism. I relate this idea to John Foster's (1982) arguments against realism about
spacetime.
References
Ben-Menahem, Yemima (2006): Conventionalism. Cambridge University Press. Foster, John (1982): The Case
for Idealism. Routledge & Kegan Paul.
Helmholtz, Hermann (1881): On the origin and significance of geometrical axioms. In his Popular Lectures,
Longman, Green, and Co., 27–72.
Malament, David (1977): Causal theories of time and the conventionality of simultaneity. Noûs 11, 293–300.
Poincaré, Henri (1952): Science and Hypothesis. Dover Publications.
Putnam, Hilary (1975): The refutation of conventionalism. In his Mind, Language, and Reality, Cambridge
University Press, 153–191.
Reichenbach, Hans (1958): The Philosophy of Space & Time. Dover Publications.
explanations as scientific explanations will be shown to be either incompatible with a realist position
or inadequate to justify such acceptance in a non-circular way.
1 Following Rice 2013, pp. 3-4, OM can be briefly described as follows: "Optimality models are
distinguished by their use of a mathematical technique called Optimization Theory, whose goal is to
identify which values of some control variable(s) will optimize the value of some design variable(s) in light
of some design constraints [...]. An optimality model specifies a constrained set of possible strategies
known as the strategy set. The design variables to be optimized constitute the model's currency. An
optimality model also specifies what it means to optimize these design variables (e.g. should a design
variable be maximized or minimized). This is referred to as the model's optimization criterion. Once the
strategy set and optimization criterion have been identified, an optimality model describes an objective
function, which connects each possible strategy to values of the design variable(s) to be optimized. [...]
The strategy that optimizes the model's criterion, in light of various constraints and tradeoffs, is deemed
the optimal strategy. By mathematically representing the important constraints and tradeoffs, an
optimality model can demonstrate why a particular strategy is the best available solution."
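The structure Rice describes (strategy set, currency, optimization criterion, objective function) can be illustrated in a few lines of Python. This is only a sketch: the foraging scenario, the function `net_energy_gain`, and every number below are hypothetical, chosen solely to exhibit the four components:

```python
import math

# Strategy set: the constrained set of possible strategies,
# here candidate foraging times in hours, restricted to [0, 8].
strategy_set = [t * 0.5 for t in range(0, 17)]  # 0.0, 0.5, ..., 8.0

# Objective function: connects each strategy to the design variable to be
# optimized (the model's "currency"), here net energy gain, modeled as
# diminishing returns on foraging minus a linear cost of time spent.
def net_energy_gain(hours):
    return 10 * (1 - math.exp(-0.5 * hours)) - 0.8 * hours

# Optimization criterion: the currency is to be maximized; the strategy
# that does so, given the constraints, is the optimal strategy.
optimal_strategy = max(strategy_set, key=net_energy_gain)
print(optimal_strategy)
```

On these made-up numbers the model singles out an intermediate foraging time as optimal, which is the sense in which an optimality model "demonstrates why a particular strategy is the best available solution."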
the abstractness of mathematical entities with that of scientific theories. For example, Suppes states that
"the meaning of the concept of model is the same in mathematics and the empirical sciences" (Suppes
1960, p. 12). If the realist wants to claim that scientific theories are true, and scientific theories are
understood as classes of mathematical models, then it seems that the realist has to embrace MP.
So, given that OM are a kind of mathematical model widely accepted in science, if the explanations
deriving from the use of OM could be interpreted as non-causal scientific explanations, realists would
have solved the problem of demonstrating that their conception of reality is broad enough to be
compatible with scientific practice.
Difficulties for the realist
The problem for the realist is that she now has to account for the applicability of mathematics in a way
compatible with SR.
It seems there are two main possibilities for the realist: 1) to show that relying on explanatoriness
considerations is a valid way to assess the validity of a scientific explanation (Glymour 1980); 2) to show
that the deepest scientific explanations are not causal but mathematical, and that they can be
successfully trusted because mathematics is able to describe the modal structure of the universe (Lange
2013).
These two options will be analyzed in general and then specifically tested in the context of OM; they will
be shown to be deeply related, and both inadequate.
In fact, it can be shown that explanatoriness considerations cannot be equated with confirmation in
assessing a scientific theory. The problem is that the centrality that empirical success seems to have for
the realist's (and even the platonist's) arguments can no longer be coherently accounted for within the
realist frame if non-causal explanations and explanatoriness considerations are accepted in that frame
(Sober 1999; Zamora Bonilla 2003).
The idea that mathematics can tell us what is necessary rests on an assumption which is Pythagorean in
character, namely that the modal structure of the world is mathematical, and so that mathematics can
reveal such structure to us (Lange 2013). But this is exactly what the realist should demonstrate, not just
assume, in order to justify the acceptability of a mathematical (non-causal) explanation as a scientific
explanation.
The problem with such a position is that it cannot account for this supposed capability of mathematics in
a naturalistic way. In fact, it seems there are three available ways to justify such a capacity of
mathematics: 1) from the success of previously used mathematical models, but this kind of inference
would amount to a form of the no-miracles argument (NMA), and would be prone to the same objections;
2) from a non-naturalistic point of view, but this option should not be palatable for those scientific
realists who also endorse some sort of naturalism;
3) from a naturalistic point of view, relying on an evolutionary account of the human abilities
which give rise to mathematics, but such a position can be shown to be circular.
So, it seems reasonable to conclude that there is no easy way for a scientific realist to use OM in order to
show she is able to embrace MP and to make her position coherent.
References
Baker, A. (2009): Mathematical Explanation in Science. The British Journal for the Philosophy of Science
60, 611–633
Balaguer, M. (2009): Realism and anti-realism in mathematics. In: Gabbay, D., Thagard, P., Woods, J. (eds.)
Handbook of the Philosophy of Science, vol. 4, Philosophy of Mathematics, 117–151. Elsevier, Amsterdam
Bangu, S.I. (2008): Inference to the best explanation and mathematical realism. Synthese 160, 13–20
Baron, S. (2013): Optimisation and mathematical explanation: doing the Lévy Walk. Synthese, DOI
10.1007/s11229-013-0284-2
Batterman, R.W. (2010): On the Explanatory Role of Mathematics in Empirical Science. The British
Journal for the Philosophy of Science 61, 1–25
Bokulich, A. (2011): How scientific models can explain. Synthese 180, 33–45
Bueno, O. (2011): Structural Empiricism, Again. In: Bokulich, P., Bokulich, A. (eds.) Scientific
Structuralism, 81–103. Springer, Dordrecht
Colyvan, M. (2001): The Indispensability of Mathematics. Oxford University Press, Oxford
Field, H. (1989): Realism, Mathematics and Modality. Blackwell, Oxford
Glymour, C. (1980): Explanations, Tests, Unity and Necessity. Noûs 14, 31–50
Lange, M. (2013): What Makes a Scientific Explanation Distinctively Mathematical?. The British Journal
for the Philosophy of Science 64, 485–511
Melia, J. (2002): Response to Colyvan. Mind 111, 75–79
Potochnik, A. (2009): Optimality modeling in a suboptimal world. Biology and Philosophy 24, 183–197
Potochnik, A. (2010): Explanatory Independence and Epistemic Interdependence: A Case Study of the
Optimality Approach. The British Journal for the Philosophy of Science 61, 213–233
Psillos, S. (1999): Scientific Realism. Routledge, New York
Psillos, S. (2011): Living with the abstract: realism and models. Synthese 180, 3–17
Rice, C. (2012): Optimality explanations: a plea for an alternative approach. Biology and Philosophy 27,
685–703
Rice, C. (2013): Moving Beyond Causes: Optimality Models and Scientific Explanation. Noûs, DOI
10.1111/nous.12042
Sober, E. (1999): Testability. Proceedings and Addresses of the American Philosophical Association 73,
47–76
Suppes, P. (1960): A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical
Sciences. Synthese 12, 287–301
Zamora Bonilla, J.P. (2003): Meaning and Testability in the Structuralist Theory of Science. Erkenntnis 59,
47–76