Page 1 of 46

Abstracts submitted for the conference New Thinking about Scientific Realism

Contents
A [1]: General Scientific Realism .......................................................... 2
1) Morteza Sedaghat - A Practicalist Defense of Scientific Realism ......................... 2
2) J Wolff - A new target for scientific realism debates ................................... 3
3) Samuel Schindler - Kuhnian theory choice, convergence, and base rates ................... 3
4) Raphael Scholl - Realism from a causal point of view: Snow, Koch and von Pettenkofer on Cholera ... 4
5) Mark Newman - Scientific Realism, the Pessimistic Meta-Induction, and Our Sense of Understanding ... 4
6) Axel Gelfert - Experimental Realism and Desiderata of Manipulative Success .............. 6
7) Jack Ritchie - I could be wrong but it just depends what you mean: explaining the inconclusiveness of the realism-anti-realism debate ... 7
8) Dean Peters - Observability, perception and the extended mind ........................... 7
9) Adam Toon - Empiricism for cyborgs ...................................................... 9
10) Curtis Forbes - An Existential Approach to Scientific Realism .......................... 9
11) Andrew Nicholson - Are there any new directions for scientific realism? ............... 10
B [2]: Truth, Progress, Success and Scientific Realism .................................... 11
12) Michael Shaffer - Farewell to the Realism/Anti-realism Debate: Practical Realism and Scientific Progress ... 11
13) Juan Manuel Vila Pérez - A Critique of Scientific Pluralism: The Case For QM .......... 12
14) Danielle Macbeth - Revolution and Realism? ............................................ 13
15) Nora Berenstain - Scientific Realism and the Commitment to Modality, Mathematics, and Metaphysical Dependence ... 14
16) John Collier - Information Can Preserve Structure across Scientific Revolutions ....... 15
17) Juha Saatsi - Pessimistic induction and realist recipes: a reassessment ............... 16
18) Mario Alai - Deployment vs. discriminatory realism .................................... 16
19) Gauvain Leconte - Predictive success, partial truth and skeptical realism ............. 18
20) Sreekumar Jayadevan - Does History of Science Underdetermine the Scientific Realism Debate? A Metaphilosophical Perspective ... 19
21) Hennie Lötter - Thinking anew about truth in scientific realism ....................... 20
C [3]: Selective Realisms ................................................................. 22
22) Xavi Lanao - Towards a Structuralist Ontology: an Account of Individual Objects ....... 22
23) David William Harker - Whiggish history or the benefit of hindsight? .................. 23
24) Christian Carman & José Díez - Launching Ptolemy to the Scientific Realism Debate: Did Ptolemy Make Novel and Successful Predictions? ... 24
25) Timothy Lyons - Epistemic Selectivity, Historical Testability, and the Non-Epistemic Tenets of Scientific Realism ... 25
26) Peter Vickers - A Disjunction Problem for Selective Scientific Realism ................ 26
27) Raphael Kunstler - Semirealist's dilemma .............................................. 27
28) Elena Castellani - Structural Continuity and Realism .................................. 28
29) Tom Pashby - Entities, Experiments and Events: Structural Realism Reconsidered ........ 29
30) Angelo Cei - The Epistemic Structural Realist Program. Some interference. ............. 30
31) Kevin Coffey - Is Underdetermination a Problem for Structural Realism? ................ 31
32) Michael Vlerick - A biological case against entity realism ............................ 32
33) Rune Nyrup - Perspectival realism: where's the perspective in that? ................... 33
D [4]: The Semantic View and Scientific Realism ........................................... 34
34) Alex Wilson - Voluntarism and Psillos' Causal-Descriptive Theory of Reference ......... 34
35) Alistair Isaac - The Locus of the Realism Question for the Semantic View .............. 35
36) Francesca Pero - The Role of Epistemic Stances within the Semantic View ............... 36
E [5]: Scientific Realism and the Social Sciences ......................................... 37
37) David Spurret - Physicalism as an empirical hypothesis ................................ 37
F [6]: Anti-Realism ....................................................................... 38
38) Moti Mizrahi - The Problem of Unconceived Objections and Scientific Antirealism ....... 38
39) Emma Ruttkamp-Bloem - The Possibility of an Epistemic Realism ......................... 40
40) Yafeng Shan - What entities exist ..................................................... 40
41) Daniel Kodaj - From conventionalism to phenomenalism .................................. 42
42) Fabio Sterpetti - Optimality models and scientific realism ............................ 43

A [1]: General Scientific Realism

1) Morteza Sedaghat - A Practicalist Defense of Scientific Realism


Practicalism, introduced in the relevant literature as "pragmatic encroachment", is the view in epistemology that practical factors can be constitutive parts of epistemic justification. In other words, contrary to what traditional epistemology says, epistemic justification is not constituted merely by truth-related factors such as evidence, reliability, and the like. Practicalists argue for a pragmatic condition on epistemic justification, JA:
if S's belief that p is epistemically justified, then S is practically rational to act as if p. Contrastively speaking, the
less practically rational S is to act as if p (i.e., according to what was said above, the fewer practical benefits S
gains by acting as if p), the less epistemically justified S's belief that p is. Call this latter condition CJA.

The argument behind JA (and similarly CJA) is the following:

(1) S's belief that p is epistemically justified.
(2) For two states of affairs A and B, S knows that if p then S is practically rational to prefer A.
(3) Therefore, S is practically rational to prefer A, i.e. to act as if p.
(4) Therefore, JA (and similarly CJA) holds.
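As a reader's gloss (the notation is mine, not the author's), JA and its contrastive counterpart CJA can be put schematically:

```latex
% JA: epistemic justification entails practical rationality to act as if p
\text{(JA)}\qquad \mathrm{Justified}_S(p) \;\longrightarrow\; \mathrm{Rational}_S(\text{act as if } p)

% CJA: the graded contrastive reading -- as the practical rationality of
% acting as if p decreases, so does the epistemic justification for p
\text{(CJA)}\qquad \mathrm{rational}_S(\text{act as if } p)\ \text{decreases}
  \;\Longrightarrow\; \mathrm{justified}_S(p)\ \text{decreases}
```

Here the capitalized predicates are all-or-nothing and the lowercase ones come in degrees, matching the contrast between JA and CJA in the abstract.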
In other words, following what practicalism says, if S's belief that p is epistemically justified, this belief should
provide enough practical reason for S to act accordingly, and thereby acting as if p should produce more
practical benefits for S than acting as if ~p. If, however, the latter does not hold, then according to
practicalism S's belief that p is not epistemically justified, for one cannot attribute that belief (i.e. the belief that p) to
S in order to rationalize S's actions. I am, for example, epistemically justified in believing that my home's refrigerator
is empty because, among other things, I am practically rational to stop at the grocery to buy something, i.e. to act as if
my home's refrigerator is empty. If I were not practically rational to stop at the grocery to buy something, it would
be doubtful that my belief that my home's refrigerator is empty is epistemically justified. Put more precisely: since
truth leads to success, if one's acting as if p does not lead to success, it may be that p does not hold and
thereby that one does not know that p.
Now let us consider the case in which p = "scientific realism holds" and ~p = "scientific realism does not hold", and
ask which of acting as if p or acting as if ~p produces more practical benefits. If practicalism holds, we can thereby decide
which belief, p or ~p, is more epistemically justified, and hence which of p or ~p is more likely to be true. If
p finally comes out more likely to be true, we have a practicalist defense of scientific realism, i.e. a defense of
scientific realism conditional on practicalism holding.
In this presentation, having provided some intuitions for what practicalism says (setting aside objections against it),
I want to show that S acquires more practical benefits (explanation, novel prediction and unification, among
others) by acting as if "scientific realism holds" than by acting as if "scientific realism does not hold". Hence, a
practicalist defense of scientific realism.
References:
Hawthorne, J. (2004), Knowledge and Lotteries, Oxford: Oxford University Press.
Stanley, J. (2005), Knowledge and Practical Interests, Oxford: Oxford University Press.
Fantl, J. and McGrath, M. (2010), Knowledge in an Uncertain World, Oxford: Oxford University Press.
Fantl, J. and McGrath, M. (2007), On Pragmatic Encroachment in Epistemology, Philosophy and Phenomenological
Research 75 (3): 558-589.
Nagel, E. (1950), Science and Semantic Realism, Philosophy of Science 17: 174-181.
Duhem, P. (1906), The Aim and Structure of Physical Theory, trans. P. Wiener, Princeton, NJ: Princeton University
Press (1954).
Duhem, P. (1908), To Save the Phenomena, trans. E. Donald and C. Mascher, Chicago: University of Chicago Press
(1969).
Mach, E. (1893), Popular Scientific Lectures, Chicago: Open Court.
Carnap, R. (1928), The Logical Structure of the World, trans. R. George, Berkeley: University of California Press.
Carnap, R. (1936), Testability and Meaning, Philosophy of Science 3: 419-471.
van Fraassen, B.C. (1980), The Scientific Image, Oxford: Clarendon Press.
van Fraassen, B.C. (1985), Empiricism in the Philosophy of Science, in Images of Science, eds. P.M. Churchland and
C.A. Hooker, Chicago: University of Chicago Press.
van Fraassen, B.C. (1989), Laws and Symmetry, Oxford: Clarendon Press.
Earman, J. (1992), Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, Cambridge, MA: MIT
Press.

2) J Wolff - A new target for scientific realism debates


Realism and antirealism debates exist in many different areas of philosophy. The targets of such debates are
typically either certain kinds of entities, whose existence a realist affirms and an antirealist denies, or certain kinds
of claims, which a realist takes to be largely true, and an antirealist takes to be either systematically false, or as
perhaps not even truth-apt. The ontological and semantic formulations of realism debates are not unrelated; a
typical reason an antirealist might have for thinking that certain claims are systematically false is that the entities
purportedly referred to in those claims do not exist, and accordingly there is nothing to make true the claims in
question.
In my paper I aim to do three things. First, I argue that it is far from clear what the target of the current scientific
realism debate is. Secondly, I argue that both realists and antirealists could benefit from re-conceptualizing realism
debates in the philosophy of science as debates about particular sorts of claims. Finally, I consider several
candidates for claims which might serve as the target of re-conceptualized scientific realism debates.

To argue for the first point, I show that the current targets of scientific realism debates are problematic. One option is to take
science (or scientific discourse) as a whole to be the target of scientific realism debates. This is prima facie
plausible, since the status of science as an endeavor or institution is arguably what is at stake in these debates.
The downside of taking science as a whole as the target is that it leads to a well-known impasse between the
antirealist's pessimistic meta-induction (PMI) and the realist's no-miracles argument (NMA). Both arguments, at
least in their original intent, are directed at science as a whole: the NMA insists that antirealists fail to explain the
incredible success of science, whereas the PMI points to the allegedly numerous cases of overturned theories to
suggest that even our own best theories might very well turn out to be false. There are three alternatives to this
impasse. The first is van Fraassen's famous proposal to make the aim of science the target of realism debates;
realists, van Fraassen suggests, are those who take science to aim for truth, whereas empiricists are those who
take science to aim merely for empirical adequacy. The second response comes in the form of various selective
realisms, which try to identify that part of scientific theories which is likely to survive through theory change. The
third response is to abandon science as a whole as the target, and to take particular theories or models as targets
of scientific realism instead. An example of this can be seen in recent trends of retail as opposed to wholesale
realism, which restricts the NMA to particular models or theories, without committing to an extension of the
argument to science as a whole. I argue that each of these responses faces severe difficulties, which motivates the
search for an alternative.

To argue for the second main point, I compare scientific realism debates to realism
debates in other fields, in particular in meta-ethics. Questions about realism in ethics proceed from having
identified a distinctive feature of ethical claims: they are normative. In contrasting the normativity of ethical claims
with that of descriptive claims, questions about realism in ethics can be raised as questions about what makes
distinctively normative claims true, in (putative) contrast to the truth-makers of descriptive claims. This suggests
that it is a good strategy for articulating realism debates about a particular discourse to identify which claims, if
any, in that discourse appear to be problematic, and why. Doing so benefits both realists and antirealists, since it
allows for a clearer statement of the various realist and antirealist positions one might want to take.

In light of this
I look at several candidates for such claims within science: claims about unobservable entities, modal claims, and
quantitative claims. Science arguably involves claims of each sort, which means a realist about science should be in
a position to take (at least some) claims of these three types to be true. I argue that while each kind of claim is
potentially epistemically problematic when compared to qualitative, non-modal claims about observables, modal
claims and quantitative claims are, in addition, also semantically and metaphysically problematic. For both modal
claims and quantitative claims, the question arises what makes them true, and it seems, at least prima facie, that
their truth-makers differ from the truth-makers of non-modal claims and the truth-makers of qualitative claims
respectively. This is not the case for claims about unobservable entities: we have no good reason to think that
electrons cannot make true claims about electrons in very much the same way in which apples make true claims
about apples. That unobservable entities are at best epistemically, but not ontologically, problematic is widely
recognized by contemporary empiricists as well. An antirealist position about science, which targets modal claims
or quantitative claims, by contrast, would have metaphysical and semantic aspects as well. Conversely, a realist
about science would do well to provide truth-makers for modal claims and quantitative claims. I conclude that
both modal claims and quantitative claims make for an appropriate target of realism debates about science.

3) Samuel Schindler - Kuhnian theory choice, convergence, and base rates


The arguably strongest contemporary argument for scientific realism, the No-Miracles Argument (NMA), has it that
it would be a miracle if our theories were as successful as they are and yet not true. As Howson (2000) pointed out,
however, as normally stated the NMA commits the base rate fallacy: it ignores the base rate / prior probability of
theories being true (depending on one's preference for interpreting probabilities in frequentist or
subjectivist terms, respectively). But the base rates matter: when the base rate is very low, the posterior
probability of a successful theory being true will be very low, even when the error rates, i.e., the false positive and
the false negative rates (i.e., the rate of a theory being successful if false, and the rate of a theory not being
successful if true, respectively) are very low. And apparently, setting the values for the base rates is elusive. If
probabilities are construed objectively, then it looks as though we have no way of finding out about them. If, on the
other hand, probabilities are construed subjectively, then both the realist and antirealist can set the priors as they
please. A rational debate about realism can then no longer be had (Magnus and Callender 2004).
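The base-rate point can be made concrete with Bayes' theorem. The following sketch is illustrative only: the numerical rates are my assumptions, not values defended in the abstract.

```python
# Hedged sketch of the "base rate fallacy" charge against the NMA
# (Howson 2000): with a very low prior probability of theories being
# true, a successful theory is still probably false, even for low
# false-positive and false-negative rates.

def posterior_true_given_success(base_rate, false_pos, false_neg):
    """P(theory true | theory successful) via Bayes' theorem.

    base_rate = P(true); false_pos = P(successful | false);
    false_neg = P(not successful | true).
    """
    p_success_given_true = 1.0 - false_neg
    numerator = p_success_given_true * base_rate
    denominator = numerator + false_pos * (1.0 - base_rate)
    return numerator / denominator

# Illustrative assumption: only 1% of theories are true, error rates 5%.
# The posterior stays low, so success alone is weak evidence of truth.
print(round(posterior_true_given_success(0.01, 0.05, 0.05), 3))  # prints 0.161
```

Note that the same function shows why the priors matter: raising the assumed base rate to 0.3 pushes the posterior close to 0.9, which is why a subjectivist can set the priors to suit either side of the debate.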
This paper will argue that the Kuhnian picture of theory choice suggests a strengthened defense of scientific
realism. On the Kuhnian picture of theory choice, it is normally the case that a theory possesses some virtues but
not others. The following amplified No-Miracles-type argument (NMtA) then suggests itself: it would be an
unlikely coincidence if a theory were to possess all five standard virtues and yet not be true.
When formalizing such a NMtA, error rates now need to be fixed for each of the theoretical virtues, giving the
NMtA more leverage than the traditional NMA. Furthermore, it will be shown that there are principled and non-
arbitrary grounds for setting the error rates at particular levels, whilst the principle of charity towards the
antirealist is observed. Setting the error rates in this way will then (non-arbitrarily) determine the base rates. The
base rate neglect charge is defeated. Interestingly it turns out that the Kuhnian picture of theory choice allows the
realist to concede that the base level of true theories is rather low and still have it her way.
Given the principled reasons for setting the error rates, the antirealist can now no longer simply insist that the
base rate be lower; she must challenge the fixing of the error rates by argument. Magnus and Callender's
skepticism about the resolvability of the realism debate is thus rebutted.

4) Raphael Scholl - Realism from a causal point of view: Snow, Koch and von Pettenkofer on Cholera
In current debates about scientific realism, much deserved attention is paid to the problem of unconceived
alternatives, which P. Kyle Stanford has developed in a rich series of detailed and well chosen case studies.
However, it has not yet been explored how the problem of unconceived alternatives presents itself in sciences
which can be broadly described as causal and mechanistic (for instance, molecular biology). There are reasons to
think that the traditional problem as formulated by Stanford does not present itself: in causal inference, the space
of possible hypotheses tends to be exhausted by the contradictories "P is a cause of Q" and "P is not a cause of Q"
(this was emphasized by Peter Lipton in his debate with Bas van Fraassen about the argument from the bad lot).
While it may be difficult to determine which of these is true, there is no obvious room for unconceived alternatives.
Nevertheless, there is ample room for debate about causal claims. First, we may question whether causal relevance
or salience has been successfully demonstrated (for example if confounding is suspected). Second, we may ask
what causal co-factors C are necessary for a given cause P to exert its effect on Q, and how often these co-factors
are in fact realized. Third, we may debate the existence and relevance of alternative causal pathways promoting or
preventing the occurrence of Q. Fourth, we may define event types at too coarse-grained or too fine-grained levels
of description. Fifth and finally, there may exist unknown potential causes whose causal relevance we have not yet
explored. If this characterization is correct, we would expect it to affect the dynamics of actual scientific debates in
the causal and mechanistic sciences. In a detailed case study of John Snow's, Robert Koch's and Max Joseph von
Pettenkofer's work on cholera in the 19th century, I will show that the categories outlined above illuminate large
parts of the actual debates about the causation and mechanism of cholera. I conclude that there are important
parts of science where unconceived alternatives as traditionally conceived are not the primary problem for
scientific realism.

5) Mark Newman - Scientific Realism, the Pessimistic Meta-Induction, and Our Sense of Understanding
In his (2005, 2006, 2011, 2014) Michael Devitt argues that an adequate version of the Pessimistic Meta-Induction
must show not only that we have frequently got things wrong in our unobservable posits, but also that, despite
methodological improvements, we have not been getting things increasingly right. His view is that we now have much
more sophisticated and rigorous scientific methods than in previous centuries, so appeals to historical errors such
as Phlogiston, Caloric, and the Luminiferous Ether are irrelevant to current optimism about our theories. This
amounts to the claim that the PMI fails as an argument against scientific realism unless we have evidence against
our highly reliable current scientific methods.
J.D. Trout has provided the seed of just such undermining evidence. In his series of papers (2002, 2005, 2007) he
argues that the sense of understanding, which often reflects our feeling of having grasped the underlying causal
nature of some part of the world, has contributed to a long historical train of scientific errors. The sense of
understanding, he argues, is highly unreliable, and yet is at least part of the reason scientists accept theories (he
cites Ptolemy, Galen, and the alchemists). This alone does not provide enough evidence to undermine Devitt's
claim, but Mark Newman (2010) has argued that the error of taking the intelligibility of a theory too seriously led
directly to the most noted PMI cases: the acceptance of Phlogiston, Caloric, and the Luminiferous Ether. Unless
scientists are nowadays ignoring intelligibility as a part of their selection criteria for theories (a dubious claim),
then, when combined with Trout's thesis, Newman's work shows, contra Devitt, that we do currently have reasons
to think contemporary scientific methodology is suspect. Their work therefore provides a new reason for
thinking the PMI plausible after all.
One might think we ought to recommend that scientists cease using their sense of understanding when evaluating
theories. I don't pursue that line, for their sense of understanding may still have epistemic benefit. Instead, I
believe a different account of understanding can be used by the realist to respond to the PMI, and I provide a
theory along those lines. In contrast to the internalist sense of understanding about which Trout sows seeds of
doubt, this is an externalist account, one which avoids the subjective and unreliable phenomenology of
understanding, instead opting for objective, testable evaluative criteria. I argue that when certain understanding
criteria are satisfied by a theory, and combined with independent background evidence for underlying physical
principles of that theory, they are found to track with successful theories in the history of science, and fail to track
those ultimately false theories which fall on the scrap heap of science. This account of understanding therefore
provides a reason to reject the PMI: when scientists have selected theories that they understood in this technical
sense, and they had independent evidence for the underlying physical principles of the theory, they have selected
correctly. I think it plausible to believe our current scientific methods incorporate just such a condition of
understanding on adoption of new theories.
I offer the following externalist account of understanding, which I call the Inferential Model of Scientific
Understanding (IMU):
(IMU): S understands scientific theory T iff S can reliably use principles Pn constitutive of T to make goal-conducive
inferences for each step in a problem-solving cycle which reliably results in solutions to qualitative problems relevant
to that theory.
This definition of understanding a theory demands a fair amount from the scientist, and I think when it is satisfied
we have good reason to be initially optimistic about the theory in question. I argue that past failed theories never
enabled scientists to fully satisfy this condition, though our current theories do so. The key to satisfying this
definition is a scientist's ability to use underlying physical principles to make correct inferences regarding each part
of a specific problem-cycle relevant to the theory. The problem-solving cycle requires satisfaction of four steps: (i)
correctly describing the problem by constructing a mental model; (ii) selecting correct background theoretical
principles relevant to the problem; (iii) applying those background principles to a specific problem case; (iv)
planning the problem-solving sequence from start to finish.
Theories that are intelligible in this way, I argue, possess the property of being reliable indicators of truth when
they also have independent background evidence for their constitutive physical principles. To defend this claim I
make the following argument:
To possess the resources for a scientist to satisfy (IMU) a theory must incorporate correct underlying principles
(principles incorporating properties which explain the behavior of the properties of the problem) which are used
to make correct qualitative inferences about the problem at hand. For instance, we have independent evidence for
the principle of conservation of mechanical energy. On the other hand, although Phlogiston, Caloric, and the
Luminiferous Ether had explanatory underlying principles like this (principles which explained why their
particles had the properties they did), these principles were not independently confirmed. Contemporary theories we
consider approximately true do have such independently confirmed principles. Thus, we have reason to think this
externalist account of understanding tracks with correct theories and can be used to defend realism against the
PMI.
I close by considering why we need an externalist account of understanding at all: if the internalist sense of
understanding can similarly point to constitutive principles which have to possess independent evidence, why
bother with externalist understanding? I answer that the sense of understanding, unlike my externalist account,
does not direct us to those physical principles which must be independently confirmed in order to secure theory
justification.
References
Devitt, Michael (2005) Scientific Realism. In The Oxford Handbook of Contemporary Philosophy, Frank Jackson and
Michael Smith, eds. Oxford: Oxford University Press: 767-91.
______. (2006). The Pessimistic Meta-Induction: A Response to Jacob Busch. Sats Nordic Journal of Philosophy 7:
127-35.
______. (2011) Are Unconceived Alternatives a Problem for Scientific Realism? Journal for General Philosophy of
Science 42: 285-93.
______. (2014) Realism/Anti-Realism. In The Routledge Companion to Philosophy of Science, Martin Curd and
Stathis Psillos, eds. New York: Routledge.
Newman, Mark (2010) Beyond Structural Realism: pluralist criteria for theory evaluation Synthese 174: 413-443.
Trout, J.D. (2002) Scientific Explanation and the Sense of Understanding Philosophy of Science, 69, pp. 213-233.
______. (2005) Paying the Price for a Theory of Explanation Philosophy of Science, 72, pp. 198-208.
______. (2007) The Psychology of Scientific Explanation Philosophy Compass 2: 564-91.

6) Axel Gelfert - Experimental Realism and Desiderata of Manipulative Success


In his influential 1983 book Representing and Intervening, Ian Hacking argues for manipulative success as the
decisive criterion for when theoretical entities should be considered real. Only once we are able to manipulate
other parts of nature in a systematic way using the (candidate) entities concerned will they have ceased to be
"something hypothetical, something inferred" (Hacking 1983: 262). In a later paper, he reaffirms this position in
response to criticism: "When we use entities as tools, as instruments of inquiry, we are entitled to regard them as
real" (Hacking 1989: 578). The resulting form of scientific realism is usually referred to as entity realism (or
experimental realism), and has received significant attention over the past couple of decades with respect to
specific scientific examples (e.g., Shapere 1993, Gelfert 2003), in relation to more general forms of semirealism
(Chakravartty 1998), and concerning its place within the realism/anti-realism spectrum (Massimi 2004). The
present paper aims to forge a connection between this debate and recent accounts of scientific practice and
experimentation; it does so through the lens of one specific scientific challenge to entity realism, namely the case
of quasi-particles in physics (Gelfert 2003), which has recently been the subject of further discussion and
elaboration (Falkenburg 2007, 2014; Gelfert 2011; McKenzie 2010; see also Vallor 2009).
At one level, the case of quasi-particles fits with earlier scientific challenges that were intended to call into
question Hacking's criterion of manipulative success. Thus, Shapere (1993) argues that, surely, manipulability
(that is, our ability to exploit the causal powers of at first merely hypothetical entities) cannot be a necessary
condition for their being considered real. After all, there are many entirely contingent reasons why we may be
unable to exploit a real entity's causal powers: for example, the entity may simply be too large, or too far removed
from us, or both, as in Shapere's example of gravitational lenses. Gravitational lenses, Shapere argues, manifest
themselves in clearly observable ways (by bending light from distant stars around them, thus producing clearly
observable patterns on images produced by telescopes). Perhaps, then, manipulative success is not intended as a
necessary condition at all, but as a sufficient condition that is, there can be no instances of successful, systematic
manipulation without the exploitation of causal powers that are unique to the presumed entities (or experimental
tools) in question. However, as Gelfert (2003) argues, physics is replete with examples where such a move from
systematic manipulative success to the reality of the hypothetical entities in question is not warranted. This
applies especially to the case of quasi-particles phenomena in complex correlated systems (e.g. many-electron
systems) which mimick the (chimerical) appearance of independent particles (complete with apparently
measurable properties such as a virtual effective mass', average life-time, electric charge, and so forth). Examples
of quasi-particles are electron holes and excitons in semiconductor physics, or phonons (quantized lattice
vibrations) in solid-state crystals. The existence of such quasi-particles is illusory, insofar as quasi-particles are
mere artefacts of the collective behaviour of the correlated many-electron system: their apparent properties and
effects do not have independent reality, but are simply functions of the total many-electron system. It would seem
then, that Hackings criterion of manipulative success initially put forward as a fail-safe criterion for
distinguishing between real (causally efficacious) and merely theoretical (potentially non-existent) entities
fails to deliver any necessary or sufficient conditions on when we should consider scientific entities to be real or
not.
However, the case of quasi-particles may not be as clear-cut as it seems. As Falkenburg (2007, 2014) has
argued, acknowledging the shortcomings of Hacking's criterion of manipulative success does not mean that there
is no interesting middle ground of particle-like phenomena, where the proper articulation of such phenomena
may require the use of particle-like (or entity-like) terms. The question, then, is not "Do quasi-particles exist?"
but "How do quasi-particles exist?" (Falkenburg 2014). Rather than limiting its argument to the status of quasi-
particles in particular, the present paper adopts a diagnostic approach towards the initial challenge to Hacking's
criterion (Gelfert 2003) and the subsequent criticisms of it (Falkenburg 2007, McKenzie 2010, Falkenburg 2014).
What the debate shows, it is argued, is a need for a richer characterization of experimental practice, including an
account of desiderata for what counts as an instance of successful manipulation, as well as for a more fine-
grained taxonomy of the ontological and explanatory status of scientific entities. Regarding the former, the paper
draws on Vallor's (2009) phenomenological defense of experimental realism and Pickering's (1993) concept of the
"mangle of practice", that is, the interplay between resistance and accommodation that plays out in the
experimenter's attempt to bring experimental skill and theoretical knowledge (including what Hacking calls "home
truths") to bear on a concrete empirical challenge. Regarding the latter, it is argued that, although the initial
challenge (Gelfert 2003) to the criterion of manipulative success stands, at least to the extent that it addresses
Hacking's proposal on its own terms, the subsequent criticisms (Falkenburg 2007, 2014; McKenzie 2010) point to
the existence of a stable middle ground of quasi-entities. Whatever degree of reality such quasi-entities enjoy is
not jeopardized by entity realism's failure to distinguish between entities and quasi-entities.
7) Jack Ritchie – "I could be wrong but it just depends what you mean": explaining the inconclusiveness of the
realism-anti-realism debate
We can identify in broad terms two ways of characterising the realism-anti-realism debate in the philosophy of
science. One, made famous by van Fraassen, concerns whether the aim of science is truth or something less
ambitious, like empirical adequacy. The other concerns whether we have good reasons to believe in the claims of a
scientific theory that go beyond what has been or can be observed. The first is concerned with a descriptive
matter: what is the goal of science? The second deals with a normative matter: what ought we to believe? In this
paper I argue that the second debate, at least in its contemporary form, has run its course.
Realism-anti-realism debates of this second kind focus on two main arguments: the No Miracles Argument and
the Pessimistic Meta-Induction (or sophisticated variations of it, like Stanford's Problem of Unconceived Alternatives).
Realists typically claim we ought to believe our scientific theories because they are empirically successful and the
best explanation of that empirical success is that they are true or approximately true. Anti-realists seek to
undermine this argument by appealing to the history of science. They point to cases of empirically successful
theories in which key theoretical terms such as "ether" failed to refer, so undermining claims that such theories
were true or even approximately true. The realist in turn will say that closer examination of these cases provides
evidence of a more sophisticated kind of referential continuity which vindicates realism; and so on. It is this
dialectic which is doomed to be forever inconclusive. The problems with the standard debate can be grouped into
two kinds: epistemological and semantic.
Epistemological problems
All parties to the debate should be fallibilists. They should admit that we might be wrong in unknown ways about
what we believe about the world. Given this is so, what is sometimes called the "prospective challenge" for realism
falls away. If the anti-realist expects that realists should be able to identify elements within a theory which they
know will be retained whatever the future development of science, then that is a hopeless demand for clairvoyance
and at odds with fallibilism. If all that is meant by the prospective challenge is that realists should be able to make
plausible but fallible claims about which parts of their theories they consider to be best supported by the evidence
and most likely true, then that challenge can easily be met. Scientists and others often have a good idea of which
parts of a theory are most secure. However, they would also admit, as good fallibilists, that they may be, and
probably are, wrong in some of the details. So case studies in which scientists' best guesses about which aspects of
a theory would be retained turned out to be mistaken prove nothing, unless you reject fallibilism.
Semantic problems
A presupposition of the debate is that in order to make sense of claims or denials of approximate truth we need to
have a well-defined reference relation (or if you are a structural realist perhaps a well-defined semantic similarity
relation). We need something like this, the thought goes, to be able to make some sense of the idea that current
scientists are talking about roughly the same sort of things as past sciences; and we need to do that in order for it
to be even plausible that past theories are approximately true. A great deal of labour then has gone into the project
of coming up with the correct theory of reference. I suggest there is a common way of understanding this project
which is fundamentally misconceived: it is a mistake to think of theories of reference as providing a set of
naturalistically specifiable conditions which relate words to some aspect of the world. I illustrate the folly of this
approach by appeal to some well-known paradoxes found in the writings of Stephen Stich and Huw Price.
What we are in fact given by various theories of reference or semantic continuity are conditions for interpreting
the past theory in the terms of our current theory. Once we recognise that we are interpreting in this way, we can
see that claims and counterclaims about continuity are mediated by non-factual judgements of reasonableness. We
must judge what we think is the most reasonable or the most charitable interpretation of past science. I contend,
using the examples of phlogiston and the ether, that the facts of these cases underdetermine realist and anti-realist
interpretations. Both are possible and both may be judged reasonable.
The upshot of this is that once the bad epistemology and metasemantics are cleared away from the realism-anti-
realism debate, there is nothing substantive to be argued about. There are realist ways of articulating the history of
science which emphasise continuities; and anti-realist ways which emphasise discontinuities. What we know of
the history of science allows either story.
I conclude by briefly showing how the permissibility of either a realist or an anti-realist interpretation of the
history of science plays a role in van Fraassen's arguments for Constructive Empiricism, particularly in his
characterisation of what Jean Perrin was doing in his famous Brownian motion experiments. Finally, I argue that
if there is an interesting realism-anti-realism debate, it is of the kind van Fraassen directs us to, and that in that
debate the history of science will play a quite different role.
8) Dean Peters – Observability, perception and the extended mind
In this paper, I will sketch a general account of perception modelled on the account of cognition provided by the
extended mind thesis (Clark and Chalmers, 1998). This account potentially has many applications, but I focus on
one, namely the notion of observability in the scientific realism debate. Scientific realism states that scientific
theories can make (approximately) true claims about not only the observable world, but also about unobservable
entities and processes. A major competing view is constructive empiricism (van Fraassen, 1980), which states that
we have no warrant for believing claims about unobservables, i.e. those mediated by scientific instruments, and
that the goal of science is thus empirical adequacy. I will argue that selective scepticism in respect of
unobservables is untenable, given the theory of perception sketched out.
Churchland (1985) is a forerunner of this approach, arguing that which perceptual capacities we actually happen
to possess is too contingent a matter to be epistemically significant. If humans were, for instance, all born with
microscopes affixed to their left eye, our natural conception of what counts as observable would differ. Van
Fraassen (1980, 1985) replies that what counts as an observable phenomenon is a function of what the epistemic
community is (that "observable" means "observable-to-us"). To the constructive empiricist, Churchland's argument
illicitly presupposes that his hypothetical humanoids are already part of our epistemic community.
The correct response to this, I will argue, is to emphasise that belief in the outputs of our native perceptual
capacities also requires justification. Psillos (1996) gestures in this direction when he argues that van Fraassen's
arguments against inference to the best explanation apply just as well to the ampliative inference from empirical
success to empirical adequacy as they do to the inference from success to truth. The deeper point is that inference
senses as perceptual capacities. Consider the standard sceptical worry, that I, the observer, am a brain in a vat, and
that the objects I am apparently observing do not exist. This worry is undermined by the fact that certain
features of objects are best explained by their actual existence. Russell (1912) emphasises the spatiotemporal
coherence of objects. A cat, for instance, is first observed in one part of the room, then later in another, without
appearing at each intermediate point. Dennett (1992) emphasises that we actively seek out new perceptual
contact with objects. So to simulate the existence of an object would require preparing for any possible interaction
the brain might wish to have with it, resulting in a combinatorial explosion in the number of simulated states
required. Thus, although it is logically possible that our perceptions of objects are illusory, the vat-keeper would
have a far easier time simply providing actual objects!
So, there is no obvious way to argue for the existence of even ordinary observable objects without inference to the
best explanation. This sort of consideration could be wielded as a Psillos-style tu quoque against the constructive
empiricist, attacking at the level of observability rather than empirical adequacy. More interesting, however, is to
offer a positive general account of perceptual capacities, drawing on the extended mind thesis. Clark and
Chalmers argue that a process should be counted as mental if and only if it is functionally integrated into one's
central cognitive processes, i.e. is reliably accessible, has a high-bandwidth connection to the centre, etc. Many
(but not all) brain processes meet this standard, and some external processes do as well. For instance, if an
Alzheimer's sufferer uses a notebook to record important facts, Clark and Chalmers claim that the contents of the
notebook should, under certain circumstances, be considered part of his memory.
Analogously, we might say that something counts as a perceptual capacity if and only if it is functionally integrated
into our other cognitive (epistemic) processes. Importantly, the key markers of functional integration
significantly overlap with those features that lead us to infer the existence of objects. Following Russell's
argument, the output of a perceptual capacity should be coherent, both internally and with respect to the output of
other capacities. We would doubt that we were really detecting a cat if our visual image of it lacked spatiotemporal
coherence, or if we could hear it but were consistently unable to observe it visually. Following Dennett's argument,
a perceptual capacity should be a rich source of information across a wide variety of circumstances, and its output
should depend on where it is directed.
These criteria (coherence, bandwidth, reliability and directability) are not exhaustive. Nevertheless, different
sensors satisfy them to different degrees. Thus, the biological sensors possessed by humans will not necessarily
possess all these features to a greater extent than all artificial sensors. For instance, observations obtained via even
a simple light microscope are significantly richer in information than the output of the vestibular apparatus
(although the latter is usually more reliably available). Thus, to the extent that the outputs of such instruments
satisfy the stated criteria, the objects that they purportedly reveal should be counted as observable.
Van Fraassen might object that, unlike our native capacities, our artificial perceptual capacities are acquired.
However, depriving a newborn animal of vision for a time can render it permanently blind (Wiesel and Hubel,
1964), apparently demonstrating that visual capacity is in fact acquired. Moreover, it is implausible that full-
fledged perceptual capacities would be hardwired, as this would require wiring information to be genetically
encoded. Given faculties of learning or neural plasticity, environmental exposure ensures that perceptual organs
become functionally integrated. Of course, our native perceptual capacities are no doubt acquired by specialised
learning faculties. But it remains to be shown that the acquisition of non-native capacities by means of general-
purpose learning faculties differs in kind (as opposed to simply speed) from this more specialised process. I
conclude that there is no principled epistemic distinction between native and artificial capacities, and that the
observable/unobservable distinction is therefore refuted.
9) Adam Toon – Empiricism for cyborgs


One important debate between scientific realists and constructive empiricists concerns whether we observe
things using instruments. Scientific realists argue that we do and that the development of scientific instruments
has enabled us to observe new realms of phenomena previously beyond the reach of our senses. According to
the realist, for example, the invention of the microscope means that we can now see cells and microbes. In
contrast, constructive empiricists argue that the use of instruments does not count as observation. The
development of instruments has created new phenomena that we can observe with the naked eye and which
our theories must accommodate, such as the tracks in a bubble chamber or the images produced by an electron
microscope or CAT scanner, but it has not widened the reach of our senses. Observation remains limited to the
use of our unaided senses and, as a result, for the constructive empiricist, so too does scientific knowledge.
Realists often speak of instruments as extensions to our normal cognitive capacities. For example, in his book on
instruments and computational science, revealingly entitled Extending Ourselves, Paul Humphreys argues that
"[o]ne of science's most important epistemological and metaphysical achievements has been its
success in enlarging the range of our natural human abilities" (2004, pp. 3-4).
In Humphreys's view, the extension of our natural abilities through instruments has profound implications for
epistemology and philosophy of science. In fact, in extending ourselves, "scientific epistemology is no longer
human epistemology" (2004, p. 8).
In this paper, I will ask whether the realist may flesh out her view of instruments by drawing on the extended
mind thesis (Clark and Chalmers, 1998). Proponents of the extended mind thesis claim that cognitive processes,
including perceptual processes, can sometimes extend beyond our brains and bodies into the environment.
Although some have begun to explore the consequences of the extended mind thesis for epistemology (e.g. Clark
et al., 2012; Pritchard, 2010; Vaesen, 2011), its implications for the philosophy of science have yet to be properly
explored (although see Estany and Sturm, 2014). I will suggest that the extended mind thesis offers a way to
make sense of realists' talk of instruments as extensions to the senses, and that it provides the realist with a new
argument against the constructive empiricist view of instruments. One defender of the extended mind thesis
describes humans as "natural born cyborgs", ever ready to incorporate external devices into their cognitive
processes (Clark, 2003). In this paper, I consider the consequences of this vision of humanity for the debate
over scientific realism. The result, I suggest, is an "empiricism for cyborgs": a position consistent with the
constructive empiricist's core claim, that we should restrict our belief to observable phenomena, but in which
the limits of observation far outstrip what can be seen with the naked eye.
The structure of the paper is as follows. First, in Section 2, I briefly review the debate over instruments
between realists and constructive empiricists. In Sections 3 and 4, I introduce the extended mind thesis and
show how realists might use it to offer a new argument against the constructive empiricist view of instruments,
which I will call the extended perception argument. In Section 5, I consider how this argument differs from well-
known realist arguments put forward by Grover Maxwell and Paul Churchland. Finally, in Section 6, I consider
some of the strengths of the extended perception argument compared to other realist strategies, as well as
some likely objections. We will also see how the debate between realists and constructive empiricists might
turn out to depend upon our conception of persons and the bounds of the epistemic community.

10) Curtis Forbes – An Existential Approach to Scientific Realism


My project is to assess the existential value of choosing a realist approach to the philosophical interpretation of
science; that is to say, I seek to determine which human values, roles, and activities seem to be best served by a
realist stance or attitude, in comparison to various antirealist stances or attitudes. I make my assessment on
the basis of a) historical case studies of working scientists, b) the philosophical implications of realism beyond its
metaphysical foundations, and c) the prospects of realism for informing social policy-making.
Motivation and Background
In a recent and well-received treatment of scientific realism, Anjan Chakravartty took the approach of
offering what he calls "not a defence of realism, per se" (2007, xi). This work, instead, is best understood as a
clarification of what realism entails in terms of its foundational metaphysical assumptions. So while his primary
objective is not to defend or to condemn, the idea is that "when equipped with a better understanding of what
[scientific realism] entails and does not entail, one may find oneself in a better position to defend or condemn it"
(ibid., xi-xii). Chakravartty's particular approach to the realism question can be understood as asking about the
consequences of taking a realist attitude for our metaphysical beliefs: what sorts of metaphysical beliefs will we
have if we are consistent and committed realists? My approach to the realism question can be similarly
understood, as asking about the existential consequences of taking a realist attitude for our social, political, and
scientific practices: what kind of person am I likely to be, what values might I serve, what kind of policies can I
make, and what kind of science will I practice if I choose to be a scientific realist? I have a similar hope for my
project to the one that Chakravartty has for his: while not a defence or condemnation of realism on its own, I hope
that knowing the historical and possible future effects of scientific realism on social, political, and scientific
practices will allow people a more informed choice when choosing to accept or reject it for themselves.
Analysis and Assessment
My analysis is threefold: first, I assess the consequences of realist attitudes in scientific practice through historical
examples, focusing on the contrast between metaphysically inclined and antimetaphysically inclined researchers
in late 19th century electrodynamics (Weber and Helmholtz, respectively). This part of my analysis, following
Robin Hendry (1995 and 2001), is meant to challenge some of Arthur Fine's claims about how choosing a realist
or an anti-realist attitude makes little difference to scientific practice. I depart from many commentators (e.g.
Hendry 1995 and Psillos 2000) in not arguing that realism is the best philosophical framework for scientific
research tout court, but I do argue that there are specific circumstances (e.g. the measuring of natural constants,
property magnitudes, and theoretical parameters) in which a realist outlook seems more fruitful than its
alternatives, even if something similar can be said for its anti-realist rivals in other circumstances (e.g. an anti-
realist empiricist approach seems, historically speaking, to be much more useful when exploring novel phenomena
in the laboratory and developing new theoretical frameworks). Second, I assess the ways that realist attitudes can
resolve certain philosophical issues relating to the sciences, e.g. with respect to scientific revolutions and the
epistemology of the sciences. This part of my analysis is meant to mirror van Fraassen's contention in The
Empirical Stance that an anti-metaphysical attitude can best resolve philosophical issues concerning the nature
and implications of radical theoretical change in science. I argue, on the basis of the now familiar "optimistic
induction", and other work by Chakravartty, Psillos, and others, that scientific realists have several very
perceptive solutions to the various philosophical problems brought up by scientific revolutions. With a consistent
metaphysical backing for the view, it would seem that scientific realism, for the most part, faces few philosophical
challenges that appear to be better resolved by one of its alternatives. In my third and final section I assess the
prospects of realist attitudes in developing science and social policy. I argue that the efforts of realists to make
sense of scientific revolutions have generated perspectives on the sciences (e.g. Chakravartty's distinction between
detection and auxiliary properties) that implicitly claim to be capable of making novel predictions about the future
of scientific inquiry. If such perspectives prove consistent with the history of science (and, as it stands, it seems
like they are), this would give us some ability to predict the course of future scientific change, and that would be
quite useful for various forms of science policy-making. At the same time, I note, overly conservative and
hegemonic views have often been supported by misguided realisms, so care must be taken in allowing realism too
much sway in social policy-making about the sciences.

11) Andrew Nicholson – Are there any new directions for scientific realism?
Can the debate between scientific realism and anti-realism be satisfactorily resolved? Over the past thirty years
several authors (e.g. Fine (1986a, 1986b, 1991), Blackburn (2002) and Stein (1989)) have argued for a negative
answer to this question. Instead of continuing to attempt to argue for the endorsement of either
realism or anti-realism, philosophers of science, we are told, should embrace a "quietism" (Fine, 1986) about such
matters. Although views of this sort are united in this claim, they differ over the issue of just what is wrong with
the realism/anti-realism debate. On the one hand, there are those, like Fine, who claim that there is a principled
distinction to be made between realist and anti-realist interpretations of science; the problem, rather, is that
this distinction is uninteresting or unimportant (Fine (1986a; 1986b)) or does not admit of a neutral basis for
adjudication (cf. Wylie (1986), Chakravartty (2011)). On the other hand, there are those, like Blackburn and Stein,
who claim that there is no principled distinction here at all. Clearly, the latter view is the more extreme, and the
threat which it poses to the project of exploring what new directions exist for scientific realism is consequently
more severe.
In this paper, I take up the challenge of defending the idea that there is a principled distinction to be had between
realist and anti-realist interpretations of science. Central to this task will be an examination of what I take to be
the most well-developed argument for the contrary thesis, that presented by Blackburn (2002). The central
argument in the explicit case made by Blackburn is to the effect that no coherent, principled distinction can be
made between the realist's notion of belief and the constructive empiricist's notion of immersion (what
Blackburn refers to as "animation"). I argue that although Blackburn's explicit focus is on the debate between the
scientific realist and the constructive empiricist about the appropriate attitude to take to the theoretical claims of
scientific discourse, his arguments readily generalise to the debate between the scientific realist and anti-realist.
In brief, the anti-realist, in order to adequately accommodate our, at least, instrumental reliance on science, must
either endorse a reinterpretation of scientific discourse or endorse the adoption of an epistemic attitude towards
such discourse which falls short of belief (cf. Stanford (2006)). However, it is the latter option which is preferable,
and accommodation of our full instrumental reliance necessitates the endorsement of the adoption of the attitude
of acceptance in such a way that we are animated (in Blackburn's sense) by the relevant theory. Hence the
coherence of the distinction between realist and anti-realist interpretations of science depends upon the
coherence of the distinction between belief and animation, and so Blackburn's argument applies directly to this
more general case.
As a result of this first part of the discussion, I frame the central puzzle which must be resolved if we are to meet
Blackburn's challenge: we must establish a notion of animation which simultaneously satisfies the following four
desiderata: (i) the relevant notion of animation must differ from the notion of belief (where this latter concept is
rendered in a way which is intuitively acceptable and compatible with scientific realism); (ii) it should be capable
of grounding an anti-realist argument to the effect that we should not proceed past animation to belief; (iii) the
notion must be recognisably anti-realist; and (iv) it should accommodate our full instrumental reliance on
science.
In the second part of the paper, I present and assess three approaches one might adopt in establishing the
appropriate distinction between belief and animation. The first approach is based on Stanford's discussion in the
last chapter of his (2006), in particular on his attempt to develop a principled and coherent restricted theoretical
instrumentalism. The second approach aims to give the appropriate account of animation in terms of
contemporary accounts of pretense or make-believe. This approach draws heavily on the philosophical projects
of Walton (1990) and Yablo (1998) as well as recent work by Toon (2012). The third approach attempts to
account for animation by appealing to second-order cognitive attitudes. I argue that the first strategy does not
satisfactorily deal with Blackburn's challenge since it either falls foul of desideratum (i) or desideratum (iv),
above. I then proceed to argue that whilst the second and third approaches are deficient as they stand, with
suitable amendment and combination they lay a potential foundation for a more satisfactory approach.
In the third and final section of the paper, I provide a preliminary sketch of this more satisfactory alternative.
References
Blackburn, S. (2002). "Realism: Deconstructing the Debate". Ratio, 15, 111-133.
Fine, A. (1986a). "The Natural Ontological Attitude", in The Shaky Game: Einstein, Realism, and the Quantum
Theory. Chicago: University of Chicago Press.
---------. (1986b). "Unnatural Attitudes: Realist and Instrumentalist Attachments to Science". Mind, 95, 149-179.
---------. (1991). "Piecemeal Realism". Philosophical Studies, 61, 79-96.
Stanford, K. (2006). Exceeding Our Grasp: Science, History and the Problem of Unconceived Alternatives. Oxford:
Oxford University Press.
Stein, H. (1989). "Yes, but… Some Skeptical Remarks on Realism and Anti-Realism". Dialectica, 43, 47-65.
Toon, A. (2012). Models as Make-Believe: Imagination, Fiction and Scientific Representation. Palgrave Macmillan.
Walton, K. (1990). Mimesis as Make-Believe. Cambridge, Mass.: Harvard University Press.
Wylie, A. (1986). "Arguments for Scientific Realism: The Ascending Spiral". American Philosophical Quarterly, 23,
287-297.
Yablo, S. (1998). "Does Ontology Rest on a Mistake?". Proceedings of the Aristotelian Society, Supplementary
Volume 72, 229-261.

B [2]: Truth, Progress, Success and Scientific Realism

12) Michael Shaffer – Farewell to the Realism/Anti-realism Debate: Practical Realism and Scientific Progress
Traditionally scientific realism and scientific anti-realism have been regarded as deeply opposed views of the aims
of the sciences. On the one hand, scientific realists of all sorts hold that science aims (in part or in whole) to
discover true (or perhaps merely approximately true) theories. On the other hand, anti-realists deny that science
aims to discover true theories or even approximately true theories. Many anti-realists also contend that the aim of
the sciences is to produce theories that are practically useful in some important sense of that expression. One
prominent reason that anti-realists adopt this stance towards scientific theories is that they believe that there are
good reasons to hold that all (or even just most) scientific theories are (or must be) qualified by idealizations.[1] So, according to this important line of anti-realist thinking, anti-realists claim that scientific theories are not, strictly speaking, true. Thus the idealization argument against scientific realism constitutes a powerful attack on scientific

[1] See Cartwright 1983 for the most thoroughly worked out version of this view.

realism.[2] It is a direct threat to scientific realism because this argument attacks the feasibility of satisfying the realist's conception of the aim of scientific theorizing. Moreover, this argument is also supposed to support the
contention made by many anti-realists that the real aim of the sciences is only to produce practically useful
theories. This is typically because idealizing inherently involves making simplifications that are motivated by
practical concerns like computability. In other words, according to such anti-realists, we construct idealized (and
hence false) theories because they are practically useful to us and their usefulness is a result of their being simpler
and hence more computationally tractable.
However, the idealization argument crucially depends on the assumption that if scientific theories are qualified by
idealizations, then they are not true (or even approximately true). But, this is by no means an uncontroversial
assumption. For example, recently Shaffer (2012) has argued at great length that idealized scientific theories
ought to be regimented as special sorts of counterfactuals. The antecedents of these counterfactuals are idealizing
assumptions and the consequents are theoretical claims about the behaviors of physical systems. What is then
most important for the issues to be discussed here is that idealizing counterfactuals have perfectly ordinary and
well-understood truth conditions. Even more importantly, many such claims are simply true. So, the key
assumption behind the idealization argument against scientific realism is false. Scientific theories can be true even
if they are qualified by idealizing assumptions and the anti-realist's contention that idealized theories must be
adopted for merely practical reasons is erroneous.
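As an illustration of the kind of regimentation at issue, a stock textbook case (my example, not one drawn from the abstract itself) is the ideal pendulum: the idealizing assumptions form the antecedent of the counterfactual and the theoretical claim forms its consequent.

```latex
% Hypothetical illustration of an idealizing counterfactual
% (the pendulum example is mine, not Shaffer's own):
% "If the bob were subject to no friction, the rod massless, and the
%  swing small enough that sin(theta) = theta, then the period would
%  be T = 2*pi*sqrt(L/g)."
\[
  \big(\text{no friction} \,\wedge\, \text{massless rod} \,\wedge\, \sin\theta \approx \theta\big)
  \;\Box\!\!\rightarrow\;
  T = 2\pi\sqrt{L/g}
\]
```

On Shaffer's regimentation, a counterfactual of this form can be straightforwardly true under standard (e.g., Lewis-Stalnaker) truth conditions even though no actual pendulum satisfies its antecedent.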
What this novel response to the idealization argument against scientific realism further suggests about the more
general realism/anti-realism debate is that this long running debate is predicated on a simple but deeply
important misunderstanding. The nature of this error concerns the compatibility of scientific realism and
scientific anti-realism in terms of the aims that they prescribe for scientific theorizing. In brief, the debate is confused because scientific realism and anti-realism need not be regarded as incompatible: these two views of the aims of the sciences simply aren't opposed in the way that the parties to the debate have traditionally assumed. All that is required to see this is the recognition that there need not be only one aim of scientific theorizing. If we give up both realist monism and anti-realist monism about the aims of the sciences, we are free to
adopt the view that the sciences aim to produce true (or approximately true) theories and the view that the
sciences also aim to produce theories that are practically useful because they are idealized. We can adopt a hybrid
view of the aims of scientific theorizing that captures the most basic insights of both the realist and the anti-realist.
Let us call this view "practical realism"; the core insight of this hybrid view is that literal truth and practical usefulness must be balanced in solving scientific problems. So, while science aims at truth, it also involves the use
of practically motivated idealizations that qualify theories. The adoption of practical realism is a crucially
important step in moving beyond the seemingly intractable stalemate that afflicts the debate concerning realism
and anti-realism. In adopting practical realism we can see that the realism/anti-realism debate simply dissolves.
As a result, the adoption of practical realism allows us to get to the real work of articulating a more realistic view of theorizing in the sciences, one that acknowledges a complex view of the aims of the sciences. So practical realism involves the key recognition of the dual aims of scientific theorizing and thus allows us to explore how this dualistic notion of the aims of science impacts important issues in the philosophy of science like explanation and confirmation. More specifically, practical realism raises all sorts of interesting questions about scientific representation, degrees of idealization, scientific progress, etc. In this paper a novel concept of scientific progress consonant with practical realism will be explicated. This notion of scientific progress will be framed in terms of Shaffer's (2012) contextualist theory of explanation, and some of its most important implications will be explored. More specifically, the concepts of partial explanation and partial understanding involved in this notion of progress will be investigated.
Cartwright, N. (1983). How the Laws of Physics Lie. New York: Oxford University Press.
Shaffer, M. (2012). Counterfactuals and Scientific Realism. New York: Palgrave Macmillan.

13) Juan Manuel Vila Pérez - A Critique of Scientific Pluralism: The Case for QM
Scientifically speaking, Quantum Mechanics (QM) is the most successful theory ever made. Philosophically
speaking, however, it is the most controversial. Its basic principles seem to contravene our deepest intuitions
about reality, which are most patently exhibited in the metaphysical commitments of Classical Mechanics (CM).
In the last century many attempts to rejoin CM and QM have taken place, like Bohr's Kantian defense of the priority of classical concepts (Bohr, 1927) or Bohm's search for a classical limit through the quantum potential R (Bohm,
1952). However, most interpretations suffer from one of two serious difficulties: either they are thought to be too restrictive and incapable of appreciating the revolutionary features of QM, or else they are thought to be too implausible given the strange ontological commitments required by the interpretation.

[2] See Shaffer 2012.

Scientific Pluralism (SP) has become an attractive middle ground between these two poles. A pluralist stance
respects the idiosyncratic features of each theory, while at the same time restricts their ontology to what is
required by the mathematical formalism.
An important historical example of SP in quantum physics can be found in Werner Heisenberg's book Physics and Philosophy (1958). According to Heisenberg, the history of physics is a succession of theories, where each theory is a closed system [abgeschlossene System]. A closed system is "a system of axioms and definitions which can be expressed consistently by a mathematical scheme" (Heisenberg 1958, p. 92). In each system, the concepts are represented by symbols which in turn are related by a set of equations, and the resulting theory is thought to be "an eternal structure of nature, depending neither on a particular space nor on particular time" (ibid., p. 93). Given
this systemic closure, each theory generates its own concept of reality, whose validity is not restricted by other
theories.
After Heisenberg, many sophisticated versions of SP have been recently proposed (Krause 2000, Longino 2002,
Chang 2007). Most of these versions engage critically with Heisenberg's notion of a closed system. However, they all share a common assumption which stems directly from Heisenberg's treatment of physical theories and has been barely discussed. I will call this assumption the Comprehension Thesis (CT). According to CT, to understand or comprehend something is to relate a multiplicity of elements through a finite number of non-arbitrary relations. The local version of this thesis is that each theory must be internally comprehended. This is typically achieved through the fixation of the referents of some of the theory's symbols. The symbols are then related to non-symbolic items through what I call the Principle of Referential Persistence (PRP): a symbol σ persistently refers to an item of the world E iff σ refers to E in every occurrence of σ. When each symbol becomes attached to its referent, the resulting articulation constitutes the ontology of each theory.
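To make the definition concrete, here is a minimal sketch (my own illustration, with invented symbols and referents; nothing in it comes from the paper) that models occurrences of a symbol as symbol-referent pairs and checks whether PRP holds:

```python
# Minimal sketch of the Principle of Referential Persistence (PRP).
# Occurrences of a symbol are modelled as (symbol, referent) pairs;
# PRP holds for a symbol iff every one of its occurrences is assigned
# the same worldly item E. All names and data here are hypothetical.

def persistently_refers(occurrences, symbol, item):
    """True iff `symbol` refers to `item` in every one of its occurrences."""
    referents = [ref for sym, ref in occurrences if sym == symbol]
    return bool(referents) and all(ref == item for ref in referents)

# A symbol used uniformly across theory contexts satisfies PRP ...
uses = [("electron", "e-"), ("electron", "e-"), ("ether", "em-field")]
assert persistently_refers(uses, "electron", "e-")

# ... while a symbol whose referent shifts between occurrences does not.
shifting = [("mass", "rest-mass"), ("mass", "relativistic-mass")]
assert not persistently_refers(shifting, "mass", "rest-mass")
```

The author's "more dynamic account" of theory-world relations would amount to relaxing exactly this uniformity requirement.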
Although defenders of SP typically subscribe to CT and PRP, the Pluralist denies any global application of CT, since
this would imply an inter-theoretical reduction of the many languages, methods and metaphysical commitments
into one total theory (or ToE), and the rejection of such a theory is precisely the starting point of any scientific
pluralism.
The aim of this paper is to show how this selected use of CT is unwarranted. Since SP lacks any alternative
conception of comprehension for global cases, any restriction of CT to the local case shows itself to be arbitrary.
As the argument develops, it will be suggested that the main reason for this internal weakness is that SP upholds,
along with Scientific Realism, a representational conception of scientific theories according to which a theory is a
description of the physical reality. This commitment is obviously manifested in the maintenance of PRP. It will be
argued that the central problem with Scientific Pluralism is that its restriction of CT is incompatible with its own
representational conception of scientific theories.
So the Pluralist must choose: either she abandons CT altogether or she fully applies it. If she chooses the former
option, it is impossible to distinguish a scientific pluralism from an instrumentalist account of scientific theories,
since the problem of comprehending those theories as being about something would be completely obliterated. If,
on the other hand, she chooses the latter alternative, then she is confronted with a reductive account of physical
theories, since her holding of PRP makes it impossible to avoid a Theory of Everything. As a conclusion, I suggest
that the only viable way to preserve scientific realism and avoid a reductive account of global comprehension is to
abandon PRP in favor of a more dynamic account of the way in which scientific theories relate to the physical
world.

14) Danielle Macbeth - Revolution and Realism?


In the seventeenth century the practice of mathematics was fundamentally transformed; and this transformed
mathematics led in turn to a transformed practice of physics. It was at the time very natural to think that although
we had in the past gotten things wildly wrong (fire does not want to go up, the sun does not revolve around the earth, things are not really colored, or flavored, or sounding), now we had things right. Such thinking cannot
survive a second revolutionary transformation such as occurred in mathematical practice in the nineteenth
century and in the practice of physics in the twentieth. The fact that, in principle, this second revolution could be
followed by a third, then a fourth, and so on, provides grounds for skepticism about the truth of our scientific
theories; it suggests that we have no good grounds for scientific realism. But if one considers these two revolutions
more closely, a very different picture begins to emerge. As is especially clear in the case of the mathematical
revolutions that underwrite those in physics, the two revolutions are essentially different. Instead of one damn
thing after another, we see in these two revolutions an organic growth of knowledge that provides grounds for a
very robust form of scientific realism. This new realism, we will see, is structuralist, but it is also interestingly
different from what has come to be known as structural realism.
Ancient Greek mathematical practice, like ancient thought generally, is constitutively object oriented. It studies
numbers of various sorts, for instance, primes, and the odd and the even, where a number is to be understood as a
collection of units; and it studies geometrical figures, triangles, circles, and so on. By the end of the first
Page 14 of 46

millennium, the positional system of Arabic numeration together with its algorithms for performing calculations
had been developed as a form of paper-and-pencil reasoning to rival calculating on a counting board. And Viète in
the sixteenth century devised a symbolic notation suitable for algebraic manipulations. But this notation, like
Arabic numeration, was taken to have only an instrumental value. Until the work of Descartes, the basic
intellectual orientation remained that of the ancient Greeks; it was Descartes who learned to read the symbolism
of arithmetic and algebra as a fully meaningful language, albeit one of a radically new sort. And he did so through a
metamorphosis in his most basic understanding of space and spatial objects. Hitherto conceived as the relative
locations of objects, space was now, through a kind of figure/ground gestalt switch, to be conceived as an
antecedently given whole within which objects might (but need not) be placed, each independent of all the others.
Mathematics, from being about objects, was now to be conceived as a science of relations among arbitrary
quantities as expressed in equations. These equations made possible in turn the modern notion of a law of nature,
in particular, Newtons laws of motion.
Early modern Newtonian science offered a view of reality that was to replace our now-seemingly naïve everyday, sensory view of things. That naïve view is wrong, a mere appearance of things to creatures like us; the view
afforded by the exact sciences is right, a picture of how things actually are. The kinetic theory of gases provides a
nice illustration of the idea: what appears to us as the heat, pressure, and expansion of a gas really is nothing more
than the increasing or decreasing motions of tiny particles. Kant then adds a further twist to this: our knowledge of
mathematical and physical reality is ineluctably shaped by the forms of our sensibility and understanding. We
cannot, even through our most successful scientific theories, know things as they are in themselves. Then, in the
nineteenth century, mathematical practice was again transformed to become, as it remains today, a practice of
deductive reasoning from concepts. And this transformation, by contrast with that of the seventeenth century,
constitutes a rebirth of mathematics as a whole. The aspirations of early modernity have finally been fully realized:
mathematics is revealed to be, has become, a purely rational enterprise. Although reason in its first appearance,
say, in ancient Greek mathematical practice, cannot constitute a power of knowing (as, for example, perception is a power of knowing: we can, in some instances, just see how things are), reason can, over the course of history,
through radical transformations in our forms of mathematical practice, become such a power. Astonishing though
it must at first seem, deductive reasoning from concepts can extend one's knowledge in contemporary
mathematical practice.
This new, purely rational and conceptual mathematics has enabled in turn a new form of fundamental physics that
does not merely use mathematics as, say, Newton's physics does, but instead simply is mathematics. There is no physical correlate. And because this mathematics is purely rational, because it has been purged of all sensory content, it is correct to say that the aspects of reality it reveals (in special and general relativity and in quantum mechanics) are maximally objective, the same for all rational beings. This, then, is a new form of scientific realism,
one that is structuralist without being quite what is generally meant by structural realism. What it shows is that far
from being incompatible with scientific realism, revolutions in the practice of science can be constitutive of
scientific realism.

15) Nora Berenstain - Scientific Realism and the Commitment to Modality, Mathematics, and Metaphysical
Dependence
I show why the scientific realist must be committed to an objective, metaphysically robust account of the modal
structure of the physical world. I argue against Humean regularity theory on the grounds that it is incompatible with
scientific realism and fails to be naturalistically motivated. I specifically address the Mill-Ramsey-Lewis view,
which states that laws of nature are those regularities that feature as axioms or theorems in the best deductive
system describing our universe. This view, also known as sophisticated Humeanism, is broadly incompatible with
scientific realism as it can offer no explanation of the success of inductive inference. The Humean about laws of
nature denies the existence of natural necessity. Since the Humean cannot explain why the regularities in our
world continue to hold from moment to moment, she cannot explain why inductive inference should be a
successful method of investigation. This does not sit well from the perspective of a scientific realist. One of the
driving motivations behind scientific realism is the thought that there must be an explanation for the success of
science. The use of induction is a cornerstone of scientific theorizing and investigation. If the Humean cannot offer
an explanation of the success of induction, neither can she offer an explanation of the success of science. Thus the
scientific realist must embrace a robust view of physical modality that involves natural necessity.
Causality, equilibrium, laws of nature, and probability are four notions that feature prominently in scientific
explanation, and each one is prima facie a modal notion. These modal properties are necessary to make sense of
our best scientific theories, and scientific realists cannot do without them. Structural realists in the vein of
Ladyman and Ross [2007] sometimes suggest that this modal structure is primitive. I offer a new account in which
the modal structure of the physical world is metaphysically dependent upon mathematical structure.
The argument proceeds by way of analogy between the no-miracles argument for scientific realism about
unobservable entities and the indispensability argument for realism about mathematical entities. The no-miracles
Page 15 of 46

argument shows that we must be committed to unobservable or theoretical entities if we are to account for
science's ability to explain and predict novel phenomena. Colyvan's [2001] indispensability argument and cases
supporting it (such as Baker [2005]) show that facts about mathematical structures and relations can also explain
and predict features of the physical world. Just as no-miracles is taken to be an argument not just for theoretical
entities themselves but for a dependence relation (usually causal) between the theoretical entities and the
observable phenomena they explain, the indispensability argument must similarly be taken to be an argument for
a dependence relation between explanans and explanandum. This relation between explanans and explanandum is
what I take to be the relation of metaphysical dependence.
I argue that the modal structure of the physical world is derived from mathematical structure. The modal
properties of a physical system, such as limits on what values its physical quantities can take, derive from the
underlying mathematical structure that the system instantiates or approximates. Rather than taking mathematics
to merely usefully model physical systems, we should take mathematical structures to determine the modal
properties of these systems. In other words, the modal structure of a physical system is metaphysically dependent
on the system's underlying mathematical structure.
The metaphysical-dependence view of physical modality paves the way for a unified account of modal and
mathematical epistemology. It illuminates the nature of necessity in the natural world. And it accounts for the
incredibly successful applications of mathematics to empirical phenomena when it comes to explanation and novel
prediction. Further, once we understand that modal physical properties are grounded in properties of
mathematical structures, the enormous usefulness of mathematics in the natural sciences no longer borders on the
mysterious. Thus, the view provides a natural answer to the applicability problem. I show this account of the
applicability of mathematics to empirical science to be superior to those put forth by Pincock [2004] and Bueno
and Colyvan [2011].

16) John Collier - Information Can Preserve Structure across Scientific Revolutions
Thomas Kuhn and Paul Feyerabend introduced the issue of semantic incommensurability across major theoretic
changes that we call scientific revolutions, though Hanson originated the problem. Feyerabend recognized that the
problem of semantic comparability arose because of problems in empiricism itself. I argue that the problem arises
from two widely held assumptions. The first is Peirce's criterion of meaning according to which any difference in
meaning must make a difference to possible experience. This is a sort of positivism, but it is not verificationist. The
second assumption is the verificationist view that the meaning of any statement is given by the conditions under
which it can be taken to be verified. Together these assumptions entail the infamous Quine-Duhem Thesis that any
two theories have extensions that are equally compatible with the evidence. This leads less directly to Kuhn's
Incommensurability Thesis, that two theories can be both incompatible and semantically incommensurate,
notoriously across major "scientific revolutions", undermining the idea of cumulative progress in science. Kuhn's own position is notoriously ambiguous, supporting both antirealism and what might be called sequential idealism. In "Second Thoughts on Paradigms" he clarified his ideas and placed the problem on incompatible classifications
that have no clear common ground. The problem for the progressive realist, then, is to find some way to establish a
common ground for comparison of classes.
One of the more promising attempts at resolution is the Structuralist Approach to Theories, in which theories are
model theoretic structures isomorphic to parts of the world. It divides theories into the core theory, a set of models
with the laws dropped out but retaining the classes, and a set of observational models without any of the
apparatus of the theory. Unfortunately the approach to intertheoretic reduction advocated was shown fairly early
to permit incommensurability because of the indeterminacy of isomorphism across models.
Further restrictions are required. I will argue that a resolution using the theory of Information Flow developed by
Jon Barwise and Jerry Seligman can provide the extra restrictions, allowing even incommensurate theories to
share evidence. A consequence of this perspective is that the meaning issue is a red herring. Another is the
rejection of verificationism, which forces the meaning issue to the fore, as noted above. Kuhn argued in his later
work that the problem was due to incommensurable classifications across different theoretical contexts, with no
common context to provide a common semantic ground. If we assume that both an earlier theory and a later one,
or two competing theories in general, share a common basis of observational instances, then the problem is that
the two theories classify the common instances differently. Barwise and Seligman's approach to information flow
assumes that we have two classifications of tokens (instances) which bear a relation that has the characteristics of
what they call an infomorphism. If an infomorphism holds, then we can talk of an information flow from one
classification to the other (though the reverse need not be true). An infomorphism from a classification A to a classification C is a pair f = ⟨f∧, f∨⟩ of functions, one (f∧) from the set of types used to classify A to the set of types used to classify C, and the other (f∨) from the tokens of C to the tokens of A, such that for all tokens c of C and all types α of A, f∨(c) ⊨A α if and only if c ⊨C f∧(α). This biconditional is called the fundamental property of infomorphisms. Here ⊨A is the classification relation of A, relating its tokens (instances) to its classes. The problem of information flow across theoretical models through their empirical

instances, I will argue, is exactly the problem of semantically comparing the theories, based on Kuhns idea that the
problem is one of classification. The fact that the two theories to be compared have the same empirical instances
(though perhaps under different classifications and thus names) helps considerably, but it does not solve the
problem of intertheoretic semantic comparison.
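The fundamental property lends itself to a direct computational check. The following sketch (my own toy illustration; the classifications and maps are invented, not drawn from Barwise and Seligman's text) represents a classification as tokens, types, and a classification relation, and tests whether a candidate pair of maps forms an infomorphism:

```python
# Toy sketch of Barwise-Seligman classifications and the fundamental
# property of infomorphisms. An infomorphism f = (f_up, f_down) from A
# to C sends types of A up to types of C and tokens of C down to tokens
# of A, such that
#     f_down(c) |=_A alpha   iff   c |=_C f_up(alpha)
# for every token c of C and every type alpha of A. All data invented.

class Classification:
    def __init__(self, tokens, types, holds):
        self.tokens = set(tokens)   # instances
        self.types = set(types)     # classes
        self.holds = set(holds)     # the |= relation, as (token, type) pairs

    def classifies(self, token, typ):
        return (token, typ) in self.holds

def is_infomorphism(A, C, f_up, f_down):
    """Check the fundamental property for all tokens of C and types of A."""
    return all(
        A.classifies(f_down(c), alpha) == C.classifies(c, f_up(alpha))
        for c in C.tokens
        for alpha in A.types
    )

# An "observational" classification A and a "theoretical" one C that
# classify the same instances under different names:
A = Classification({"o1", "o2"}, {"hot"}, {("o1", "hot")})
C = Classification({"t1", "t2"}, {"high-mean-KE"}, {("t1", "high-mean-KE")})
f_up = {"hot": "high-mean-KE"}.get
f_down = {"t1": "o1", "t2": "o2"}.get
assert is_infomorphism(A, C, f_up, f_down)       # information flows

# Misaligned classifications violate the fundamental property:
C_bad = Classification({"t1", "t2"}, {"high-mean-KE"},
                       {("t1", "high-mean-KE"), ("t2", "high-mean-KE")})
assert not is_infomorphism(A, C_bad, f_up, f_down)
```

On this rendering, Collier's proposal amounts to asking what further restrictions on f_up and f_down make such a check non-trivially satisfiable when two rival theories classify the shared empirical instances under different vocabularies.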
I will set some desiderata for completing the requirements for an infomorphism across theoretical models, and I
will argue that they can be satisfied. I will further argue that unless they are satisfied, incommensurability is a very
real phenomenon that cannot be resolved by strictly empiricist means. My approach is in the spirit of signs in the semiotics of C. S. Peirce.

17) Juha Saatsi - Pessimistic induction and realist recipes: a reassessment


Much of the scientific realism debate has focused on responding to a familiar challenge from the history of science: arguably history provides evidence against the realist notion that we have good scientific evidence for the approximate or partial truth of our current best theories. Realist responses to this pessimistic induction are multifarious in their letter, but unified in their spirit: while admitting that our current best theories clearly fall short of being true simpliciter, most realists have aimed to provide a recipe for characterising the truth-content of our current theories. (Or, more broadly, for characterising a uniform sense in which well-confirmed theories latch onto unobservable reality.) These realist responses exemplify the spirit of (what I call) recipe-realism.
The recipe-realist response to pessimistic induction has led to structural realism and other such positions
that are sweeping and monolithic in their outlook. In the spirit of recipe-realism, structural realism, for
example, puts forward a unified notion of structure as capturing realist commitments across a wide range of
disciplines and areas of scientific theorising. I will criticise structural realism and other monolithic realist
positions by questioning the spirit of recipe-realism altogether. I will argue that in response to the historical challenge the realist should aim to characterise realist commitments via exemplars instead of aiming to pin down some uniformly applicable recipe.
I will argue that realists should admit a variety of different kinds of realist explanations of empirical success and predictive accuracy in order to properly capture not only the history of science, but also features of idealised and inconsistent models. The commonly held idea that a realist must provide a prospectively applicable recipe for capturing realist commitments is a fool's errand; the best we can do, I will argue, is to provide a range of exemplars accompanied with an associated elucidation of the sense in which a given exemplar involves a realist explanation of empirical success.
The paper begins by exploring recipe-realism and its limits as a response to the challenge from the history of science (and from idealised and inconsistent theories and models, more broadly). I then argue that recipe-realism can at best be bought for the price of an unacceptable degree of ambiguity in realist commitments. Accepting that realists should forgo the spirit of recipe-realism, I will argue that they should focus instead, in a more piecemeal way, on characterising and studying in detail exemplar cases, such as the transition from Newtonian gravity to general relativity, or the transition from classical to quantum physics. I will discuss one or both of these cases in sufficient detail to illustrate the philosophical issues at stake. I will explain why the exemplar-response to the historical challenge is necessarily piecemeal by virtue of acknowledging that any given exemplar has a limited domain of applicability: a realist explanation of the empirical success of Newtonian gravity, for example, only supports realism about theorising that is suitably similar to this exemplar. By the same token, the realist can accept that there can be exemplars of non-realist explanations of empirical success. Those also have a limited domain of applicability: they only support anti-realism about theorising that is sufficiently similar to them. (I illustrate this with the Kirchhoff case discussed in Saatsi and Vickers 2011.)
I also discuss some challenges to the exemplar approach to realism. The biggest challenge is to specify what a realist explanation of success can amount to. I argue that we can appeal to intuitions and clear example cases to get started on this.

18) Mario Alai - Deployment vs. discriminatory realism


For Gerald Doppelt (2005, 2007) deployment realism cannot withstand the pessimistic meta-induction and
meta-modus tollens objections, because the partial truth of discarded theories is refuted by historical
counterexamples. Besides, deployment realism explains predictive success, but not explanatory success. Doppelt,
instead, proposes a version of realism supposedly explaining also explanatory success, according to which old
discarded theories are completely false, and the best current theories completely true.
I counter that (I) deployment realism can resist historical counterexamples; (II) explaining explanatory
success does not add much to explaining novel predictions; (III) a realism confined to current theories is
implausible, and easily defeated by the pessimistic meta-induction.

(I) Resisting the meta-modus tollens


Laudan (1981) and Lyons (2002) argued that some radically false theories in the past had predictive success; so,
predictive success does not warrant truth. Psillos replied that: (1) those predictions were not novel; (2) some
successful but discarded theories were partly true; (3) the causal theory of reference shows that those theories
referred to real entities.
Doppelt criticizes these replies. Elsewhere I countered Lyons' objections; here I rebut Doppelt's criticisms:
(1) Novelty
Doppelt's objections to (1):
(i) Temporal novelty is irrelevant.
My rejoinder: but use-novelty is necessary to warrant belief in truth.
(ii) Naturalism requires that scientific realism be evaluated as scientific theories are. So, since realism does not make novel predictions, this should not be a requirement for theories either.
My rejoinder: (a) moderate naturalism does not hold that realism is, but only that it is like, a scientific theory; (b) radical naturalism is implausible.
(iii) Some false theories were explanatorily successful, i.e., their account of data was simple, consilient, unifying,
complete, of broad scope, plausible, consistent with background knowledge, etc. So, success is not evidence of
truth.
My rejoinder: theoretical virtues alone, without novel predictions, cannot warrant truth. So, realists are entitled to
discard theoretically virtuous but predictively ineffective theories.
(2) Partial truth
For deployment realism, caloric, ether, etc., were inessential to the novel predictions made by their theories.
Doppelt objects that they were necessary to explanatory success, i.e., to the simplicity, consilience, etc., of those theories.
My rejoinder: no, because they were not part of the equally (or more) virtuous theories which superseded them
preserving their true assumptions.
(3) Referential continuity
For Psillos, empty terms (e.g. 'ether') involved in novel predictions referred to real entities (e.g. the electromagnetic
field) playing the same causal role.
Doppelt objects that the properties of ether not shared by the electromagnetic field were essential to make the
ether theory virtuous.
My rejoinder: truth is not necessary to explain virtuosity, only to explain novel predictions.
(II) What realism must explain and how
According to Doppelt, realism should explain why predictively successful theories are also simple, consilient, etc.
Truth is not a sufficient explanation, for it does not entail those virtues. We also need the metaphysical
assumption that nature itself is simple, consilient, etc.
I agree, but notice that (i) this assumption is needed as part of the explanans in the realist inference to the best
explanation, so it is not a further premise but an additional conclusion; (ii) it is already implicit in the claim that
theories are true; (iii) the full-blown structure of the realist argument is:
Why do scientists succeed in making novel predictions? Because
Assumption1: scientists find partly true and heuristically fruitful theories.
Isn't finding true and fruitful theories more miraculous than just finding empirically adequate theories? No, if
Assumption2: scientific method is truth-conducive and heuristically effective (Maher 1988, White 2003).
But why is scientific method truth-conducive and heuristically effective? Because
Assumption3: it reconstructs unobservable structures by analogy, abduction and inductive extrapolation; these
inferences work if nature is simple, symmetrical, etc.; and nature actually is simple, symmetrical, etc.
Assumptions 1-3 are all required to explain novel predictions, and this already explains explanatory success as
well.
(III) The pessimistic meta-induction and apartheid realism
The pessimistic meta-induction goes as follows:
(1) past successful theories were completely wrong;
(2) there is no radical methodological difference between past and present theories;
therefore,
(3) current and future successful theories will also turn out completely wrong.
Deployment realism rejects premise (1). For Doppelt there is no truth in discarded theories, so he rather rejects
(2): current theories are radically better than past ones, since they are much more confirmed and fulfil much
higher standards. So, while for deployment realism both past and present theories are partly true, for Doppelt past
theories are completely false, and current most-successful-and-best-established (MSBE) theories are completely
(though only approximately) true. He doesn't explicitly say 'completely true', but he obviously thinks so, since he
doesn't countenance partial falsity as an explanation of why current theories occasionally fail.
But this form of apartheid raises many problems:


(1) What explains the novel predictions of past theories, if not their partial truth? For Doppelt, their theoretical
virtues. But I argue that this is impossible.
(2) Why do current theories occasionally fail? His only option is denying that some current theories are MSBE: e.g.,
Quantum Mechanics is not best-established, for it lacks internal coherence, intuitive plausibility, etc., and it is
incompatible with Relativity. But then, even Relativity is not best-established, since it is incompatible with
Quantum Mechanics. Paradoxically, therefore, Doppelt is not committed even to the partial truth of our two best
physical theories.
(3) Claiming that MSBE theories are completely true flies in the face of fallibilism.
(4) Many past theories fulfilling the highest contemporary standards were later superseded. Why shouldn't
current MSBE theories be discarded too? The pessimistic meta-induction is back!
Doppelt grants that our belief in the truth of current MSBE theories can be defeated in the future. But then we
should expect that, sooner or later, any best theory is rejected. And since for Doppelt rejected theories are
completely false, it follows that we will never know any truth! So, his distinction between past false theories and
present true ones collapses into radical antirealism. Arguing that both past and present theories are partly true
remains the best strategy for realists.

19) Gauvain Leconte - Predictive success, partial truth and skeptical realism
A common defense of scientific realism since Musgrave's and Leplin's works (Musgrave 1988; Leplin 1997) is to
argue that the truth of our scientific theories is the best explanation of their predictive success, i.e. their capacity to
make novel predictions. Yet, in order to resist Laudan's pessimistic induction (Laudan 1981), this defense must
also explain why past theories enjoyed predictive success but are now considered false. Most scientific
realists, such as Worrall (Worrall 1989a, 120), Kitcher (Kitcher 1993, 140), Psillos (Psillos 1999, 108) or
Chakravartty (Chakravartty 2007, 45), reply that these past theories were not completely true or false but
approximately or partially true, and that only their true aspects are still part of our current scientific theories.
Yet, for this reply to be effective, realists must give an independent criterion or procedure to sort out the true parts
of a theory from the others; otherwise the notion of partial truth is just an ad hoc evasion of the pessimistic
induction. Furthermore, the different versions of scientific realism, such as Worrall's structural realism, Psillos'
orthodox scientific realism or Chakravartty's semirealism, differ precisely on which parts of theories must be
considered ontologically committing. The elaboration of such a procedure is therefore crucial not only to defend
scientific realism, but also to choose among the many versions of this position.
This procedure is often based on an indispensability argument: if the truth of a theory is the best explanation of its
predictive success, the parts of a theory which are indispensable to its predictive success must be true. While a lot
of recent debates and criticisms of scientific realism have focused on the possibility of discriminating between false and
approximately true theories on the basis of their predictive success (Stanford 2000; Held 2011; Cevolani and
Tambolo 2013), little has been said about the possibility of using predictive success as a criterion to identify, inside
partially true theories, those parts worthy of belief.
In my paper, I present two objections to the indispensability argument and to this use of predictive success. These
objections come from the examination of the derivation of novel predictions from a theory and rely on the study of
a paradigmatic case: the prediction, from Fresnel's wave theory of light, of a white spot in the center of the shadow
of a circular object (Worrall 1989b; Psillos 1995).
The first objection is that the indispensability argument is too liberal. The prediction of the white spot, as it is
exposed by Fresnel in one of the appendices to his Mémoire of 1819 (Fresnel 1866, 365–372), involves several false
assumptions and idealizations with no physical meaning, such as the hypothesis of an infinite radius of an aperture
or the absolute velocity of the particles of ether. Many authors have underlined the role of fictions in explanation
and explanatory models (Batterman 2001; Cartwright 2004; Suárez 2008; Mäki 2011; Bokulich 2012). The white
spot example suggests that fictions are equally important in deriving novel predictions and building predictive
models. If many hypotheses which are clearly idealizations or false simplifications are necessary to the
derivation of novel predictions, then false parts of theories are also indispensable to their predictive success.
This objection may be avoided by drawing a distinction between the central and auxiliary hypotheses of a theory.
However, I show that if central hypotheses are only defined as the ones which were maintained in posterior
theories, this distinction is guilty of post hoc rationalization, and is of little use to determine which aspects of our
current theories are true. But if we use another distinction between central and auxiliary hypotheses, we are led
back to the initial problem of distinguishing between the different aspects of a theory, and we fall into a vicious
circle.
The second objection is that it is often possible to predict the same phenomena by different predictive processes,
which do not use the same parts of a given theory. The prediction of the white spot was first derived by Poisson
from Fresnel's integrals; but Fresnel, in his appendix, offers "a simpler solution [to the problem of the shadow of a
circular object] without using the integrals I have used in the preceding Mémoire to compute the other phenomena
of diffraction" (Fresnel 1866, 365). In other words, there are different ways to predict a phenomenon within the
same theory, and the set of the indispensable elements for predictive success is not uniquely determined.
I show that one cannot argue that we are ontologically committed only to the common elements used to make
the two predictions of the same phenomenon, because these elements were not retained in posterior theories such
as Maxwell's electromagnetic theory. Moreover, these elements constitute a very small part of the overall theory:
the realist position based on them would be so thin that it would be even more deflationist than Saatsi's minimal
explanatory realism (Saatsi 2005).
In the last part of the paper, I give other examples from various fields of empirical sciences of predictions of the
same phenomenon which use different parts of the same theory. I claim that this fact is compatible with (and
makes a good case for) skeptical realism. For the skeptical realist, scientific success inclines us to believe that a
theory is partially true, but we cannot discriminate between the parts of this theory which are indeed true and the
ones which are mere useful fictions. In other words, predictive success may tell us that it is very probable that
some parts of our scientific theories are true but, like a Mafioso arrested by the police, it remains silent on the
names of these parts.
References
Batterman, Robert. 2001. The Devil in the Details. Oxford University Press.
Bokulich, Alisa. 2012. "Distinguishing Explanatory from Nonexplanatory Fictions". Philosophy of Science 79 (5):
725–37.
Cartwright, Nancy. 2004. "From Causation to Explanation and Back". In The Future for Philosophy, 230–45.
Oxford University Press.
Cevolani, Gustavo, and Luca Tambolo. 2013. "Truth may not explain predictive success, but truthlikeness does".
Studies in History and Philosophy of Science Part A.
Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge University
Press.
Fresnel, Augustin Jean. 1866. Œuvres complètes d'Augustin Fresnel. Edited by Henri Hureau de Sénarmont and
Émile Verdet. Paris: Imprimerie Impériale.
Held, C. 2011. "Truth Does Not Explain Predictive Success". Analysis 71 (2): 232–34.
Kitcher, Philip. 1993. The Advancement of Science: Science without Legend, Objectivity without Illusions. Oxford
University Press.
Laudan, Larry. 1981. "A Confutation of Convergent Realism". Philosophy of Science 48 (1): 19–49.
Leplin, Jarrett. 1997. A Novel Defense of Scientific Realism. Oxford University Press.
Mäki, Uskali. 2011. "The Truth of False Idealizations in Modeling". In Models, Simulations, and Representations.
Routledge.
Musgrave, Alan. 1988. "The ultimate argument for scientific realism". In Relativism and realism in science, 229–52.
Springer.
Psillos, Stathis. 1995. "Is Structural Realism the Best of Both Worlds?" Dialectica 49 (1): 15–46.
Psillos, Stathis. 1999. Scientific realism: How science tracks truth. Routledge.
Saatsi, Juha. 2005. "Reconsidering the Fresnel–Maxwell Theory Shift: How the Realist Can Have Her Cake and Eat
It Too". Studies in History and Philosophy of Science Part A 36 (3): 509–38.
Stanford, P. Kyle. 2000. "An Antirealist Explanation of the Success of Science". Philosophy of Science 67 (2):
266–84.
Suárez, Mauricio (ed.). 2008. Fictions in science: Philosophical essays on modeling and idealization. Routledge.
Worrall, John. 1989a. "Structural Realism: The Best of Both Worlds?" Dialectica 43 (1–2): 99–124.
Worrall, John. 1989b. "Fresnel, Poisson and the white spot: the role of successful predictions in the acceptance of
scientific theories". In The Uses of Experiment: Studies in the natural sciences, edited by David Gooding, Trevor
Pinch, and Simon Schaffer, 135–157. Cambridge: Cambridge University Press.

20) Sreekumar Jayadevan - Does History of Science Underdetermine the Scientific Realism Debate? A
Metaphilosophical Perspective
It is often argued that historical evidence intellectually compels us to organize our philosophical temperament
against scientific realism. This issue has been one of the subject matters of the scientific realism debate for more
than two decades. An outpouring of historical studies has happened in recent years as different factions
developed their own explanations as to what is retained across theory-change. I evaluate the development of the
scientific realism debate in the recent two decades from a metaphilosophical perspective. I argue along the lines
of Juha Saatsi (2011) that current explorations in history of science are not enough to vindicate any single
position. Later, I build upon this claim that there is a sense in which we may declare that history of science
underdetermines the scientific realism debate. This is because, in the debate, philosophical positions like
structural realism and scientific realism couch historical case studies by narratives of their own. These narratives
appear to be lending support to the respective positions in the debate. In order to show this, firstly, I explore the
ways in which Stathis Psillos (1999) and James Ladyman (2011) interpret the Stahl-Lavoisier episode of
eighteenth century chemistry. Secondly, I investigate the attempts of Anjan Chakravartty (2007) and John
Worrall (1989) where they develop their own unique readings of the Fresnel-Maxwell episode in nineteenth
century optics. I show from the analysis of these two episodes that:
The all-inclusive nature of interpretations of certain notions ingrained in the debate (e.g. 'structure') often
affects the evenness of philosophical positions. Notions like abstract/concrete structure and entities, which
are the backbones of many realist and quasi-realist positions, get loosely interpreted over the history of science.
Scientific realism (the version defended by Psillos) mostly works well as a position about current science
consisting of matured and predictively successful theories. Scientific realism does not possess the resources to
pinpoint the truth-bearing constituents of past theories, even with Psillos' naturalistic approach of leaving the task
to the current practicing scientists.
Disparate historical episodes do not uniformly support any single philosophical position in the debate. That is, all
of history of science does not reflect a uniform epistemic attitude.
Therefore, the task at hand does not end here. The lack of sufficient historical evidence in favor of any position
compels us to think against a uniform epistemic attitude across science. We may entertain the view that different
philosophical ideals apply in different historical phases - a
hint found in the works of David Papineau (1996), Juha Saatsi (2011) and Uskali Mäki (2005). I introduce the
notion of an epistemic indicator, which is a domain-specific virtue giving warrant to the belief in individual
philosophical positions. For example, notions like intervention and detection in the case of early nineteenth
century physics (favoring entity realism and semirealism) and abstract structural retention in nineteenth
century optics (favoring epistemic structural realism). These domain-specific virtues are epistemic indicators for
particular historical phases. The epistemic indicator drives different epistemic attitudes conducive for different
phases of science. Revealing these epistemic indicators in historical cases is the key in shaping a non-uniform
pluralistic epistemic attitude to past and present science. Thus, I conclude by arguing that a metaphilosophical
angle into the scientific realism debate directs us to be pluralists in our philosophical temperament about
scientific knowledge. The pluralism can be asserted by combing through the history of science, where unique
epistemic indicators are at play lending support to different positions.
There are four parts in this paper. In part one, I outline the historical trajectory of metaphilosophical perspectives
in the debate, starting from the works of Alison Wylie (1986). I also argue why the genre of metaphilosophy
within general philosophy of science is pivotal in evaluating the overall progress of the debate in the last two
decades. In part two, I elaborate some of the recent attempts by thinkers who interpret specific phases of history
of science in favor of their positions (especially Psillos, Ladyman, Worrall and Chakravartty). In part three, the
inadequacy of historical evidence in lending support to individual positions as well as the all-inclusive nature of
interpretations of philosophical notions are scrutinized. Here, I elaborate the reasons as to why we should think
that history of science underdetermines the scientific realism debate. In part four, I introduce the notion of
epistemic indicators with which, a pluralistic epistemic attitude can be endorsed on scientific knowledge.
Selected Bibliography
Chakravartty, Anjan. (2007) A Metaphysics for Scientific Realism: Knowing the Unobservable, Cambridge:
Cambridge University Press.
Ladyman, James. (2011) Structural realism versus standard scientific realism: The Case of Phlogiston and
Dephlogisticated Air. Synthese 180(2): 87-101.
Mäki, Uskali. (2005) Reglobalizing Realism by Going Local or (How) Should our Formulations of Scientific
Realism be Informed about the Sciences. Erkenntnis 63: 231–251.
Papineau, David (Ed.) (1996). Introduction. In Papineau, D., editor, The Philosophy of Science, chapter 1.
London: Oxford University Press. 1–21.
Psillos, Stathis. (1999) Scientific Realism: How Science Tracks Truth, New York: Routledge.
Saatsi, Juha (2011) 'Scientific Realism and Historical Evidence: Shortcomings of the Current State of Debate', Vol 1,
EPSA Proceedings, Springer, 329–340.
Worrall, John. (1989) Structural Realism: The Best of Both Worlds?, Dialectica 43: 99-124.
Wylie, Alison. (1986), Arguments for Scientific Realism: The Ascending Spiral, American Philosophical Quarterly,
23: 287–298.

21) Hennie Lötter - Thinking anew about truth in scientific realism


In response to Laudan's famous challenge in his article 'A Confutation of Convergent Realism', scientific realists
have developed sophisticated theories of reference, critically examined many examples of discarded scientific
theories to determine which kind of elements were transferred to the new, replacement theories, developed ideas
about the characteristics of mature scientific theories, and checked the representativeness of Laudan's examples
and the plausibility of his interpretations thereof. However, the theory of what truth is seems to have been least
successfully revised in the light of Laudan's critique, although his critique of the theory of truth
in use by scientific realists merely stated that the notion of approximate truth is presently too vague to permit
one to judge whether a theory consisting entirely of approximately true laws would be empirically successful
(Laudan 1981: 47).
This lack of a fundamental revision is surprising in the light of the central role of the concept of truth in claims
made by scientific realists. On the no-miracles account of realism, truth guarantees and explains the success of
science, or alternatively, the success of science can only be understood in the light of its truth. My central claim is
that [i] the current general scientific realist understanding of truth is incomplete, and obscures core features of
how science functions as part of how humans cognitively engage with the world, and [ii] this view of truth is out of
sync with some exciting new and older theories of truth recently developed, e.g. in the works of Davidson, Sher,
Lynch, and Nozick. In the proposed paper I will present a new theory of truth more appropriate and congenial to
the general model of science found in scientific realism.
An alternative theory of truth must be developed in very specific ways. Both Crispin Wright and Michael Lynch
emphasize that a theory of truth must use truisms (Lynch) or platitudes (Wright) about truth as fundamental
building blocks to guide the development of a theory. Although such a starting point is important, more must be
done. The strategy of reflective equilibrium, used by the political philosopher John Rawls to design his high-impact
theory of justice, seems particularly apt in this case. The general intuitions [I prefer this word to 'truisms' or
'platitudes'] about truth and the best philosophical explanations thereof must be worked into a new theory and
then a to-and-fro dialogue between intuitions and theory must trim and prioritise elements in the theory to ensure
maximum descriptive and explanatory power of those features of human life deeply influenced by the concept of
truth. In addition, the theory must be tested by means of various key examples from the history of science to
examine its explanatory value and problem-solving ability.
In developing my theory of truth, I use tools and building blocks emerging from recent philosophical work that
can enable us to design a more satisfying theory of truth with greater explanatory power. Briefly these building
blocks include the following: The first building block is to be clear what truth is about. For Donald Davidson a
theory of truth deals with the utterances of language speakers, i.e. what they say and write [Davidson 1990: 309].
Gila Sher judges the main issue of truth as our disposition to question whether things are as our thoughts say they
are (26). For Davidson the point of having a concept like truth is that it forms an essential part of the scheme we
all necessarily employ for understanding, criticizing, explaining, and predicting thought and action [Davidson
1990: 282]. In this context truth functions as a normative concept that serves as a fundamental standard of
thought (Sher).
Michael Lynch suggests more building blocks when he sets out three truisms about truth that ought to guide our
theorising, i.e. objectivity [the belief that p is true if, and only if, with respect to the belief that p, things are as they
are believed to be (70)]; truth as a norm of belief [it is prima facie correct to believe that p if and only if the
proposition that p is true]; and truth as the end of inquiry [other things being equal, true beliefs are a worthy goal
of inquiry]. Another core requirement of a theory of truth is that it must tell us how truth is manifested in the
different domains of our cognitive life (19). Sher argues that these aspects of our cognitive lives should also guide
the development of a theory of truth: [i] the complexity of the world; [ii] humans' ambitious project of theoretical
knowledge of the world; [iii] the severe limitations of humans' cognitive capacities; and [iv] the considerable
intricacies of humans' cognitive capacities. She argues for a theory that can accommodate the fact that we use a variety of
routes to reach the world cognitively and thus claims that there are multiple routes of correspondence between
true cognitions and reality [6].
Sher's defence of a composite correspondence theory of truth that can explain the substantial correspondence
(of one kind or another) between correct cognition and reality shows the direction one might want to go. In
addition, her neo-Quinean model of knowledge, which re-interprets and balances the analytic-synthetic distinction
and gives new contents to the centre-periphery distinction, might become a fundamental building block as well.
Lynch hints at another building block for a theory of truth when he calls truth a functional property of sentences
that supervenes on a distinct kind of properties, and thus is multiply realizable (2009: 69). For him this means
that atomic propositions are true when they have the distinct further property that plays the truth role, i.e.
manifests truth, for the domain of inquiry to which it belongs (77). He accommodates truth as a concept with a
single meaning through the idea of a single property being manifested in this way, and views truth as plural in
the sense that different properties may manifest truth in distinct domains of inquiry (78).
Robert Nozick offers building blocks in favour of a more epistemic theory of truth. His theory implies that
truth is a property correct at a specific time, but it might be revised later. Truth thus is tentative, and Nozick
thus rejects a timeless idea of truth such that a fully specified proposition would have a fixed and unvarying
truth value [2001: 27]. Nozick does not presume that any proposition is wholly true, i.e. either a
flat-out success or a total failure, as he judges beliefs to have differing degrees of accuracy [Nozick 2001: 47].
In my paper I present a new theory of truth through engaging philosophers like Davidson, Sher, Lynch, Nozick and
scientific realists like Psillos, Chakravartty, and Devitt. This comprehensive theory integrates core insights to
explain in much finer detail how truth functions as a dominating factor in science.

C [3]: Selective Realisms

22) Xavi Lanao - Towards a Structuralist Ontology: an Account of Individual Objects


Ontic Structural Realism (OSR) was initially motivated by metaphysical reflections on the nature of the quantum
world that imply a fundamental underdetermination about object individuality (French [1989]; French and Krause
[1995]). On the basis of this underdetermination, advocates of OSR argued for the incompatibility of an
object-oriented ontology with fundamental physics (French and Ladyman [2003]; French [2006]). Since the first
explicit formulation of OSR by James Ladyman [1998], several versions of OSR have appeared in the philosophical
literature, fundamentally differing in the ontological status assigned to objects. The most radical version of OSR,
proposed by Ladyman and French [2003], advocates for the elimination of individual objects from the scientific
realists ontology, leaving structures as the only fundamental entity. More moderate versions of OSR do not
remove objects from their ontology, but reconceptualize them in a structuralist-friendly way. All different versions
of OSR, however, coincide in holding that individual objects should not be included in the fundamental realist
ontology. Yet the radical metaphysical revision proposed by OSR faces several conceptual difficulties that
challenge its ontological project.
Two kinds of challenges can be differentiated in the literature: what I will call the conceptual coherence challenge
and the explanatory fruitfulness challenge. The conceptual coherence challenge casts doubt on the possibility of
developing a coherent concept of structure in the absence of objects. This challenge has been presented in the
literature in different ways: by pointing out the conceptual contradiction of postulating relations without relata
(Psillos [2001]; [2009], ch. 8); by demanding a precise account of the identity conditions of structures, not only as
kinds but also as individual instances, without the inclusion of individual objects (Chakravartty [2003], [2007]); or
by suggesting that the concept of structure used in OSR collapses into Platonism (Cao [2003]; Psillos [2009], ch. 8).
The explanatory fruitfulness challenge questions whether OSR can successfully explain central features of scientific
activity. Chakravartty ([2007], sec. 3.4) and Psillos ([2001]; [2009], ch. 8) argue that objects are central in any
explanation of change and causality and, as a result, a purely structuralist ontology will have explanatory gaps.
Although efforts have been made to answer these challenges, the answers given have mostly been developed as
responses to particular criticisms and they are partial and scattered; there has been little systematic effort to
develop a comprehensive structuralist ontology. In this paper, I take a first step towards developing a
comprehensive structuralist metaphysics. In particular, I will focus on providing the conceptual tools that allow
the construction of an account of individual objects within the structuralist framework. A precise account of
individual objects is particularly relevant for clarifying the ontology of OSR because both of the main ontological
challenges to OSR stem from the worry that the displacement of objects from the center of the realist ontology
results in, at best, an explanatorily inert ontological framework and, at worst, plain incoherence.
In order to develop the structuralist ontological framework, I build on the conceptual and formal apparatus of L.A.
Paul's Mereological Bundle Theory (see Paul [2005], [2006], [2012], and more recently her 'A One Category
Ontology'). The aim of Paul's ontological project is to maximize ontological parsimony by showing that properties
suffice for building up the whole structure of the world without the need of any further ontological categories.
Thus, the whole world is built from a single fundamental category (properties) by a single fundamental
building relation (mereological composition). I adapt Paul's general ontological framework to give an account of
individual objects in terms of structural relations. My account of individual objects can be considered a version of
the bundle theory, which takes objects as bundles of relations in the following sense: Structuralist Bundle Theory
(SBT): For any object x, there is a set of relations S such that (i) x instantiates S, and (ii) x is fully mereologically
composed of S.
Although this definition is just the bare bones of the theory, some brief remarks can be made here to clarify its
reach. First, SBT preserves the core ontological proposal of OSR by entailing that relations are more fundamental
than individual objects, that is, that objects ontologically depend on relations. Second, SBT takes advantage of the
formal apparatus of mereology in defining objects as mereological sums of relations. As a result, the ontological
weight of objects is completely accounted for by the relations they are composed of; there is nothing over and
above these relations. This definition allows OSRists to have an operative concept of individual object without
postulating mysterious entities or accepting any non-structural entities in their ontology. Finally, SBT takes
relations to be aspects or parts of structures. Relations are taken to be ontologically fundamental and concrete.
The concreteness of structures is captured by understanding structures as analogous to n-adic immanent
universals, that is, as spatio-temporally located universals that are wholly present in each of their instances.
The aim of SBT is to provide a general framework for structuralists with which to define individual objects within
the boundaries of OSR that helps to clarify certain obscurities of the structuralist ontology. First, SBT can help
dispel some worries about the conceptual coherence of OSR. By analyzing structures in terms of a well-understood
metaphysical notion, immanent universals, SBT provides a clear notion of object that may help to elucidate how to
understand the structuralist framework as a whole. Further, SBT explains why OSRists usually regard objects
qua relata as mere heuristic tools that are ontologically irrelevant, by clarifying the ontological
dependence relation between objects and structures. Finally, SBT can shed some light on the explanatory challenge
by providing an account of objects that is able to ground our explanatory practices. By providing a conceptually
useful criterion with which to individuate objects, SBT allows OSR to appeal to objects and causal relations as
heuristic devices used for explanatory purposes. Although objects are not ontologically fundamental entities, they
do correspond to real aspects of reality: the mereological sum of relations they are composed of. SBT captures the
ontological insights of OSR in a conceptual framework constituted by well-understood and precise metaphysical
concepts, thereby facilitating the dialogue between OSR and other ontological frameworks and making the radical
metaphysical revision of OSR more palatable.

23) David William Harker - Whiggish history or the benefit of hindsight?


Successful scientific theories, which have been replaced by radically distinct alternatives, challenge the idea that
success is a reliable indicator of (even approximate) truth. Confronted with particular examples of such theories,
however, it is often tempting to respond that while the replaced theories were defective in some respects, they
nevertheless contained genuine and substantive insights about the domain they purport to describe. Aether
theorists may have erred in their assumption that light is propagated through an all-pervading, elastic solid, but
much of the work conducted by aether theorists remains central to modern optics. Given this general kind of
response, the replacement of successful theories is wholly unsurprising: the theories included mistakes that their
successors overcame. The success of such theories, we might suggest, is similarly unsurprising: it is attributable
to the fact that these theories were true in certain respects. Thus, if we could sensibly attribute success to the parts of
replaced theories that have been retained, and dismiss the rejected components as playing no role within that
theory's success, then the inference from success to approximate truth would appear to survive the historically
based threat. The suggestion that realist commitments should extend only to certain parts of scientific theories,
namely those that are somehow responsible for the successes of that theory, has become known as selective
realism.
Selective realism requires us to re-assess replaced scientific theories, to establish which constituents contributed
to the theory's success and which did not. One of several widely appreciated challenges for such assessments,
however, is to avoid inappropriate reliance on contemporary scientific theories. This is known as the problem of
whiggish history. If I use the perspective of contemporary theories both to evaluate which parts of past theories
were approximately true and to determine which parts were responsible for the successes of those theories, then
the results are likely to converge. However, that convergence may result simply from the shared perspective from
which the questions are addressed, and hence may not provide any good reason for supposing that past theories
were successful because some of their parts were approximately true. It simply begs the question, against those
who doubt that success indicates approximate truth, if past successes are attributed without independent
justification to those parts of past theories that have been retained in modern theories.
One response to the problem of whiggish history seeks means of splitting scientific theories in ways that involve
no dependence on more modern scientific understanding. I'm not overly optimistic about the strategy. More
importantly, there are reasons to worry that a complete embargo on current knowledge, for purposes of
reassessing past theories, is unnecessarily restrictive. Science is self-correcting. We know more about confounding
variables, previously unappreciated alternative explanations, vulnerabilities of observations and equipment, and
so on. In short, we appear better situated to evaluate past theories than those who came before us. We shouldn't
be too hasty to deny ourselves access to such advantages when evaluating past results.
What becomes an important issue, therefore, is that of understanding when our history has become whiggish. In
this paper I argue that this question requires us to consider the kinds of questions we ask about replaced theories
and the evidence that was offered in their support, the ways in which we utilize contemporary understanding for
purposes of addressing those questions, and what realists can justifiably conclude on the basis of these analyses.
For example, there is an important distinction between dismissing the postulates of rejected theories on the
grounds that these have not been retained within our current scientific image, and dismissing those same posits
for reasons that can be independently justified. It was once considered a verifiable fact that the size of Venus, as
observed from Earth, did not vary appreciably. We now know that the spatial resolution of naked-eye observations
is insufficient to trust such determinations. Insofar as Ptolemaic astronomy successfully accounted for
observations of Venus's apparent size, we now recognize that success as an illusion. Now, critics of the success-to-
truth inference will respond that today's successes could be similarly illusory. This is a response that needs to be

taken seriously, but I'll argue that it gestures towards the kinds of scientific realism that are viable rather than
presenting a decisive obstacle to scientific realism in all its forms.
Today's scientific realists are fallibilists. Fallibilism is consistent with the idea that science is self-correcting, and
that our theories are becoming more truth-like. A realist might therefore claim only that science becomes more
truth-like as we unearth and remove errors, although now the role of success in defending realism becomes more
obscure. More positive realist theses are also worth consideration, however. Fallibilism about scientific knowledge
can motivate the idea that scientific success is best understood comparatively: theories are not successful, or
unsuccessful, but more or less successful than available alternatives. Coupling a comparative notion of success
with the selective realist strategy suggests we should be particularly interested in those novel insights that have
induced scientific progress. If comparative empirical success is achieved through the introduction of more truth-
like ideas, then we might anticipate the retention of those ideas, or at least something closely approximating those
ideas. There are further worries to be confronted here, but the problem of whiggish history is one I'll argue we
can overcome. Ultimately, the problem of whiggish history need not extend to a complete prohibition on current
understanding, although there are important lessons to be learned about how history can inform the realism
debate and what kind of selective realist theses might emerge.

24) Christian Carman & José Díez - Launching Ptolemy to the Scientific Realism Debate: Did Ptolemy Make Novel and
Successful Predictions?
Scientific realism (SR) about unobservables claims that the non-observational content of our successful/justified
empirical theories is true, or approximately true. As is well known, its best argument is a kind of abduction or
inference to the best explanation, the so-called Non-Miracle Argument (NMA): if a theory is successful in the
observational predictions it makes using non-observational content/posits, then, were such non-observational
content not to correspond, at least approximately, to the world out there, its predictive success would be
unexplainable/incomprehensible/miraculous. In short: SR provides the best explanation for the empirical
success of predictively successful theories. Empiricists like Van Fraassen may argue that the NMA is question-begging,
or simply has false premises, for there is another (at least equally good) explanation of empirical success, namely
empirical adequacy. Yet most realists feel comfortable replying that empirical adequacy provides no
explanation at all, or at best an explanation inferior to (approximate) truth.
This comfortable position enters into crisis when Laudan brings the pessimistic meta-induction (back) into the
debate. Laudan recalls that the history of science offers us many cases of predictively successful yet (according to him)
totally false theories, and provides a long list of such cases. Laudan's confutation was contested in different ways,
among them arguing that his list contains many cases in which the theory in point was not really a piece of mature
science and/or that it was cooked up for making the successful predictions. But not all cases could be so contested, and
realists acknowledged that in at least two important cases, the caloric and ether theories, we have successful and novel
predictions made with a theoretical apparatus that posits non-observable entities (caloric fluid, mechanical ether)
that according to the next, superseding theories do not exist at all, not even approximately. Realists accept that
they must accommodate such cases, and the dominant way of doing so is to go selective: when a theory makes a
novel, successful prediction, the part of its non-observational content responsible for such a prediction need not
always be the whole non-observational content; often it is only part of the non-observational content that
is essential for the novel prediction, and it is only the approximate truth of this part that explains the
observational success.
Selective Scientific Realism (SSR) may be summarized thus: really successful (i.e. with novel predictions)
predictive theories have a part of their non-observational content, the part responsible for their successful
predictions, that is (a) approximately true, and (b) approximately preserved by the posterior theories which, if
more successful, are more truth-like. SSR(a) explains synchronic empirical success and SSR(b) explains diachronic
preservation (and increase) of empirical success.
Since we don't have independent, non-observational, direct access to the world to test SSR(a), the claim that is
relevant for testing SSR as a meta-empirical thesis is SSR(b). And selective realists claim that the history of science
confirms SSR(b). According to them, the historical cases that count as confutations or anomalies for plain, non-
qualified realism are actually confirming instances of its more sophisticated, selective reformulation SSR.
Although the caloric and ether theories are false, they are not totally false: they have a non-observational part,
responsible for the relevant novel successful predictions, which (is approximately true and) has actually been
approximately retained by their historical successors. Then history confirms SSR(b), the only testable part of SSR,
and SSR is an empirical thesis that, though fallible, is historically well confirmed.
Confronted with an alleged case of a theory that made novel, successful predictions but such that, the opponent
argues, its non-observable content is not retained by the superseding theory, the selective realist must find a
part of the theory in point that is both (i) sufficient for the relevant prediction and (ii) approximately retained by the
superseding theory. As an empirical thesis, SSR may have anomalies, and the way it must fix them is always by making
this divide et impera move. According to some, SSR successfully fixed the caloric and ether anomalies, while

according to others it has not, or not yet, or not fully. The debate continues, and other potential anomalies are
presented and discussed. For instance, the phlogiston case was initially dismissed as a pseudo-case but later
acknowledged by some as a genuinely troublesome case, and faced in a similar SSR manner.
Our goal here is to launch a new case into this debate: Ptolemaic astronomy. This was another item in Laudan's list,
though quickly dismissed as not really making novel predictions. We argue that this is not so. The "no novel
predictions" tag put on Ptolemy's astronomy is a consequence of the mere "epicycle+deferent" reading of the
theory, a myth that, like all myths, is extremely popular but false. We find this case particularly useful, for it is
easier to find here the parts responsible for the predictions. In other cases, such as the caloric or ether ones, much
of the discussion and disagreement between realists and their opponents concerns whether some non-observational
part of the theory was/was not really necessary for the relevant prediction. Was the hypothesis of a mechanical
substance with orthogonal vibration necessary for deriving Fresnel's laws, from which the white spot prediction
follows? Realists say no (to the mechanical substance), opponents say yes. Was the material fluidity of caloric
essential for the derivation of the speed of sound in air? Realists say no, opponents say yes. Likewise in other cases.
We find Ptolemy's case especially useful in this regard, for here the contents responsible for the predictions are
relatively easy to identify.
We find this case not only especially useful but also especially interesting, for here the SSR strategy of trying to find
in the superseding theory the approx retention of the prediction-responsible parts seems prima facie particularly
difficult, if not unpromising. But a detailed discussion of, and conclusion about, each case for the SSR debate goes
beyond the limits of this paper. Our goal here is more limited: just to launch these predictions into the scientific realism
arena, showing that Ptolemy's case deserves attention in this debate.
We will present and discuss what we think are the best candidates in Ptolemy's astronomy for successful novel
predictions:
(1) The parallax/distance of the Moon at syzygies
(2) The phases of inner planets at first conjunction and outer planets at conjunction
(3) That Mercury and Venus are the only planets between the Sun and the Earth
(4) The growth of brightness during retrograde motion for Mars
(5) Mars is not eclipsed by the Earth
The conclusion is that, except perhaps for the first, these cases represent a challenge that the selective realist must
face.

25) Timothy Lyons - Epistemic Selectivity, Historical Testability, and the Non-Epistemic Tenets of Scientific Realism
In Part One of this paper, I survey a set of live-option meta-hypotheses that contemporary scientific realists claim
we can justifiably believe. More carefully, scientific realists offer an empirical meta-hypothesis about scientific
theories that, they claim, we can justifiably believe. In its unrefined formulation, that hypothesis is: "successful
scientific theories are approximately true". The justification for the second correlate of this meta-hypothesis,
approximate truth, is that it constitutes the only, or at least the best, explanation of the first correlate, success.
Prompted (e.g. by Laudan) to address empirical, historical challenges against that unrefined version of the realist
meta-hypothesis, realists have modified the correlates of that meta-hypothesis (and accordingly, the elements of
their explanatory argument). In particular, realists have introduced into their meta-hypothesis various criteria
meant to pick out particular constituents of scientific theories. Here I endeavor to identify the most charitable
formulations of the recent criteria advanced by realists and, hence, the most recently formulated meta-hypotheses
that todays realists claim we can justifiably believe.
Upon doing so, I argue that each resulting live-option meta-hypothesis falls into one or more of the following
categories: (1) the criterion packed into the second correlate of the meta-hypothesis fails to pick out those
constituents that are genuinely responsible, and so deserving of credit, for success; hence, crucially, the
criterion fails to offer the selective realist any kind of explanation of that success; or (2), the criterion merely
immunizes epistemic realism, resulting in a meta-hypothesis that is neither testable nor able to inform us of just
which theoretical constituents are picked out by the meta-hypothesis that realists claim we can justifiably believe;
or (3), the criterion fails to pick out constituents that reach to a level deeper than the empirical data, thereby
failing to license commitment to a meta-hypothesis that goes beyond meta-hypotheses that are happily embraced
by anti-realists; or finally, (4), although the criterion is relevant, testable, and adequately realist, the meta-
hypothesis containing the criterion as a correlate is in significant conflict with available historical data.
In Part Two, I focus on those realist meta-hypotheses falling into category (4). Doing so, I offer a novel account of
the nature of the historical argument against epistemic realism. I contend that the form and content of this novel
argument render untenable even a fallible, conjectural variant of epistemic realism. Also, mindful that some
historians are inclined to deny the legitimacy of testing philosophical hypotheses against the history of science, I
show that, when these realist meta-hypotheses are properly understood, for instance, as hypotheses about
scientific texts, their testability is no more problematic than the testability of scientific hypotheses. And inquiry
should not be banned in the former case any more than it should in the latter.

In Parts Three and Four, I direct these arguments toward a positive conclusion. Despite the fact that some
otherwise relevant, testable, and adequately realist meta-hypotheses conflict with historical data (those category
(4) meta-hypotheses), the examination and testing of such meta-hypotheses is nonetheless profoundly informative
toward the development of a proper and, in fact, still realist conception of science. In more detail, one can invoke
the selective realist's primary insight at a higher level: as above, selective scientific realists acknowledge
that scientific theoretical systems contain various individual constituents, and they endeavor to formulate an
appropriately refined/restrictive meta-hypothesis that we can justifiably believe; yet just as scientific theoretical
systems consist of individual theoretical constituents, scientific realism itself consists of a set of individual
constituents or tenets. The epistemic tenet that we can justifiably believe the selective realist meta-hypothesis (for
instance, that those theoretical constituents genuinely contributing to successful novel predictions are
approximately true) is only one constituent of scientific realism. I argue that one can fruitfully bracket
belief, or at least our obsession with what we, or scientists, can justifiably believe, and so bracket that epistemic
tenet of scientific realism.
After doing so, in Part Four, I identify and isolate a set of non-epistemic yet nonetheless fundamental constituents
or tenets of scientific realism, whose articulations, I contend, have received insufficient attention, especially when
compared to the epistemic tenet around which the debate has pivoted. For instance, while realists emphasize that
the primary aim of science is truth, just which subclass of true claims science seeks remains inadequately
explicated. Also, although realists embrace inference to the best explanation as the primary mode of inference by
which science endeavors to attain truth, realists (as well as non-realists) readily admit that we lack a proper
understanding of just what this kind of inference amounts to. My proposal here is that, despite the historical threat
against the epistemic tenet of scientific realism, careful attention to the testable meta-hypotheses of selective
realism and the historical data put forward against those meta-hypotheses affords us a far more informed
articulation of these other non-epistemic yet nonetheless wholly realist theses, e.g. regarding the kind of truth
science seeks and the mode of inference employed to attain it. And, bracketing belief as I am proposing, these
refined articulations can be deployed, in the way scientific hypotheses can be deployed, as tools for further
inquiry. That is, with these more informed articulations comes a better understanding of the nature of past
scientific practice, one that may even afford contemporary scientists themselves liberation from some of the myths
(e.g. whiggism) to which they may inadvertently maintain a commitment. Hence, although I challenge the view that
selective realism succeeds in picking out a meta-hypothesis we can justifiably believe, I seek to show how testing
such empirical meta-hypotheses against the history of science can be immensely valuable, perhaps even toward
the advancement of scientific inquiry itself.

26) Peter Vickers - A Disjunction Problem for Selective Scientific Realism


Selective scientific realists make a realist commitment not to a whole theory which enjoys predictive success, but
instead to the parts of the theory which are doing the scientific work to bring about that success. But what it means
to "do the scientific work" is still very much an open question. A reasonably uncontentious starting point is to make
it a sufficient condition for doing scientific work that a constituent plays a clear role in the derivation of a
successful, novel prediction. But we may ask: what does "clear role" mean?
Suppose for a given case we find a derivation of a prediction in the scientific literature. Can we claim that the
assumptions and/or equations written down as part of that derivation are working, and therefore require realist
commitment? No, this would be a mistake. Just because some assumption has been used to reach a
successful/correct conclusion, doesnt mean we ought to believe it is (approximately) true. Toy counterexamples
bring this to life. A doctor might be quite wrong about your having the adenovirus, and yet still reach correct
conclusions about how your symptoms will develop. There is no miracle here. The reason the doctors
conclusions are correct is that you do have one of the cold viruses, and the doctor is committed to you having one
of the cold viruses in virtue of her more specific assumption that you have the adenovirus. And, crucially, the
reasoning doesnt depend on the virus specifically being the adenovirus. Similarly, one might correctly predict a
lifts cables will snap by reasoning with the false assumption that the load is 50kg too heavy (Saatsi 2005, p.532).
One predicts successfully, since the reasoning only depends on the assumption that the load is too heavy the
belief that it is specifically 50kg too heavy is idle (not working).
If playing a role in the derivation of a successful prediction is not sufficient to be doing work in the sense
relevant to realist commitment, what is? The above examples suggest that one should consider, for any given step
in a scientific derivation, whether that step would have gone through with less specific, weaker assumptions. If it
would, then the realist only makes a commitment to those weaker assumptions. But what do we mean by "weaker"?
In the above toy examples we can think of B being weaker than A whenever B is entailed by A: "Dave has one of the
cold viruses" is entailed by "Dave has the adenovirus", and "The load is too heavy" is entailed by "The load is 50kg too
heavy". Thus Vickers (2013) defines "weaker" in terms of entailment within his own theory of selective realist
commitment. However, in general we can't mean "entailed by" when we say "weaker than"; this leads to a
disjunction problem.

The problem, briefly, is as follows. Given some assumption A involved in the derivation of a prediction, a weaker
(entailed) assumption is always AvB for any arbitrary B. Now if B is chosen/engineered so that it can be used to
make the derivational step go through, then AvB is an assumption weaker than A which is sufficient for the
derivation to work. Allowing such disjunctions within the realist's definition of "weaker" would lead to some very
odd results for realist commitment. Take the case of the plummeting lift: the realist's commitments would have to
include some peculiar disjunction such as "The load was too heavy OR somebody cut the cables OR the cables were
old and weak OR ...". This is a statement, entailed by "The load is too heavy", which is sufficient to reach the
successful prediction that the lift will indeed plummet. But then a concern arises that the realist's commitments
can never fail, since whatever caused the lift to plummet can be tacked on at the end as an extra disjunct. In
addition, many anti-realists would be happy to make this sort of commitment: of all the different disjuncts one
of them might be (approximately) true, but the problem is that we can't know which one, since all of them lead to
the same successful prediction.
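The weakening move driving this problem is simply disjunction introduction, and the logical point can be checked mechanically. The following sketch (our own illustration, not part of the abstract, with hypothetical atoms standing in for the lift example) verifies by truth table that A entails AvB for an arbitrary B, while AvB does not entail A:

```python
from itertools import product

# Hypothetical atoms for the lift example (illustrative labels only):
# A: "the load is too heavy"; B: "somebody cut the cables"
ATOMS = ("A", "B")

def valuations(atoms):
    """Yield every truth-value assignment to the atoms."""
    for bits in product((False, True), repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def entails(premise, conclusion):
    """premise |= conclusion iff conclusion holds in every valuation where premise does."""
    return all(conclusion(v) for v in valuations(ATOMS) if premise(v))

A = lambda v: v["A"]
A_or_B = lambda v: v["A"] or v["B"]

# Weakening by disjunction introduction always succeeds, for any arbitrary B...
assert entails(A, A_or_B)
# ...and the disjunction is strictly weaker, since it does not entail A back:
assert not entails(A_or_B, A)
```

Since any B whatsoever can be slotted into the second atom, an "entailed by" reading of "weaker than" licenses exactly the arbitrary disjunctions described above.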
In theories of causal explanation we find a very similar disjunction problem. Sometimes explanations include
overly specific claims, and one reaches a better explanation by introducing more abstract (weaker) claims.
However, this goes wrong if one allows the move from claim A to claim AvB for arbitrary B. To get around the
problem, Strevens (2008) introduces a cohesion requirement, which acts as "a brake on abstraction, halting it
before it crosses the line into disjunction" (p. 103). Whether this cohesion requirement can help the realist with
her own disjunction problem will be investigated in this paper.
References
Saatsi, J. (2005): "Reconsidering the Fresnel-Maxwell Case Study", Studies in History and Philosophy of Science
36(3): 509-538.
Strevens, M. (2008): Depth. Cambridge, MA: Harvard University Press.
Vickers, P. (2013): "A Confrontation of Convergent Realism", Philosophy of Science 80(2): 189-211.

27) Raphael Kunstler - The Semirealist's Dilemma


The goal of my paper is to evaluate semirealism's ability to satisfy selective realism's requirements. I try to show
that, in spite of its qualities, semirealism (henceforth SMR) is an ambiguous position, and that this ambiguity
threatens the whole project of providing a realist response to the pessimistic argument. The claim of my paper is
that semirealism answers PA, but the price of this answer is a closet instrumentalism. This assessment leads me to
wonder how to fix Chakravartty's position.
Framing the problem. An uncontroversial definition of scientific realism defines it as the belief that theoretical
posits are true, and that the objects described by scientific theories really exist. One leading argument against this
position is the pessimistic argument (PA). Instead of rejecting this argument, a possible response to it is to
reformulate one's realism. It is possible to grant to the argument that a part of our theories is not well grounded,
without renouncing the belief that some of what the theory says is true. This strategy, which is sometimes called
selective realism (SR), claims that only a part of these objects exist. Its main challenge is to find a reason to prefer
some parts of theories over others (Chakravartty, 31). This choice must not be arbitrary: it needs to rest on a
reliable epistemic base, that is, on good reasons to believe the theoretical content of theories.
Let us make explicit the requirements that should be met by any selective realist strategy. In order to do so, it is
useful to distinguish selective realism from selective skepticism (SS). According to Chakravartty, SS consists in
restricting one's theoretical commitments to only a part of scientific theories, in order to identify those parts that
are not exposed to future refutation. To be successful, a selective skepticism strategy does not have to be as precise
as a selective realism strategy: selective realism implies selective skepticism, but the reverse is false. A selective
skepticism strategy is not necessarily a realist one. This can easily be shown: instrumentalism is a selective
skepticism, but not a selective realism. Chakravartty underestimates the fact that instrumentalism is also an SS
position. In its original intention, Laudan's argument should not be construed as a one-step argument, but a
two-step one. His strategy is fully revealed in his answer to Alex Rosenberg. The lesson to be drawn from this
distinction between SS and SR is that a satisfying realist answer to PA should not only be realistic, but also realist.
Therefore, a selective realist strategy (SRS) should satisfy the three following requirements: (1) the ability to resist
PA; (2) the identification of an epistemic base capable of supporting non-empirical claims; (3) the identification of
a non-empirical content that is epistemically justified. There are three main ways of failing to be a complete SRS:
(A) instrumentalism, which only satisfies requirements (1) and (2); (B) constructivism, which only satisfies
requirements (1) and (3); (C) convergent realism, which only satisfies requirements (1) and (3).
Two interpretations of semirealism. SMR can be defined by the three following claims: the empirical base of
SMR is the knowledge of detection properties (this is the lesson drawn from entity realism); the content of
SMR's commitment is concrete structures; a concrete structure is a set of detection properties. The selective
skepticism aspect of SMR is encapsulated in the distinction between detection and auxiliary properties. Only
detection properties should be regarded as robust. In order to assess SMR's value in the light of SR's
requirements, it is necessary to clarify Chakravartty's

position. The notion of a detection property plays a central function in SMR. In order to clarify this notion, let us
introduce a preliminary distinction between experimental and unobservable properties. An experimental
property is a property of an experimental setting: it is an observable property, and it can be used as an epistemic
justification of other claims. A detector has experimental properties: it is observable and gives us good reasons to
believe some propositions. An unobservable property is a property of an unobservable object. In order to be a
satisfying selective strategy, a position should identify experimental properties that enable scientists to conclude
that some unobservable properties exist. Experimental properties should be the basis of realist knowledge, and
unobservable properties should be its object. Now, in the characterization of SMR, detection properties seem to
have two functions. On the one hand, they are seen as experimental properties. On the other hand, they are
regarded as the very content of scientific theories, as realist properties. The whole project of SMR is grounded on
the two following identifications: some of the effects of unobservable objects are experimental changes; the
causes of experimental changes are unobservable objects. To identify these two requires solving the epistemic
circle problem. The semirealist's dilemma is the impossibility of simultaneously satisfying the three requirements
of a selective realism strategy. Detection properties are construed either as properties of the detectors (as
experimental properties) or as properties of the detected objects (as unobservable properties). This ambiguity
leads to two very different interpretations of SMR: if detection properties are the detectors' properties, they
provide a good epistemic justification, but only for an instrumentalist content; if detection properties are the
objects' properties, then they have a genuine realist content, but they are not epistemologically grounded.

28) Elena Castellani - Structural Continuity and Realism


Continuity through theory change is a key issue for scientific realism. In particular, structural continuity
through theory change plays a crucial role in the form of scientific realism called structural realism. The
possibility of individuating some continuity of structure between predecessor and successor mature
scientific theories was notoriously used by John Worrall, in his seminal 1989 paper on structural realism, for
obtaining "the best of both worlds": that is, to account for the empirical success of theoretical science, while
accepting the full impact of the historical facts about theory change in science.
Since Worrall's 1989 formulation, structural realism has been much discussed and different versions have
been proposed. But the central role attributed to structural continuity has remained unchanged. In general,
it is assumed that a form of continuity between subsequent successful theories is a necessary condition for a
realist stance. With respect to structural realism, the assumption is that preservation of some sort at the
structural level is what guarantees that fundamental scientific theories succeed in representing some real
structure of the physical world and are, in this sense, approximately true. Structural continuity is thus an
essential ingredient in the no-miracles argument (the empirical successes of our best theories would be
miracles if what these theories say is going on behind the phenomena were not true) when used in
support of scientific realism in its structuralist version.
Although commonly accepted as a condition for structural realism, structural continuity has become a matter
of some critical reflection in recent literature. One main point of discussion is the appropriateness of the
historical evidence provided. In particular, much attention has been paid to the case first used by Worrall to
motivate structural realism by means of structural continuity, namely the shift from Fresnel's theory of light
(as an elastic disturbance in the aether) to Maxwell's theory of light (as an electromagnetic wave). Worrall
himself has repeatedly acknowledged the atypical nature of the Fresnel-Maxwell shift and indicated, as much
more representative, the cases where the equations of the older theory reappear in modified form as special
cases of the newer theory; in particular, the cases where, as some parameter of the newer theory tends to
some limiting value, the newer theory's equations tend to the older theory's equations. Whether these indeed
are, as claimed by Worrall, the most straightforward cases of structural retention (or quasi-retention)
through theory change in the history of science is a debated issue, especially in consideration of the frequent
cases of discontinuous transformations from successor to predecessor structures (as in the case of singular
limits) in modern physics. Finally, some doubts have been raised about the strength of the correlation
between continuity (intended as preservation) and approximate truth, without however denying its crucial
role. According to Saatsi (2012), for example, continuity across theory shifts is not enough for the realist: the
kind of content found to be continuous should also be suitably explanatory. Another instance is Votsis
(2011), where the conclusion is that the structural continuity argument should be considered as inductively
strong rather than as deductive.
This paper aims at a critical evaluation of the structural continuity condition by taking into account one
aspect that is usually neglected in the literature on the subject: namely, the relevance of physical scales when
considering theory change in physics. In fact, physical theories are generally intended to describe a given
range of phenomena. They have specific domains of application, commonly defined in correspondence to
some range or level of the adopted physical scale (for example, the energy scale). In the passage from a
predecessor to a successor physical theory, the level at which the phenomena are considered may
remain unchanged (the inter-theory relations are within the same level: intra-level relations), or
there may be a variation of the domain of application, usually by a change of the value range of the scale
(the inter-theory relations are between different levels: inter-level relations).
It is a central claim of the paper that whether the domain remains the same or changes in the passage from
one physical theory to another is a question of crucial relevance when evaluating the significance of
structural continuity (as a kind of intertheoretic relation) for a structural realist stance. In particular, it is
maintained that how to understand successive theories is not independent of whether the inter-theory
passage is domain preserving or not. In current discussions on structural continuity, theory change is
generally taken to indicate the substitution of a preceding theoretical description with another more
successful one. The idea is that the first theory is eliminated when the second one is proposed. However, as
argued in the paper, this is not what happens in most of the cases of intertheoretic relations typically
considered as representative for discussing structural continuity: in particular, the cases where the relation
between a predecessor theory T and a successor theory T′ is such that, in a certain part of the successor
theory T′'s domain of application, the results of T′ are well approximated by those of the predecessor
theory T.
The paper is articulated into three parts. The first part goes back to the historical source of structural realism
by examining how the much-debated Fresnel-Maxwell shift is originally discussed by Poincaré in his 1902
classic text, Science and Hypothesis. The claim is that the actual issue, in this discussion, is not so much what
is retained in the change from a predecessor to a successor theory, as rather what is true in the case where there
are different descriptions of the same physics. The following parts are devoted to examining the structural
continuity condition in intra-level and inter-level cases, respectively. The conclusion is that (under certain
assumptions) structural continuity is a viable condition for a structural realist stance when the inter-theory
relations are within the same level, while this is generally not the case when the inter-theory relations are
between different levels.
References
H. Poincaré (1905), Science and Hypothesis [orig. La Science et l'Hypothèse, 1902], London: Walter Scott
Publishing.
J. Saatsi (2012), "Scientific Realism and Historical Evidence: Shortcomings of the Current State of Debate", in
H. W. de Regt (ed.), EPSA Philosophy of Science: Amsterdam 2009, Springer, 329-340.
I. Votsis (2011), "Structural Realism: Continuity and its Limits", in A. Bokulich and P. Bokulich (eds.),
Scientific Structuralism, Dordrecht: Kluwer.
J. Worrall (1989), "Structural Realism: The Best of Both Worlds?", Dialectica 43: 99-124.
29) Tom Pashby Entities, Experiments and Events: Structural Realism Reconsidered
The recent structuralist turn of scientific realism was introduced to smooth out the ontological discontinuities of
theory change. However, it has led some to strong metaphysical theses aimed at eliminating non-structural
elements from our ontology entirely. There is a tension here, I claim, with the idea that the foundation of our
empirical knowledge comes from the particular outcomes of experiments, and that ultimately what the scientific
realist should be committed to is the existence of experimentally accessible entities like electrons and positrons. I
contend that the structural realist owes an account of how a commitment to theoretical structure suffices to
justify ontological claims about theoretical entities made in specific situations, such as in the laboratory.
This problem can be clearly seen by considering elements of the philosophy of Wilfrid Sellars. In "Philosophy and
the Scientific Image of Man", Sellars addresses the conflict between two systematic modes of thinking about the
objective world: the manifest image and the scientific image. Only the scientific image involves the postulation of
imperceptible entities, and principles pertaining to them, to explain the behaviors of perceptible things.
However, according to James Ladyman and Steven French's Ontic Structural Realist, the scientific image
(properly understood) posits structures, not self-subsisting individual entities. Whereas Bas van Fraassen's
Constructive Empiricist sought to defang the scientific image by eschewing the ambitions of scientific theories to
represent more than the observable phenomena, the Ontic Structural Realist advocates the elimination of non-
structural objects and thus apparently seeks to eliminate the manifest image.
I contend that to do so would be a mistake. First, as Sellars maintains, the manifest image is open to revision,
rather than elimination. Second, the scientific image has the manifest image as a foundation and, in Sellars'
words, "pre-supposes" the manifest image. If Sellars is right, then the scientific image cannot get off the ground
without a robust commitment to the perceptible objects of the manifest image. In other words, the very
foundation of scientific realism is a straightforward realism about ordinary objects and events as concrete
particulars. This idea is captured nicely by Ian Hacking's Entity Realist, who takes the instrumental success of
experimental practices and manipulations (when explained in terms of the existence and properties of
unobservable entities) to justify belief in theoretical entities. It is precisely this belief in entities as individuals
(with explanatory power) that is undercut by Ontic Structural Realism.

Rather than abandoning this entity-inspired path to scientific realism, the structural
realist can choose to revise (rather than reject) the manifest image. My proposal to do so involves rejecting the
idea of the manifest image that ordinary persisting objects are (metaphysically speaking) self-subsistent
individuals, while preserving the commitment to the existence of concrete particulars. The concrete particulars I
endorse are events, that is, spatio-temporally located happenings or occurrences. The possibility of modifying the
manifest image in this manner was considered by Sellars in "Time and the World Order", and actively pursued by
Bertrand Russell in The Analysis of Matter (and later works). In short, the proposal is for an event ontology, in
terms of which both the manifest and scientific images may be understood.
Indeed, Russell's proposal for an event ontology provides the historical context for his adoption of structural
realism (a circumstance that is seldom reflected upon in later discussions of Russell's structuralism). Where
Russell advised restricting belief to structural relations, these were relations among external events rather than
objects. This provided the means for an account of veridical perception in terms of a series of events which
together comprise both the object perceived and its perception. Considering this account in the context of Sellars'
epistemological realism of Empiricism and the Philosophy of Mind, we see that an event ontology provides the
opportunity for a rapprochement of the manifest and scientific images. That is, the capacity of the human
observer to truly perceive objects in the external world need not be underwritten by the existence of what is
perceived qua persisting individual, but rather by an appropriate concatenation of related events.
But this is precisely what Russell believed of unobservable entities like electrons. When we reflect on the
observable evidence we have for such entities, such as cloud chamber tracks or dots on a luminescent detector,
these are best characterized as observable events rather than objects. According to Russell, however, persisting
objects are themselves nothing more than a series of events. Thus, within an event ontology, this evidence is no
different in kind from that which serves to underwrite our belief in macroscopic entities such as tennis balls,
water, or lightning. In turn, inspired by Sellars, I propose an understanding of our ordinary perception of such
objects in terms of mental capacities for immediate detection which may be extended into the scientific realm of
remote detectors and inferential knowledge. In this way, Russell's event ontology provides the means for a
structural realist to bridge the divide between the manifest and scientific images.

30) Angelo Cei The Epistemic Structural Realist Program. Some interference.
The paper is intended to assess the prospects of Epistemic Structural Realism (ESR) to constitute a sound realist
response to antirealist preoccupations raised by deep historical changes in science. This aim is achieved by
contrasting various forms of ESR with a case of theoretical change in the history of physics. In particular, I will
devote my attention to the explanation of the Zeeman effect offered in Lorentz's Theory of Electrons and how it
looks from the perspective of relativistic electrodynamics. The various positions will be contrasted with this case
and the prospects of ESR evaluated in this context.
Deep changes in theoretical frameworks constitute a major challenge for realist positions on science. The family
of antirealist arguments that exploits this historical fact goes under the heading of pessimistic meta-induction
(PMI). The argument questions the fundamental idea that an abductive inference from success to truth is
legitimate and is the best explanation of the success of science. It does so by drawing on the harshness of historical
lessons: past dismissed theories were, after all, instances of successful science, but they are now taken as false. On
the one hand, there is a wide range of realist attacks on PMI. On the other hand, several theories in the history of
physics exhibit commonalities captured by mathematical structures. Worrall (1989) turned one of these cases into
a proposal for a highly debated version of realism. He insisted that we are justified in believing in the equations of
our best physical theories. These theoretical features are in fact immune from the theoretical changes that are the
focus of the antirealist's concern. The case in point was the retention of Fresnel's equations in Maxwell's
electromagnetism. Worrall's picture conceded something to the antirealist: Fresnel's ether is gone; no trace of it
remains in modern science.
Nonetheless, we do have knowledge: it is knowledge of structure, though, and not knowledge of entities.
Hence we ought to embrace Epistemic Structural Realism (ESR). This view features a variety of alternative
versions that range from the adoption of the Ramsey sentence to updated versions of Russellian structural
knowledge (see Votsis, 2005).
In this work, I choose to confine myself to the context of physics and I intend to present ESR with the following
dilemma, developed through a case study: either ESR has nothing particularly structuralist to offer in defence of
realism, where "structural" refers to certain kinds of relations that allegedly survive the change; or a defence
based merely on structural features might not be sufficient to support a form of realism. This result will emerge
through the analysis of the two exemplar versions of ESR (in this sense we can talk of a program) and of various
criticisms available in the literature concerned with the topic.
The contrasting case for this analysis is the study of the prediction of the Normal Zeeman Effect (NZE). The NZE is
notoriously a phenomenon of alteration of the frequency of light due to the effect of a magnetic field on its source.
Depending on the intensity of the magnetic field, the effect of alteration of the spectrum of light varies
considerably, and a family of diverse effects may occur. The model adopted for the prediction in Lorentz's theory
of electrons (1916) explains the Zeeman effect as a precession in the period of oscillation of a radiating charge.
The radiating charge is an electron whose acceleration explains the emission of light. The alteration of the period
of oscillation of the electron due to the magnetic force exerted by the field determines an alteration in the
frequency of the light. The core features of this explanatory model are the Lorentz force and a model of the
electron as an extended body featuring a harmonic motion. The harmonic motion and the Lorentz force can feature
in a relativistic explanation as well, but the relativistic version of the model prescribes a point charge. A point charge
is in turn incompatible with the original classical explanation. Furthermore, a variety of physical magnitudes
involved in the prediction undergo a significant shift from the classical to the relativistic context. In this
context I test the Epistemic Structural Realist program.
I argue that this case, despite its prima facie favourability to the structuralist cause, puts considerable stress on
it. After having set the physics stage, I go on to articulate this argument by analysing the presuppositions that lie
behind (the various versions of) ESR and disambiguating the various conceptions of structure that are left
unaddressed in the literature. The contrast with the case study will show that a particular development of the
position seems to offer the best prospects.
References

Lorentz, H. A. (1916), The Theory of Electrons, Leipzig: Teubner.
Votsis, I. (2005), "The Upward Path to Structural Realism", Philosophy of Science, 72: 1361-1372.
Worrall, J. (1989), "Structural Realism: The Best of Both Worlds?", Dialectica, 43: 99-124.

31) Kevin Coffey Is Underdetermination a Problem for Structural Realism?


Enthusiasm for scientific realism is often tempered by considerations of theory underdetermination: that is, the
challenge of justifying belief in a particular theory when there are (or likely are) alternative theories equally
adequate to the empirical data. Recently, however, several philosophers have argued that (epistemic) structural
realism is insulated from the threat of underdetermination in a way that other, standard forms of scientific
realism are not. (Structural realism is the view, very roughly, that we ought to commit ourselves only to the
physical structures represented in our best theories, not the ontologies of entities and properties that constitute
the relata in those theories.) For the problem of underdetermination turns on the claim that the realist is
confronted with competing sets of commitments (e.g., competing theories) that are equally able to account for
the empirical data. But the commitments of the structural realist only include physical structures, not the
underlying theoretical ontologies. On this basis structural realists have argued that many (alleged) instances of
theoretically equivalent rival theories are actually instantiations of the same underlying physical structures,
despite their differing ontologies, and thus do not count as genuine competitors, at least not for the structural
realist; they only count as genuine competitors according to the commitments of standard realism. Structural
realism thus appears to defuse the threat of underdetermination that plagues other forms of scientific realism.
I argue here that the opposite is true, even granting the structural realist's strategy. By retreating to structure, the
structural realist avoids one form of underdetermination only at the cost of exposing herself to another form of
underdetermination (structural underdetermination) which doesn't pose a problem for standard realism. This
new form of underdetermination arises on account of the fact that, in eschewing interpretive claims of underlying
ontology, the structural realist is no longer able to identify superficially different structural representations
that are, intuitively, equivalent (i.e., that are reformulations of one another). This presents the structural realist
with cases of underdetermination that simply do not arise for the standard realist. Moreover, unlike more
traditional forms of underdetermination, the ubiquity of which is controversial, there are reasons to think that
structural underdetermination is widespread. In this sense, then, I argue that the problem of underdetermination
is actually worse for the structural realist than it is for the standard realist.
This argument brings to the surface a question that has dogged structural realism at least since John Worrall
introduced the term: namely, what is meant by "structure"? For one natural way in which the structural realist
might reply is to argue that theoretically equivalent formulations with superficially different structures actually
do exhibit the same underlying structure. But the plausibility of this claim requires that the structural realist say
much more about the notion of structure that's operative in her account. I argue that the widespread Ramsey-
sentence approach to theory structure, for example, isn't able to make the sort of discriminations needed to
defuse many cases of structural underdetermination. I conclude by arguing that recent attempts to refine the
relevant notion of structure fail to appreciate how structural underdetermination arises, and thus are unlikely to
provide an adequate resolution of the problem.
32) Michael Vlerick - A biological case against entity realism


Entity realists hold that we should be realists about unobservable entities rather than committing ourselves to the
truth of the relational structure of the theory. The argument for this claim is that, on an experimental level, we're in
direct causal contact with those postulated entities (cf. Hacking 1983). While I'm sympathetic to some form of
selective realism, I argue that postulated unobservable entities are a poor choice to hinge claims of scientific
realism on. I base my argument on a biological consideration of what underlies our modelling of unobservable
entities. Modelling unobservables draws from two distinct but related sources. The first is our cognitively hard-
wired folk-physical concept of objecthood. The second is our vision-based imagination. Both, I argue, are adaptive
products of a blind evolutionary process enabling us to deal with real-world problems in a way that promoted
survival and reproduction in ecologically relevant contexts. However, when it comes to the ultimate structure of
the physical world, there is absolutely no reason to assume that these sources are truth-tracking.
According to Krellenstein (1995: 242), perception-based concept formation is at the source of the way we
conceptualise unobservable entities such as atoms, electrons, and quarks. McGinn (1989: 358) refers to this as the
principle of homogeneity. The theoretical models we use to describe the world are shaped by an analogical
extension of what we observe. In this regard, we arrive at the concept of a molecule based on our perceptual
representations of macroscopic objects, conceiving of smaller-scale objects of the same kind. Underlying the way
we conceptualise unobservables, therefore, is our innate folk-physical concept of objecthood. Empirical evidence in
developmental psychology shows that we are strongly predisposed to view the world as consisting of discrete
objects (see for example Kellman and Spelke 1983 on the notion of objecthood in infants). The notion of
objecthood in this regard is not acquired through inductive learning, but wired into our brain, providing us with an
innate and useful framework to navigate the physical world. However, as in the case of so many innately grounded
folk concepts, their usefulness does not warrant their truthfulness. Take our innate predisposition to classify the
organic world based on an intuition of a hidden trait or essence that members of the same group share with each
other (Pinker 1997: 323). While this enabled our ancestors to make useful predictions about organisms,
essentialism is flatly contradicted by Darwin's theory of evolution.
A second source underlying our conceptualisation of unobservables is visualisation. Whether we model electrons
as circling around nuclei, protons as being made up of quarks, or replace particles with strings, we use mental
imagery and therefore draw from our visual apparatus. Indeed, the fact that mental imagery is connected with our
sense of vision is, besides being supported by simple observation, confirmed by extensive empirical evidence (cf.
Kosslyn 1980). Therefore, the particular perceptual mechanisms we have evolved provide the grounding for the
perception-based models we produce in theorizing about the world. Moreover, rather than being restricted to our
five senses in modelling physical entities and properties, we are restricted to vision alone. Evolution did, indeed,
provide us with a dominant sense. As primates, we rely primarily on our sense of vision. This entails that our
representation of the world is mainly a visual one, as opposed to that of many other mammal species, which rely
more on their olfactory and/or auditory senses. This dominant sense, rather than just providing us with the set of
data upon which we rely the most in our interaction with the environment, also underlies the way in which we, as
cognitively highly developed primates, come to conceptualise the physical world.
Postulated unobservable entities, in this regard, are very much an extension of the way our biology predisposes us
to view the world. Given the origin of these predispositions in a blind process merely attuning the cognitive and
perceptual wiring of a particular kind of organism (evolving primates) to a particular set of relevant
environmental threats and opportunities (with regard to survival and reproduction), we should regard those
postulated entities with a healthy dose of scepticism. Indeed, as pointed out, the cognitive predispositions
underlying our manifest worldview are not shaped to enlighten us with the true, objective structures of reality
(intuition-based folk physics and biology are notoriously inaccurate), but to endow us with biological fitness in a
thoroughly species-specific context.
My argument, I want to emphasise, is by no means aimed at a wholesale rejection of scientific realism. I merely
claim that entity realists hinge their realism on the wrong components of a scientific theory. A much more
promising form of selective scientific realism, it seems to me, takes the structural (mathematical/relational)
aspect of a theory to be in accord with the external world. A convincing defence of this position is offered by
Ladyman and Ross (2007), who coined it ontic structural realism (OSR). According to OSR, our best physical
theories tell us only about structure, not entities. In this perspective, our best scientific theories may very well
latch onto real structures in the world, but in doing so postulate models of entities we should not construe literally.
References
Barbour, I. 1974. Myths, models and paradigms: A comparative study in science and religion. New York: Harper &
Row Publishers.
Hacking, I. 1983. Representing and Intervening: Introductory topics in the philosophy of natural science.
Cambridge: Cambridge University Press.
Kellman, P. and Spelke, E. 1983. Perception of partly occluded objects in infancy. Cognitive Psychology, 15:
483-524.
Kosslyn, S. 1980. Image and mind. Cambridge: Harvard University Press.
Krellenstein, M. 1995. Unsolvable problems, visual imagery and explanatory satisfaction. Journal of Mind and
Behavior, 16: 235-253.
Ladyman, J. and Ross, D., with Spurrett, D. and Collier, J. 2007. Every thing must go: Metaphysics naturalized.
Oxford: Oxford University Press.
McGinn, C. 1989. Can we solve the Mind-Body problem? Mind, 98 (391): 349-366.
Miller, A. 2000. Insights of genius: Imagery and creativity in science and art. Cambridge: MIT Press.

33) Rune Nyrup Perspectival realism: where's the perspective in that?


Scientific realism is challenged by cases where two or more models relying on inconsistent assumptions provide
the best accounts of different aspects of the same phenomenon. Examples include discrete and continuous
models of water (Teller 2001, Giere 2006b, Chakravartty 2013), and atomic nucleus models (Morrison 2011).
One solution would be to adopt a form of selective realism, restricting realist commitments to consistent parts of
the models (e.g. Chakravartty 2013). An alternative strategy is suggested by Giere's perspectival realism (2006a,
2006b). Perspectivists are realists in a qualified sense, seeing models as accurate only relative to a perspective.
This promises to be an interesting alternative by combining two features. First perspectivism does not rely on
any metaphysical assumptions about the perspective-independent nature of the phenomena studied by science
(causal properties, ontologically primitive structures, etc.). Second, it nonetheless promises to retain a sense of
realism: models still give us knowledge about the world, though of a perspectival kind.
This paper examines whether perspectivism can deliver on these promises. Giere's motivates these claims
through an analogy between his account of scientific theorising and colour vision. Observers with the same
perceptual system will reliably agree on colour ascriptions, and differences in colour depend systematically on
features of the perceived objects. However, the structure of colour vision differs between species and, Giere
argues, nothing in the world singles out any of these as capturing the true colours. At most, different kinds of
colours vision can be claimed more or less useful for certain purposes. Systems of colour vision are supposed to
be analogous to what Giere calls theoretical perspectives, which are constituted by the central equations of
fundamental scientific theories, e.g. Newton's Laws of Motion. These, Giere holds, do not directly represent
anything in the world, but provide a theoretical vocabulary and guidelines for constructing representational
models, e.g. the harmonic oscillator. Only models represent the world, but are at best the most accurate
representation of some phenomenon given the perspective they are constructed within. Thus, just as an object
can be both completely red and green, given different systems of colour perception, perspectivists can regard
water as both continuous and discrete, relative to the perspective of classical wave mechanics and statistical
mechanics, respectively.
The colour analogy is however unhelpful to perspectivists. In order to afford the sense of realism, it has to
presuppose an independently existing object, with objective features which determine how it appears from
different perspectives. But why we should refuse to theorise about these objective features, and restrict ourselves
to perspective-relative claims? Giere stresses that there are no perfect models (Teller 2001) and that
judgements about which models use to depend on contingent, pragmatic factors (our interests, cognitive abilities,
historical context, etc.). But this at best motivates fallibilism, not wholesale scepticism, about capturing the
objective features of the world (Lipton 2007). Giere remarks that all knowledge comes from one perspective or
another, not from no perspective at all (2006a: 92), but this does not rule out that some perspectives, and the
models they produce, give us a privileged representation of the world capturing its objective features
(Chakravratty 2010, Votsis 2012).
A more promising line in Giere's works starts from his agent-based view of representation. On this view, scientific
representation is fundamentally an activity where an agent uses a model to represent a part of the world for some
set of purposes. I develop this suggestion along the lines of deflationary anti-representationalism (e.g. Price
2013), in a way that allow us to retain the promising features of perspectivism I started out highlighting.
The basic idea is this. Following deflationism, all representational claims, i.e. claims seemingly describing
relations between representations and objective features of the world, are interpreted as mere meta-linguistic
devices. These serve to express commitments to other claims about the world. Thus, 'p is true' is a
way of committing oneself to p, but 'truth' can also serve to commit oneself to sets of propositions that cannot be
stated explicitly, e.g. 'everything this model says about wave-propagation is true'. Representational discourse
bottoms out in claims of the form 'the system W is similar to the model X in regards to …'. These claims are
justified to the extent that they help us achieve the purposes for which we use X to represent W, e.g. by allowing us to
predict and intervene in its behaviour. Whether this is justified still depends partly on the features of W, but also on
pragmatic factors, such as our interests and abilities. Importantly, pragmatic factors here are not mere external
distortions to the reliability of our judgements; they are intrinsic to the very practice of representing. Thus,
although such claims do tell us something about the world, they cannot be presumed to correspond to the
objective features of the world in any substantive sense.
This is however not mere rebranded instrumentalism (Morrison 2011). It is still possible to explain why
particular models are useful for certain purposes, for instance why large collections of discrete particles would
approximate the behaviour of a continuous medium. But this explanation would itself rely on one or more
discrete models, subject to the same anti-representationalist story told above. For any particular model, we can
explain its relation to the world independently of that model. But there can be no global account of which features
of the world successful models correspond to independently of all models.
Two points about the account sketched above. First, I present it only as the most promising development of
Giere's work insofar as it provides an interesting alternative to selective realism; I do not defend its plausibility
beyond this. Second, while it captures some features of perspectivism, it cannot support the strong analogy to
colour perspectives that Giere stresses. We can still talk of how the world appears from the perspective of some
model, but the sharp distinction between representational models and merely instrumentally justified theoretical
perspectives is blurred and indeed unnecessary on this account.
References:
Chakravartty, Anjan (2010): Perspectivism, inconsistent models, and contrastive explanation, Stud. Hist. Phil. Sci.
41: 405-412.
- (2013): Dispositions for Scientific Realism, in: Greco & Groff (eds.), Powers and Capacities in
Philosophy: The New Aristotelianism, Routledge.
Giere, Ronald (2006a): Scientific Perspectivism, University of Chicago Press.
- (2006b): Perspectival Pluralism, in: Kellert, Longino, & Waters (eds.): Scientific Pluralism, University of
Minnesota Press.
- (2009): Scientific Perspectivism: behind the stage door, Stud. Hist. Phil. Sci. 40: 221-223.
Lipton, Peter (2007): The World of Science, Science 316: 834.
Morrison, Margaret (2011): One phenomenon, many models: Inconsistency and Complementarity, Stud. Hist.
Phil. Sci. 42: 342-351.
Price, Huw (2013): Expressivism, Pragmatism and Representationalism, Cambridge UP.
Teller, Paul (2001): Twilight of the Perfect Model Model, Erkenntnis 55: 393-415.
Votsis, Ioannis (2012): Putting Realism in Perspective, Philosophica 84: 85-122.

D [4]: The Semantic View and Scientific Realism

34) Alex Wilson - Voluntarism and Psillos' Causal-Descriptive Theory of Reference


In his Scientific Realism: How Science Tracks Truth, Stathis Psillos attempts to develop a form of scientific realism in
opposition to what he sees as limited or weaker forms of scientific realism. In this paper, I argue that Psillos'
implicit voluntarism undermines his causal-descriptive theory of reference. His theory of reference putatively
picks out natural kinds objectively. However, Psillos' three stances of scientific realism are worded vaguely enough
to allow for voluntarism in regards to what counts as a kind constitutive description of a putative entity.
Moreover, Psillos' ultimate defense of the inference to the best explanation (IBE) is that he is an epistemic
optimist. If Psillos' justification for being a scientific realist in general and a believer in IBE in particular ultimately
relies on voluntarism, then his causal-descriptive theory cannot show how it identifies natural kinds. If his version
of scientific realism cannot identify natural kinds, then Psillos' version of scientific realism is as weak as those of
his scientific realist opponents.
The targets of Psillos version of scientific realism are the entity realism of Nancy Cartwright and Ian Hacking, and
the structural realism of John Worrall. The former claim that the entities putatively manipulated in laboratories
are real, but the scientific theories that describe them are false. The latter maintains that the mathematical
structures of scientific theories, and not their propositional content, explain the success of science. Psillos claims
that scientific realism arises from three stances: the metaphysical stance, the semantic stance, and the epistemic
stance. The metaphysical stance asserts that the world is mind-independent and has a natural kind structure. This
first stance is necessary for scientific realism because it states what it is that scientific realism takes to be real. The
semantic stance states that scientific theories take the descriptions of their intended domain, both observable and
unobservable, as being literal and truth-conditioned. The epistemic stance holds that successful scientific theories
are those that are well-confirmed and are approximately true of the world. Psillos claims that these stances are
assumptions that one must hold if one is to identify as a scientific realist. While he concedes that these stances are
ungrounded assumptions based solely on his intuitions, Psillos never explicitly calls his philosophy of science
voluntarist.
The forms of scientific realism advanced by Cartwright, Hacking, and Worrall are concessions to the pessimistic
meta-induction. Psillos argues that these forms of concessionary scientific realism are untenable. In the case of
Cartwright and Hacking, he contends that to believe in a theoretical entity one must believe some amount of
theory. One could not distinguish theoretical entities, such as protons and electrons, without believing in theory.
Moreover, one would not be rationally justified in believing in the existence of theoretical entities unless one
believed some part of a theory that tells us what, say, a proton is and what it does. In the case of Worrall, Psillos
contends that the required distinction between theory and structure is unsound. To derive the experimental
consequences required to confirm the theory (or any part of it), theoretical interpretation of the theory's
mathematical structure is necessary. To safeguard his version of scientific realism from the challenge of the
pessimistic meta-induction, Psillos develops a theory of reference that he claims picks out natural kinds from
trivial posits in scientific theories. This theory of reference he calls the causal-descriptive theory of reference since
it incorporates features from both causal and descriptive theories of reference. Psillos contends that not all
descriptions of a theoretical posit should be taken to be referential. Rather, he claims that only those descriptions
that are necessary for differentiating kinds from one another are referential. Natural kinds are those objects in the
world that are mind-independent. Thus, those descriptions that are genuinely referential are called kind
constitutive because they distinguish natural kinds from non-natural kinds.
Psillos uses the transition from luminiferous ether to electromagnetic field as an exemplar to illustrate how his
theory of reference picks out natural kinds and preserves reference across theory change. He claims that the
former's constitutive elements (such as its alleged elastic-solid structure) were not carried over into the
description of the latter term's constitution. Hence, Psillos concludes that the constitutive elements were non-
natural while the kinematic and dynamical elements, which were carried over into the description of the
electromagnetic field, are kind constitutive. Moreover, when identifying natural kind terms, Psillos appeals to the
advocates of a successful theory to see which terms they took to be necessary for explaining the success of their
theory.
However, P. Kyle Stanford (2003) and Hasok Chang (2003) have shown that Psillos' treatment of his historical
examples is flawed. Psillos claims that the proponents of the luminiferous ether and of caloric dismissed the
constitutive properties of these posits as heuristic. However, the scientists Psillos cites in support of his causal-
descriptive theory differed significantly from him in what they considered to be kind constitutive descriptions of
these entities. Stanford describes how some scientists considered the constitutive properties of the luminiferous
ether to be kind constitutive properties. Chang illustrates how the term 'caloric' fits the description of a natural
kind term according to Psillos' standards.
While Stanford and Chang challenge Psillos on historical grounds, I challenge him on philosophical grounds. I
contend that if Psillos can dismiss the constitutive elements of the luminiferous ether and caloric as heuristic,
then entity realists and structural realists can regard descriptions of theoretical entities as heuristics too. The
entity realists and structural realists cannot be shown to be mistaken; rather they are merely more cautious than
Psillos. I conclude that Psillos' defense of scientific realism, and of semantic realism in particular, fails because his
voluntarism undermines the objectivity of his philosophy.

35) Alistair Isaac - The Locus of the Realism Question for the Semantic View
The realism question (the question of how our best theories relate to the world) has traditionally been
addressed within the semantic view (SV) through an analysis of the relationship between theory models and data
models. This is because the data model has commonly been accepted as the window through which science
approaches the world. I argue that the locus of the realism question for the founders of SV was not in the theory
of data models, but in the theory of psychological judgments, particularly judgments of similarity. Reviving this
position today has the advantage of suggesting an avenue of reconciliation between the formal strand of SV and
recent work on modelling which self-consciously perceives itself as offering an informal alternative to SV.
Background: The formal program in SV has turned to increasingly weak mathematical analyses of the relation
between theory and data models, e.g. isomorphism (van Fraassen, 1980), homomorphism (Mundy, 1989), and
partial homomorphism (Bueno, French, and Ladyman, 2002). The lack of a general theory of this relationship
(and the perceived inadequacy of a single mathematical formalism for providing one) has motivated some to
reject the formal program of SV as misconceived (Godfrey-Smith, 2006), and has left even those more
sympathetic to the formal approach with serious reservations (Frigg, 2006). An alternative program has focused
on modelling practice in a more informal way, with the pertinent relation between model and world analysed as
one of similarity (Giere, 1988; Weisberg, 2013). This informal approach to model realism often portrays itself as
rectifying an error at the foundation of SV.
Revisiting the Original Program: I examine two founders of SV and argue that, properly construed, their programs
locate the realism question within the foundations of psychology. Patrick Suppes is canonically recognized as the
founder of SV. However, Mary Hesse independently proposed essentially the same conceptual shift as Suppes;
juxtaposing Suppes and Hesse reveals interesting commonalities in their research programs. In particular, both
figures
(i) call for models to serve as a central focus for philosophy of science; and (ii) provide insight into scientific
method via comparisons of models which do not detour through the theory-world relation. In the case of
Suppes, the project was to mathematically compare models of different theories in order to illustrate synchronic
foundational relations between different parts of science; in the case of Hesse, the project was to compare
models of successive stages in a field's development in order to illustrate the logic of diachronic reasoning in
science.
Suppes (1960) influentially brought the question of data models to the attention of philosophers of science.
Furthermore, it is clear that Suppes considers the question of how data models relate to theory models a crucial
one for the foundations of statistics. However, Suppes also argues for an increasingly detailed hierarchy of
models of experiment. In particular, as more and more aspects of the experimental setup are formalized,
eventually a model of the experimenter herself will need to be included. Insofar as the world is approached ever
more closely through this progressive formalization of the experiment, it is in the limit of this process, i.e. the
foundations of psychology, where the realism debate must be fought. Statistics alone will never provide a
complete analysis of empirical adequacy since its starting point, the data, is always infected by the judgments of
the experimenter.
The ultimate moral here is also that of Hesse (1961). She demonstrates that failures of fit between model and
world ("negative analogies") are irrelevant for scientific progress. Rather, it is the "neutral analogies," features of
the model for which fit with the world is unknown, which drive theory change. While only models that exhibit
both positive and neutral analogies are apt for a realism debate, it is not the business of science to generate such
models. Crucially, models with negative analogies can play a productive role in scientific reasoning so long as the
scientist keeps track of internal relations between positive, negative, and neutral analogies. Thus, the ultimate
locus for assessing the theory-world relationship is in the scientist's judgments about the status of different
aspects of the model.
A Synthesis: Both Suppes and Hesse locate the theory-world relation relevant for understanding scientific
method in the mind of the scientist. If, as in Suppes' original program, philosophical questions are to be analysed
in terms of model relations, the relevant models are psychological models of the scientist's ideas (or mental
representations) of the theory and of the world, and the pertinent relation between them is her assessment of
their similarity. For Suppes, these psychological models complete that model of the experiment that most closely
approaches the world; for Hesse, they connect stages in a theory's development, illuminating scientific progress.
The relation of fit between theory and world here is assessed in the judgment of the scientist, just as in the post-
Giere modelling literature; on this view, however, the original SV program does not rest on a mistaken
understanding of models, nor is its formal component misguided. Rather, the formal analysis of the relation
between data and theory models continues to be important in the foundations of statistics, but its importance as
a response to the realism question rests on the role of data and theory models in the reasoning of scientists
themselves. This reasoning may also involve the manipulation and comparison of mental representations that,
while they may be studied formally, may not precisely mirror the mathematical structures of theory as presented
in textbooks or analysed by statisticians.

36) Francesca Pero - The Role of Epistemic Stances within the Semantic View
The semantic view arose in the 1960s as an analysis of the structure of scientific theories. In fifty years it has both
replaced the Syntactic View and established itself as the orthodox view on scientific theories. In this paper an
assessment of the reasons for the success of the semantic view is provided. The guideline for the assessment is
obtained by merging the general stances presented by van Fraassen (1987) and Shapiro (1983) on, respectively,
the task of philosophy of science and how such a task should be accomplished. As it turns out, the role played
within the semantic analysis of theories by epistemic stances, whether realist or antirealist, should be seriously
reconsidered.
Van Fraassen repeatedly presents as the foundational question par excellence of philosophy of science the one
concerning the structure of theories (1980; 1987; 1989). Van Fraassen also suggests that such a question should
be kept distinct from issues concerning theories as objects of epistemic attitudes (such as realism and
anti-realism). As for Shapiro, he claims that philosophy of science cannot be understood as isolated from the practice
of science.
Therefore, any inquiry concerning science should count scientific practice as a crucial element for its
development. Assuming the tenability of these two stances, in this paper I argue in favor of the following claims
as reasons for the success of the semantic view as an analysis of scientific theories: the semantic view (i) provides
a realistic answer to the right question, 'what is a scientific theory?'; (ii) provides such an answer in the correct
manner, i.e., remaining epistemologically neutral; (iii) acknowledges scientific practice as crucial for dealing with
(i) and (ii).
With few notable exceptions (Worrall, 1984; Cartwright et al., 1995; Morrison, 2007; Halvorson, 2012), the
semantic view is widely accepted as the orthodox view on the structure of scientific theories. It is not possible to
identify the semantic approach with a univocal view, since its formulations are many (Suppes, 1967; van
Fraassen, 1970; Giere, 1988; Suppe, 1989; da Costa and French, 1990; van Fraassen, 2008). However, as a
program of analysis whose origin is to be traced back to Beth (1949), although officially identified with the
work by Suppes (1960), the semantic view fixes two tasks to be fulfilled.
analysis of theories. The second task is to provide an analysis of theories as regarded in the actual practice of
science. Both the tasks are accomplished by introducing the notion of models, generally conceived as Tarskian
set-theoretical structures (see Tarski, 1953).
Notwithstanding the wide literature on the different formulations of the semantic view and on its potential
consistency with either realist or anti-realist stances, a systematic analysis both of its significance and of the
reasons for its orthodoxy status is yet to be provided. The aim of this paper is to provide such an analysis and, in
order to do that, I mean to deploy van Fraassen's and Shapiro's insights concerning philosophy of science.
Van Fraassen claims that as the task of any philosophy of X is to make sense of X, so philosophy of science is an
attempt to make sense of science and, elliptically, of scientific theories (1980, p. 663). This task, van Fraassen
adds, is carried out by tackling two questions. The first question concerns what a theory is per se ("internal
question"). This is the question par excellence for the philosophy of science insofar as answering it is preliminary
to, and independent of, tackling issues concerning the epistemic attitude to be endorsed towards the content of a
theory. These issues fall under the second question which indeed concerns theories as objects for epistemic
attitudes ("external question").
Van Fraassen's remarks can be consistently supplemented with Shapiro's view on how a philosophical analysis
should be carried out. Shapiro advocates the necessity for any philosophy of X not to be isolated from the
practice of X (1983, p. 525). He explicitly refers to scientific explanation, mentioning that reducing explanation to
a mere description of a target system does not suffice to justify in virtue of what the abstract description relates
to the object described. Without such a justification it is indeed impossible to account for the explanatory success of
a theory. Only referring to the practice of theory construction allows one to account for how science contributes to
knowledge.
The semantic view evidently deals with the foundational question of philosophy of science. As the
syntactic view did, the semantic view aims at providing a picture of scientific theories. However, unlike the
syntactic view, the semantic view succeeds in providing a realistic picture of theories. The syntactic view has
been driven in its formulation by the (anti-realist) Positivistic credo, according to which a programmatic goal for
the analysis of theories is to provide only a rational reconstruction of the latter, i.e., a reconstruction which omits
the scientists' actions and focuses only on their result (i.e., theories; see Carnap, 1955, p. 42). The semantic view,
on the other hand, preserving its neutrality with respect to any preexistent school of thought, whether realist or
anti-realist, succeeds in providing a realistic image of scientific theories which is obtained by focusing on how
science "really works" (Suppe, 2000, p. 114).
As a final point, I mean to show that the suggested guideline for justifying the success of the semantic view can
also be employed as a demarcation criterion for establishing the tenability of the available formulations of the
semantic view. Using the guideline as a demarcation criterion, I show that the partial-structure approach (da
Costa and French, 1990; Bueno and French, 2011) fails at being semantic for two reasons. Firstly, it violates the
epistemic neutrality presupposed by the semantic view. Secondly, the partial-structures approach falls short of
integrating the image of theories which it provides with the actual scientific practice.

E [5]: Scientific Realism and the Social Sciences

37) David Spurrett - Physicalism as an empirical hypothesis


Physicalism is typically understood as the naturalistically motivated metaphysical thesis that everything is (in
some sense) physical. Difficulties accompany specifying each part of this thesis: the 'is', the 'everything' and the
'physical'. Here I focus on the third, that of saying what counts as physical. A common complaint, variously
articulated, has it that physicalism faces a dilemma. On the one hand, 'physical' could be tied to current physics, in
which case on inductive grounds (given the history of revision in physical theory) physics, and so physicalism, is
false. On the other hand, 'physical' could be tied to some ideal future theory, but in that case physicalism becomes
trivially true.
An additional challenge is articulated in van Fraassen (2002). He endorses a version of the above dilemma, and
concludes that what he calls materialism should be understood as a 'stance', a freely chosen philosophical
orientation not amenable to rational justification. (Those who think that it is rationally justified are, he claims,
guilty of a kind of false consciousness.) Although van Fraassen's account of the dilemma contains nothing
significantly new, his posing of it within a broadly empiricist orientation is a useful way of sharpening some of the
issues, because empiricists tend to be both impressed by science and wary of metaphysics.
Here I offer a semi-novel account of the physical that responds to the dilemma supposedly facing physicalism in a
way sensitive to van Fraassen's empiricist challenge. That is, I defend an empiricism-friendly statement (a) of what
to count as physical, and (b) of a case that the scope of physics thus understood is uniquely co-extensive with the
empirical, a property that is not shared with other domains of inquiry. These points are connected. I argue that it
does justice to a body of historical scientific achievements in several fields (including chemistry and biology, as
well as physics itself) to say that science discovered that there was one science with unrestricted empirical scope,
and this science was, approximately, physics. (I say 'approximately' because some parts of what is institutionally
called physics are recognized to be special sciences.) This discovery was contingent to the extent that alternative
possibilities (in which all sciences were special, or where some science other than physics was not special) have
been taken seriously in the history of science, and abandoned because of scientific discoveries.
If this is correct, it explains the plausibility of some versions of what has come to be known as the via negativa
account of the physical (Spurrett and Papineau 1999). This proposal defuses the dilemma by stipulating that
physics is a causally complete science, and excludes (for example) fundamental mental properties. By being
causally complete, physics thus understood is appropriate for motivating arguments for supervenience, reduction,
identity (and other solutions to the 'is' problem). And by excluding, for example, the fundamentally mental, such
physicalisms are not trivial insofar as future scientific discovery could in principle falsify them. Considerations
from common sense, and the history of science, support excluding fundamental mental properties from the one
science of unrestricted empirical scope (physics), as they do excluding biological properties, chemical properties
and a variety of others.
The form of physicalism I describe here also suggests a way of developing the via negativa response to the alleged
dilemma. It is science itself that identifies (and occasionally revises) the catalogue of special sciences to be taken
seriously, where 'special' is understood in contrast to physics. This catalogue motivates naturalist (and non-
arbitrary) variants of the via negativa (for specific sciences), and helps explain why some particular historical
episodes are distinctively important for making physicalism plausible. Finally, as an empirical thesis of a certain
kind, a response to van Fraassen is natural. Physicalism is a thesis motivated by the history and state of science. It
is good fallibilism to recognise that future evidence may motivate revisions, but bad philosophy to abandon a view
motivated by so much evidence simply because we are not infallible.
References
Spurrett, D. & Papineau, D. (1999) "A note on the completeness of 'physics'", Analysis, 59:25-29.
Van Fraassen, B. (2002) The Empirical Stance, Yale University Press.

F [6]: Anti-Realism

38) Moti Mizrahi - The Problem of Unconceived Objections and Scientific Antirealism
According to Stanford (2006, 20), "the history of scientific inquiry itself offers a straightforward rationale for
thinking that there typically are alternatives to our best theories equally well confirmed by the evidence, even
when we are unable to conceive of them at the time." Based on this Problem of Unconceived Alternatives (PUA),
Stanford advances an inductive argument he calls the New Induction on the History of Science, which Magnus
(2010, 807) reconstructs as follows:
A New Induction on the History of Science
NI-1 The historical record reveals that past scientists typically failed to conceive of alternatives to
their favorite, then-successful theories.
NI-2 So, present scientists fail to conceive of alternatives to their favorite, now- successful
theories.
NI-3 Therefore, we should not believe our present scientific theories.
If Stanford's New Induction on the History of Science were cogent, then it would show that what Psillos (2006,
135) calls the epistemic thesis of scientific realism, i.e., that mature and predictively successful scientific
theories are well-confirmed and approximately true (cf. Psillos 1999, xix), is not worthy of belief.
Now, Mizrahi (2013) argues that there is a problem parallel to the PUA that applies to Western analytic
philosophy. As Mizrahi (2013) writes:
In much the same way that the history of scientific inquiry itself offers a straightforward rationale for thinking
that there typically are alternatives to our best theories equally well confirmed by the evidence, even when we
are unable to conceive of them at the time (Stanford 2006, p. 20), the history of philosophical inquiry offers a
straightforward rationale for thinking that there typically are serious objections to our best philosophical theories,
even when we are unable to conceive of them at the time. In other words, the historical record shows that
philosophers have typically failed to conceive of serious objections to their well-defended philosophical theories.
As the historical record also shows, however, other philosophers subsequently conceived of serious objections to
those well-defended philosophical theories (original emphasis).
If Stanford's PUA provides the basis for a New Induction on the History of Science, then Mizrahi's Problem of
Unconceived Objections (PUO) provides the basis for a New Induction on the History of Philosophy.
Mizrahi's New Induction on the History of Philosophy, then, runs as follows (Mizrahi 2013):
A New Induction on the History of Philosophy
NIP-1 The historical record reveals that past analytic philosophers typically failed to conceive of serious
objections to their favorite, then-defensible theories.
NIP-2 So, present analytic philosophers fail to conceive of serious objections to their favorite, now-defensible
theories.
NIP-3 Therefore, we should not believe our present philosophical theories.
Accordingly, since an alternative scientific theory T2 that accounts for the phenomena just as well as T1
amounts to a serious objection against T1 (Mizrahi 2013), Stanford's PUA is actually a PUO for scientific
theories. The parallels between Stanford's New Induction on the History of Science and
Mizrahi's New Induction on the History of Philosophy can thus be seen as follows:
NI-1 [/NIP-1] The historical record reveals that past scientists [/philosophers] typically failed to conceive of
alternatives [/serious objections] to their favorite, then-successful theories.
NI-2 [/NIP-2] So, present scientists [/philosophers] fail to conceive of alternatives [/serious objections] to their
favorite, now-successful theories.
NI-3 [/NIP-3] Therefore, we should not believe our present scientific [/philosophical] theories.
Given Mizrahi's (2013) New Induction on the History of Philosophy, I argue that scientific antirealists who
endorse Stanford's PUA and his New Induction on the History of Science face the following problem: if
Stanford's New Induction on the History of Science is a cogent argument for scientific antirealism, then
Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical antirealism. If that
is the case, however, then it follows that scientific antirealism is not worthy of belief, since scientific antirealism
is a philosophical theory. More explicitly:
(1) Stanford's New Induction on the History of Science is a cogent argument for scientific
antirealism. [Assumption for reductio]
(2) If Stanford's New Induction on the History of Science is a cogent argument for scientific antirealism, then
Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical antirealism.
[Premise]
(3) Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical
antirealism. [from (1) & (2) by modus ponens]
(4) If Mizrahi's New Induction on the History of Philosophy is a cogent argument for philosophical
antirealism, then, if scientific antirealism is a philosophical theory, we should not believe it. [Premise]
(5) If scientific antirealism is a philosophical theory, we should not believe it. [from
(3) & (4) by modus ponens]
(6) Scientific antirealism is a philosophical theory. [Premise]
(7) We should not believe scientific antirealism. [from (5) & (6) by modus ponens]
(8) Stanford's New Induction on the History of Science is a cogent argument for scientific antirealism but
we should not believe scientific antirealism. [from (1) & (7) by conjunction]
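The chain of inferences (1)-(8) can be checked mechanically. Here is a minimal propositional sketch in Lean 4; the atom names are my own shorthand for the argument's claims, not part of the original abstract:

```lean
-- Propositional atoms (my shorthand):
--   NI  : Stanford's New Induction on Science is a cogent argument
--   NIP : Mizrahi's New Induction on Philosophy is a cogent argument
--   PT  : scientific antirealism is a philosophical theory
--   B   : we should believe scientific antirealism
variable (NI NIP PT B : Prop)

-- Premises (2), (4), (6), together with assumption (1),
-- yield conclusion (8): NI ∧ ¬B.
example (h1 : NI) (h2 : NI → NIP) (h4 : NIP → PT → ¬B) (h6 : PT) :
    NI ∧ ¬B :=
  ⟨h1, h4 (h2 h1) h6⟩
```

The proof term simply chains the two applications of modus ponens, (3) and (5), and conjoins the result with (1), mirroring steps (1)-(8) exactly.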
Of course, scientific antirealists who endorse Stanford's New Induction on the History of Science cannot
accept (8), since (8) says that they should not believe the conclusion of a cogent argument for their own
position.
References

Magnus, P. D. (2010). Inductions, red herrings, and the best explanation for the mixed record of science. British
Journal for the Philosophy of Science, 61, 803-819.
Mizrahi, M. (2013b). The problem of unconceived objections. Argumentation. DOI 10.1007/s10503-013-9305-z.
Psillos, S. (1999). Scientific Realism: How Science Tracks Truth. London: Routledge.
Psillos, S. (2006). Thinking about the ultimate argument for realism. In C. Cheyne and J. Worrall (eds.),
Rationality & Reality: Essays in Honour of Alan Musgrave (pp. 133-156). Dordrecht: Springer.
Stanford, P. K. (2006). Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. New
York: Oxford University Press.

39) Emma Ruttkamp-Bloem – The Possibility of an Epistemic Realism


In this paper the possibility of rescuing realism by suggesting an epistemic account of truth, seemingly against all
metaphysical realist inclinations, is explored. A novel mix of what traditionally would have been called anti-realism, and the
sentiment behind current variations of selective realism, is suggested to be the best way forward for realists.
The anti-realism aspect of what is suggested here necessarily comes in because of traditional realist attitudes towards
epistemic accounts of truth (e.g. compare Niiniluoto 1999). The account proposed here is one of truth as warranted belief,
based loosely on Peirce's view of truth as method. The selective realism strand of the mix is the result of the suggestion that
components of theories that can absorb consequences of science-theory interaction (i.e. results of testing and application of
theories), while still functioning in the theory, are the components determining the stance to be taken towards the theories
involved. The selection of such components rests on reference as the mechanism that unveils the truth about reality, in the
following sense: Reference claims are not existential claims that can be true or false, but are rather epistemic tracking devices,
recording various encounters between aspects of reality and scientific theory. Reference claims contain descriptions of causal
properties that can be adapted according to the outcome of empirical (reality)-theoretical (science) encounters, while
remaining the properties in virtue of which postulated entities continue to play the same causal role ascribed to them before
the properties were adapted. And it is such evolutionary progressive descriptions of these (selected) properties that reflect
the quality of evidence for believing theories at a given time.
This mix of anti-realism and selective realism is developed into an epistemic, non-standard version of realism, entitled
'naturalised realism'. The argument for naturalised realism rests on the following ten tenets: Science is about an independently
existing reality, and realism is about science such that its claims as a philosophy of science are compatible with the nature and
the history of science; the driver of scientific progress is revision; scientific progress is not linear or convergent; the unit of
appraisal for naturalised realism is a consiliated network of theories; continuity in science is a meta-issue which can be found
in terms of methodological continuity; naturalised realism is an epistemology of science, and the epistemological framework for
naturalised realism is fallibilism; the criterion for determining (realist) stances towards the status of the content of science is
the quality of evolutionary progressive interaction between the experimental and theoretical levels of science; an epistemic
account of truth is suggested which defines truth as warranted assertability and is unpacked in terms of truth as method
(where the method at issue is the experimental method); relations of historied reference are offered as epistemological
scorekeepers of what is revealed about nature as true (i.e. warrantedly assertable) through the course of science; and finally,
the traditional scientific realism debate is collapsed into a continuum of stances towards the status of scientific theories.
In the first section of the paper it is argued that the possibilities for traditional realism being validated by contemporary
science have now basically disappeared. In addition, in considering some of the classical arguments against traditional
metaphysical realism, and the responses open to the metaphysical realist, an argument is set up for considering the classical
dichotomy between realism and anti-realism to dissolve into a continuum of possible stances towards theories, depending on
the quality of evidence available at various stages of the investigation of a certain aspect of reality. It is
explained that it is in this sense that the version of realism offered here is naturalised, as the goal is to devise a form of
realism that mimics the processes and course of science as much as possible in order to find the best way of evaluating the
results of such a fluid, multi-dimensional, ever-evolving enterprise as science is. As evolutionary progressiveness is suggested
as the only effective criterion for realists to use (rather than no-miracles arguments in terms of truth and success), the
second section of the paper contains a definition and discussion of this concept.
In the third section of the paper an epistemic account of truth is unpacked and the link between reference and truth is
explored. It is traditionally claimed that true theories refer, meaning that true claims about unobservable entities imply
these entities really exist. The naturalised realist wants to dissolve the link between truth and existence, and thus has to
define truth in a non-semantic manner, as existence claims are not seen as either true or false in any semantic sense. Rather
than giving existence claims a truth value, in naturalised realism the focus is on using such referential claims to monitor how
the self-correcting methods of science directly impact on scientific progress, in the sense that relations of reference are
epistemic score keeping devices of what science reveals on the grounds of evolutionary progressive experimental and
theoretical combinations, rather than indicators of science proving ontological existence of the entities it postulates.
Thus, rather than claiming with a Niiniluotian critical realist that theoretical statements in science may be 'strictly
speaking false but nevertheless truthlike or approximately true' (Niiniluoto 1999, 12-13), the naturalised realist claims that
theoretical statements may be true if they are warranted given the sum of knowledge about the particular aspect of reality
under investigation at the time; and whether or not theoretical statements are warranted, depends on the quality of
experimental evidence and of theoretical evolutionary progressiveness available at the time. Truth is assembled as science
progresses through revisions and confirmations. As science interacts with the world, the justification for our beliefs in its
theories deepens according to the evolutionary progressiveness of such theories.
In a naturalised realist account truth then becomes (the result of) a dynamic process of relation-building rather than a
(static) property of sentences, because what is of interest to naturalised realists is the mechanism that effects the unveiling of
truths to various degrees - and that tells us better of what is that it is - at various times. And here it is argued that this
mechanism is the evolutionary progressiveness of theories as portrayed or captured by relations of reference supervening on
it. The naturalised realist claims that in order to make any kind of meaningful contribution to views on truth in science, what is
needed is practical proof that the theories employed by science somehow hook onto the world and the suggestion here is
that this can be determined by checking theory-world interaction in the evolutionary progressive sense of the word and
ultimately by pragmatising semantics (compare Hintikka (1975)) such that it collapses into epistemology.
Science makes progress not because its different revelations are cumulative or converging to the true image of the system
at issue (contrary to Peirce's (1955) depiction of truth in 'How to Make Our Ideas Clear'), but rather because of a much more
subtle thread that joins the outcomes of different investigations of the same system together. This thread is the self-corrective

method of science (here in line with Peirce's (1955) warning in 'Pragmatism in Retrospect' that there is no evidence that
everything will converge to a given result). This method crystallises in what is learned through interaction between theories and
experiments as the result of testing and subsequent revision, and effects a kind of meta-continuity of science's processes which
is much more meaningful than a notion of (static) continuity in terms of accumulation in the trivial sense that theory Tn+1
retains all that theory Tn can explain and predict.
In the final section of the paper the potential of naturalised realism to satisfy the demand recently made by writers such as
Ruetsche (2011) and Chang (2012) that realism must be pragmatic and pluralist is explored. It is concluded that the only way
in which to be a realist, given the true nature and current content of science, is not to be one!
Bibliography
Chang, H. (2012) Is Water H2O? Evidence, Pluralism and Realism. Dordrecht, Springer.
Hintikka, J. (1975) The Intentions of Intentionality. Dordrecht, D. Reidel.
Niiniluoto, I. (1999) Critical Scientific Realism. Oxford, Oxford University Press.
Peirce, C.S. (1955). How to Make Our Ideas Clear. In Justus Buchler (Ed.), Philosophical Writings of Peirce. New York: Dover
Publications.
Peirce, C.S. (1955). Pragmatism in Retrospect. In Justus Buchler (Ed.), Philosophical Writings of Peirce. New York: Dover
Publications.
Ruetsche, L. (2011). Interpreting Quantum Theories. Oxford, Oxford University Press.

40) Yafeng Shan – What entities exist


Many scientific realists believe in the existence of some entities (e.g. electrons, atoms). Although they differ over
which specific entities exist and why, they all accept a minimal account of scientific entity realism.
The central doctrine is that there are some scientifically studied entities out there. The so-called
'scientifically studied entities' include all the entities which are either postulated by some scientific theories or
experimentally manipulated in scientific practice. There are two reasons why I use the term 'scientifically
studied entities' rather than more familiar ones like 'theoretical entities', 'postulated entities' and 'unobservable
entities'. First, I try to avoid notorious demarcation problems, like how to make a clear distinction between
theoretical and observable entities. Second, I do not want to identify all entities in a theory-biased way. Not all entities
studied by scientists are postulated or well defined by theory.
There are two well-known arguments for this doctrine. The first is the no-miracles argument (NMA). According to the
NMA, the central terms of our best scientific theory genuinely refer because of the success of that theory (the
success criterion of entity existence).
The second is the experimental argument (EA). According to the EA, the reality of an entity is confirmed when we
can understand its causal properties by setting up devices that use those causal properties to reliably interfere in
other, more hypothetical parts of nature (the manipulability criterion).
These are two realist criteria for evaluating whether some entities exist, although not all scientific entity realists
accept both. Nor do I suggest that, for scientific entity realists, an entity exists if and only if these two criteria are
fulfilled. However, it is a fact that the success criterion and the manipulability criterion are among the most
famous and widely accepted entity realist criteria.
The electron is a favourite example for scientific entity realists. For them, it seems uncontroversial that electrons
exist, since either of the two criteria is fulfilled. However, it is surprising to observe that the gene, a very important
biological concept, has not been widely discussed in this context. So a natural question occurs: do genes exist? Or we
may ask it more specifically. Is the term 'gene' a central term in our current best scientific theory? Are genes
experimentally manipulable?
The answer to the first question is a definite yes! Since the 1920s, the term 'gene' has been one of the central terms
in the biological sciences.
The answer to the second question is also positive. Genes are widely manipulated in various experiments. A
straightforward example is the highly successful transgenic technology. Since genes definitely fulfil the two
realist criteria, it is easy for scientific entity realists to conclude that genes exist.
However, for sceptics, genes are dubious. Scepticism arises from the various inconsistent and complex usages of
the term 'gene'. The concept of the gene is context-sensitive. There are multiple co-existing conceptions of the gene in
the contemporary biological sciences. Roughly speaking, there are three distinct concepts of the gene: instrumental,
nominal and postgenomic. Instrumental genes are factors in a model of the transmission of a heritable
phenotype, or in a population-genetic model of a changing population. The instrumental concept of the gene is still
widely employed nowadays in fields like evolutionary genetics and medical genetics.
The nominal gene is a specific DNA sequence beginning with a start codon and ending with a stop codon. It is a
practical tool, allowing stable communication between bioscientists in a wide range of fields, grounded in
well-defined sequences of nucleotides.
The post-genomic gene is the image of the gene product in the DNA, no matter how fractured and distributed that
image might be, and no matter how much supplementation the transcribed sequences require to determine the
sequence of elements in the product. According to this conception, the relationship between DNA and gene product

is indirect. The gene is a flexible entity.


It should be noted that these three conceptions are fundamentally distinct. The instrumental gene cannot be
reduced to the nominal gene, while the post-genomic gene challenges the conventional assumptions about the
relationship between genome structure and function, and between genotype and phenotype, which is crucial to the
instrumental gene. Moreover, there are irreconcilable tensions between the instrumental, nominal and post-
genomic conceptions of the gene. For example, the critical property of the instrumental gene is being a unit of
recombination: a segment of chromosome which regularly recombines with other segments in meiosis and which
is short enough to survive episodes of meiosis for selection to act upon it as a unit, while it is not always the unit of
genetic function, that is, the nominal gene.
Given these different conceptions of the gene, does this mean that the question 'do genes exist?' should be
reformulated as the following three questions:
Do instrumental genes exist? Do nominal genes exist? Do post-genomic genes exist?
If so, there is no problem about the ontology of genes any more. The question 'do genes exist?' is thus misleading.
It is implausible to ask this question at all! Similarly, it is untenable to ask 'do tubes exist?' without specifying
whether 'tubes' in the question refers to the vehicles of the London Underground or the experimental tubes in the
laboratory.
Therefore, the answer to the question about the ontology of genes is not as straightforward as some realists might
think. It is sufficient to conclude that it is controversial whether genes exist.
What is more, there is a dilemma for the realists who contend for the existence of certain scientifically studied entities.
On the one hand, if genes do exist, it seems that neither of the two criteria is a sufficient condition. Even if genes fulfil the
two realist criteria, it is still controversial whether genes exist. On the other hand, if genes do not exist, then the
realists have to make a further distinction between entities like electrons and those like genes, both of which
fulfil the two realist criteria.
In addition, there is another serious problem for scientific entity realism. If the sceptics are right, there is an
irreconcilable tension between the multiple conceptions of the gene. There is no unitary account of the gene. Then 'do
genes exist?' becomes a pseudo-question. Thus, genes cannot be the subject of the ontological problem. In other
words, it is evident that there must be a distinction between entities like electrons, which can be the subject of the
realism/antirealism debate, and entities which cannot be the subject of the realism/antirealism debate.
Unfortunately, neither realist criterion prohibits asking questions like 'do genes exist?'. Therefore, I argue that
this is a serious question that the realist must address: what entities can be the subject of the ontological problem?

41) Daniel Kodaj – From conventionalism to phenomenalism


The paper argues that conventionalist1 puzzles about spacetime are implicit arguments for phenomenalism.
Roughly, the claim is that the conventionalist puzzles arise iff we presuppose that spacetime can be regarded
'from outside', stepping out of intraworld spatial experience (I'll try to make this idea clearer as we go along).
Conventionalists solve the puzzles by arguing that the choice between the empirically equivalent alternatives
that arise is a matter of convention, while realists deploy inference to the best explanation and various
piecemeal arguments against specific conventionalist scenarios. I'll argue that phenomenalism gives a simple
and unified solution to all relevant puzzles.
The paper discusses six conventionalist puzzles. The first one, (P1), comes from Reichenbach via Putnam:
Reichenbach used to begin his lectures on the philosophy of space and time in a way which brought an
air of paradox to the subject. He would take two objects of markedly different size, say an ash tray and
a table, situated in different parts of the room, and ask the students 'How do you know that one is
bigger than the other?' (Putnam 1975: 155)
As Putnam tells us, Reichenbach rebutted common-sense answers to this question by claiming that they all
presuppose some arbitrary coordinative principle such as the thesis that objects are not uniformly distorted
as we move about, that light rays travel in straight lines etc. The five other puzzles are:
(P2) Helmholtz's puzzle about a race of beings who live on the surface of a convex mirror but believe
they live in a flat space (Helmholtz 1881).
(P3) Poincaré's puzzle about a race of beings who live on a disc but believe that they live on an infinite
plane because they slow down when they approach the edge (Poincaré 1952).
(P4) Reichenbach's puzzle about a race of beings who live in the 2D reflection of a 3D surface and think
that their world contains a dome (Reichenbach 1958: 11–13).
1 By 'conventionalism', I mean the kind of underdetermination thesis that is often associated with
Reichenbach. As Ben-Menahem (2006: Ch.1) points out, there is another brand of conventionalism, which is a
thesis about necessary truth. I emphasize that my paper has nothing to do with this second brand of
conventionalism.
(P5) The conventionality of the one-way velocity of light in special relativity.
(P6) Field vs. geometric interpretations of general relativity.

These puzzles differ in dialectical value. (P1)–(P4) are fanciful scenarios that are not directly relevant to any
serious physical theory, but these puzzles are nonetheless useful because they make the conventionalist point
very vivid. (P5) and (P6), on the other hand, are directly relevant to the interpretation of real physics, but they
are extremely technical. My strategy will be to use the easier puzzles to get the basic idea through, then I'll
turn to (P5) and (P6).
The gist of my argument is that these puzzles arise if and only if we reify spacetime into an object which can be
looked at from outside, either literally from another spatiotemporal vantage point, as in (P2) and (P4), or by
extending a metric concept related to intraworld spatial experience to states of affairs that are not accessible
in the context of intraworld spatial experience, as in (P1), (P3), (P5), and (P6). To portray (P1)–(P6) as
variations on a single underlying theme, I introduce the idea of internal and external metrics:
The internal metric of world W:
IM_W^P(x, S) = a number that expresses the magnitude of x's spatiotemporal property P (in some standard
unit) as measured by a normal observer in situation S (where S is a complex state of affairs that describes
the environment of x and the observer).
The external metric of world W:
EM_W^P(x, S) = a number that expresses x's spatiotemporal property P (in some standard unit) as x is in
situation S.
I argue that (P1)–(P6) are all based on the presupposition that for some spatiotemporal property P, there are
possible worlds with the same internal P-metric as the actual internal P-metric but with an external metric
that is deviant in the following sense:
Deviant external metrics:
EM_W^P is deviant (i.e. EM_W^P involves deviant excess structure) =df
(1) For some x, S and S*:
(i) S* is a proper part of S,
(ii) IM_W^P(x, S) = k · IM_W^P(x, S*) (k is a real number),
(iii) EM_W^P(x, S) ≠ k · EM_W^P(x, S*);
or (2) for some S, S is in the domain of EM_W^P and S isn't in the domain of IM_W^P.
For example, in (P3), we have an internal metric according to which two intervals can appear to have the same
length while being, according to the external metric, of different length (a type (1) deviant metric). In (P2), we
have an internal metric whose domain does not include the situation of measuring an object on the surface of
the mirror from outside the mirror (a type (2) deviant metric).
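The type (1) pattern can be made concrete with a toy calculation loosely modelled on Poincaré's disc world (P3). The contraction law and all numbers below are hypothetical illustrations of my own, not the paper's formalism:

```python
# A toy numerical illustration of a type-(1) deviant external metric, loosely
# modelled on Poincare's disc world (P3): measuring rods are assumed to
# contract by a factor (R^2 - r^2) / R^2 as they approach the edge, so
# inhabitants measure every rod as having the same internal length.

R = 1.0  # disc radius in some external unit (assumed)

def external_length(r: float, internal_units: float = 1.0) -> float:
    """External length of a rod that inhabitants measure as `internal_units`,
    located at radial distance r from the centre of the disc."""
    contraction = (R**2 - r**2) / R**2
    return internal_units * contraction

# Two rods that the internal metric treats as congruent (both measure 1 unit)...
inner = external_length(0.0)  # rod at the centre
outer = external_length(0.9)  # rod near the edge, externally ~0.19 units

# ...differ in external length, so the internal and external metrics come
# apart in the way the type-(1) clause of the schema describes.
print(inner, outer)
```

The example only dramatizes the schema: inhabitants who lack access to the external metric can never detect the deviance from inside, which is the conventionalist's point.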
The paper shows that all puzzles fit this schema, and I relate the schema to technical discussions of (P5)
(Malament 1977) and (P6) (Ben-Menahem 2006: 85–127). I argue that the conventionalist puzzles only arise
if we presuppose that there are external metrics, and I claim that if we deny the possibility of external metrics,
then we are led to phenomenalism. I relate this idea to John Foster's (1982) arguments against realism about
spacetime.
References
Ben-Menahem, Yemima (2006): Conventionalism. Cambridge University Press.
Foster, John (1982): The Case for Idealism. Routledge & Kegan Paul.
Helmholtz, Hermann (1881): On the origin and significance of geometrical axioms. In his Popular Lectures,
Longman, Green, and Co., 27–72.
Malament, David (1977): Causal theories of time and the conventionality of simultaneity. Noûs 11, 293–300.
Poincaré, Henri (1952): Science and Hypothesis. Dover Publications.
Putnam, Hilary (1975): The refutation of conventionalism. In his Mind, Language, and Reality, Cambridge
University Press, 153–191.
Reichenbach, Hans (1958): The Philosophy of Space & Time. Dover Publications.

42) Fabio Sterpetti – Optimality models and scientific realism


Optimality models (OM) have recently received much attention from philosophers (Rice 2013, 2012;
Potochnik 2010, 2009; Baron 2013). In fact, OM have been used by platonist philosophers of mathematics
to try to establish the existence of mathematical entities, and by scientific realist philosophers of
science to try to secure the genuineness of non-causal scientific explanations.
This relationship between Mathematical Platonism (MP) and Scientific Realism (SR) will be analyzed
with regard to the OM.
After describing the salient features of OM,1 it will be examined whether OM can be considered to succeed in
being an example of non-causal scientific explanation which is acceptable from both a platonist and a
realist point of view. This amounts to stating whether SR can safely be conjoined with MP.
The answer will be in the negative, because what should allow the realist to accept non-causal

explanations as scientific explanations will be shown to be either incompatible with a realist position,
or inadequate to justify such acceptance in a non-circular way.
1 Following Rice 2013, pp. 3-4, OM can be briefly described as follows: 'Optimality models are
distinguished by their use of a mathematical technique called Optimization Theory, whose goal is to
identify which values of some control variable(s) will optimize the value of some design variable(s) in light
of some design constraints (…). An optimality model specifies a constrained set of possible strategies
known as the strategy set. The design variables to be optimized constitute the model's currency. An
optimality model also specifies what it means to optimize these design variables (e.g. should a design
variable be maximized or minimized). This is referred to as the model's optimization criterion. Once the
strategy set and optimization criterion have been identified, an optimality model describes an objective
function, which connects each possible strategy to values of the design variable(s) to be optimized. (…)
The strategy that optimizes the model's criterion, in light of various constraints and tradeoffs, is deemed
the optimal strategy. By mathematically representing the important constraints and tradeoffs, an
optimality model can demonstrate why a particular strategy is the best available solution.'
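The components Rice enumerates (strategy set, currency, optimization criterion, objective function) can be illustrated with a minimal sketch; the foraging scenario, the gain function, and all numbers below are hypothetical examples of my own, not taken from Rice:

```python
# A minimal sketch of the optimality-model scheme described above:
# a strategy set, a currency (the design variable), an optimization
# criterion, and an objective function mapping each strategy to a
# currency value; the strategy optimizing the criterion is "optimal".

# Strategy set: candidate patch-residence times for a forager (hypothetical).
strategy_set = [1.0, 2.0, 3.0, 4.0, 5.0]

TRAVEL_TIME = 3.0  # a design constraint (hypothetical)

def objective(t: float) -> float:
    """Objective function: net intake rate (the model's currency) for
    residence time t, with diminishing returns g(t) = t / (1 + t)."""
    gain = t / (1.0 + t)
    return gain / (TRAVEL_TIME + t)

# Optimization criterion: maximize the currency over the strategy set.
optimal_strategy = max(strategy_set, key=objective)
print(optimal_strategy)
```

The point of the sketch is structural, not empirical: by representing the constraint (travel time) and the tradeoff (diminishing returns) mathematically, the model shows why one strategy is the best available solution, which is exactly the explanatory role at issue in the debate below.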

Optimality models and Mathematical Platonism


MP's main argument, i.e. the Indispensability Argument (IA), rests on the assumption that the
indispensable role of mathematics in scientific theories justifies the platonist's realist claim about the
ontological status of mathematical entities (Bangu 2008). So, the main argument for MP is in some way
parasitic on that for SR, i.e. the No Miracle Argument (NMA) (Psillos 1999).
Platonists put pressure on realists because their main argument mirrors and assumes the realists' argument in
such a way that realists cannot avoid confronting the issue of the nature of mathematics.
Many authors have criticized the classical IA, and have argued that mathematics has to be explanatorily
indispensable in order to be genuinely indispensable, not just holistically confirmed by the
empirical success of the scientific theories in which it appears (Field 1989; Melia 2002; Baker 2009).
So, given that platonists conceive of mathematical entities as mind-independent abstract entities, and
abstract entities are normally understood as non-spatiotemporally located and causally inert
(Balaguer 2009), it becomes very important for the platonists to show that there are scientific
explanations: 1) which account for natural phenomena even if they are non-causal; 2) in which the
explanatorily indispensable role is played by mathematics.
Some authors claim that mathematics is indispensable even in this stronger way, i.e. that there are
genuinely indispensable mathematical explanations of physical phenomena (Baker 2009; Colyvan 2001;
Batterman 2010).
The problem for the platonists is that although a number of examples of such mathematical explanations
have been given, they have until recently lacked a general strategy for predicting the prevalence of this
kind of explanation in science, but if OM can be considered as a general class of mathematical
explanations of natural phenomena, mathematical explanations are likely to be commonplace because of
the centrality of optimality models to science (Baron 2013, pp. 2-3).
So, given that OM are widely used in science, if the explanations deriving from the use of OM could be
interpreted as mathematical explanations of natural phenomena, platonists would have solved the
problem of demonstrating the relevance of the cases in which mathematics is explanatory indispensable
in science.

Optimality models and Scientific Realism


Scientific realists have been interested in OM because they have been interested in showing that non-causal
explanations are a genuine and acceptable kind of scientific explanation for a realist.
Realists have faced many objections to their classical position, i.e. the idea that we can derive the truth of
scientific theories from their success, and that truth means correspondence to the world; so they have
become aware of the fact that some important features of scientific theorizing cannot be accounted
for in a traditional realist way.
One of such features is the role of models in science. In fact: 1) models are normally intended as not being
interpretable as literally true, given that they involve at least some form of idealization (Bokulich 2011);
2) models are generally conceived of as mathematical models, and so the problem of the relation between
models and the world amounts to the problem of the relation between mathematics and the world
(Bueno 2011).
Moreover, many realists support the semantic view of theories. This makes the problem of the nature of
the models central for the realists. In fact, if, following the semantic view, theories are taken to be
collections of models, and models are considered as abstract objects, then 'theories traffic in abstract
entities much more widely than is often assumed' (Psillos 2011, p. 4). The semantic view clearly equates

the abstractness of mathematical entities with that of scientific theories. For example, Suppes states that
'the meaning of the concept of model is the same in mathematics and the empirical sciences' (Suppes
1960, p. 12). If the realist wants to claim that the scientific theories are true, and scientific theories are
intended as classes of mathematical models, then it seems that the realist has to embrace MP.
So, given that OM are a kind of mathematical model widely accepted in science, if the explanations
deriving from the use of OM could be interpreted as non-causal scientific explanations, realists would
have solved the problem of demonstrating that their conception of reality is broad enough to be
compatible with the scientific practice.
Difficulties for the realist
The problem for the realist is that she has now to account for the applicability of mathematics in a way
compatible with SR.
It seems there are two main possibilities for the realist: 1) to show that relying on explanatoriness
considerations is a valid tool for assessing the validity of a scientific explanation (Glymour 1980); 2) to show
that the deepest scientific explanations are not causal, that they are mathematical, and that they can be
successfully trusted because mathematics is able to describe the modal structure of the universe (Lange
2013).
These two options will be analyzed in general, and then specifically tested in the context of the OM, and
will be shown to be deeply related, and both inadequate.
In fact, it can be shown that explanatoriness considerations cannot be equated with confirmation in order to
assess a scientific theory. The problem is that the centrality that empirical success seems to have for
the realists' (and even the platonists') arguments cannot be coherently accounted for any more in the
realist frame if non-causal explanations and explanatoriness considerations are accepted in such a frame
(Sober 1999; Zamora Bonilla 2003).
The idea that mathematics can tell us what is necessary rests on an assumption which is Pythagorean in
character, namely that the modal structure of the world is mathematical, and so that mathematics can
reveal such structure to us (Lange 2013). But this is exactly what the realist should demonstrate, not
just assume, in order to justify the acceptability of a mathematical (non-causal) explanation as a scientific
explanation.
The problem for such a position is that it cannot account for this supposed capability of
mathematics in a naturalistic way. Indeed, there seem to be three available ways to justify this capacity
of mathematics: 1) from the success of previously used mathematical models, but this kind of
inference would amount to a form of the NMA and would be prone to the same objections; 2) from a
non-naturalistic point of view, but this option should not be palatable to those scientific realists who also
endorse some sort of naturalism;
3) from a naturalistic point of view, relying on an evolutionary account of the human abilities
that give rise to mathematics, but such a position can be shown to be circular.
So, it seems reasonable to conclude that there is no easy way for a scientific realist to use OM to
show that she can embrace MP and make her position coherent.
References
Baker, A. (2009): Mathematical Explanation in Science. The British Journal for the Philosophy of Science
60, 611–633
Balaguer, M. (2009): Realism and anti-realism in mathematics. In: Gabbay, D., Thagard, P., Woods, J. (eds.)
Handbook of the Philosophy of Science, vol. 4, Philosophy of Mathematics, 117–151. Elsevier, Amsterdam
Bangu, S.I. (2008): Inference to the best explanation and mathematical realism. Synthese 160, 13–20
Baron, S. (2013): Optimisation and mathematical explanation: doing the Lévy Walk. Synthese, DOI
10.1007/s11229-013-0284-2
Batterman, R.W. (2010): On the Explanatory Role of Mathematics in Empirical Science. The British
Journal for the Philosophy of Science 61, 1–25
Bokulich, A. (2011): How scientific models can explain. Synthese 180, 33–45
Bueno, O. (2011): Structural Empiricism, Again. In: Bokulich, P., Bokulich, A. (eds.) Scientific
Structuralism, 81–103. Springer, Dordrecht
Colyvan, M. (2001): The Indispensability of Mathematics. Oxford University Press, Oxford
Field, H. (1989): Realism, Mathematics and Modality. Blackwell, Oxford
Glymour, C. (1980): Explanations, Tests, Unity and Necessity. Noûs 14, 31–50
Lange, M. (2013): What Makes a Scientific Explanation Distinctively Mathematical? The British Journal
for the Philosophy of Science 64, 485–511
Melia, J. (2002): Response to Colyvan. Mind 111, 75–79
Potochnik, A. (2010): Explanatory Independence and Epistemic Interdependence: A Case Study of the
Optimality Approach. The British Journal for the Philosophy of Science 61, 213–233
Potochnik, A. (2009): Optimality modeling in a suboptimal world. Biology and Philosophy 24, 183–197
Psillos, S. (2011): Living with the abstract: realism and models. Synthese 180, 3–17
Psillos, S. (1999): Scientific Realism. Routledge, New York
Rice, C. (2013): Moving Beyond Causes: Optimality Models and Scientific Explanation. Noûs, DOI
10.1111/nous.12042
Rice, C. (2012): Optimality explanations: a plea for an alternative approach. Biology and Philosophy 27,
685–703
Sober, E. (1999): Testability. Proceedings and Addresses of the American Philosophical Association 73,
47–76
Suppes, P. (1960): A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical
Sciences. Synthese 12, 287–301
Zamora Bonilla, J.P. (2003): Meaning and Testability in the Structuralist Theory of Science. Erkenntnis 59,
47–76