
G. Betz

Underdetermination, Model-ensembles and Surprises —


On the Epistemology of Scenario-analysis in Climatology

Stuttgart, February 2008

Institute of Philosophy, University of Stuttgart,


Seidenstraße 36, 70174 Stuttgart/ Germany
gregor.betz@philo.uni-stuttgart.de
www.uni-stuttgart.de/philo


Preprint Series Issue No. 2009-2


Stuttgart Research Centre for Simulation Technology (SRC SimTech)
SimTech – Cluster of Excellence
Pfaffenwaldring 7a
70569 Stuttgart
publications@simtech.uni-stuttgart.de
www.simtech.uni-stuttgart.de
Underdetermination, Model-ensembles and
Surprises — On the Epistemology of
Scenario-analysis in Climatology
(The Epistemology of Scenario-analysis in
Climatology)

February 8, 2008

Abstract
As climate policy decisions are decisions under uncertainty, being based on a range of future climate change scenarios, it becomes a crucial question how to set up this scenario range. Failing to comply with the precautionary principle, the current scenario methodology of the Intergovernmental Panel on Climate Change (IPCC) seems to violate international environmental law, in particular a provision of the United Nations Framework Convention on Climate Change. To place the IPCC reports on a sound methodological basis would imply that climate simulations which are based on complex climate models had, in stark contrast to their current hegemony, hardly an epistemic role to play in climate scenario analysis at all. Their main function might actually consist in ‘foreseeing future ozone-holes’.
In order to argue for these theses, I explain, first of all, the plurality of climate models used in climate science by the failure to avoid the problem of underdetermination. As a consequence, climate simulation results have to be interpreted as modal sentences, stating what is possibly true of our climate system. This indicates that climate policy decisions are decisions under uncertainty. Two general methodological principles which may guide the construction of the scenario range are formulated and contrasted with each other: modal inductivism and modal falsificationism. I argue that modal inductivism, being the methodology implicitly underlying the IPCC reports, is severely flawed. Modal falsificationism, representing the sound alternative, would in turn require a complete overhaul of current IPCC practice.

1 Introduction

Antarctic ice cores reveal that atmospheric CO2-concentrations have never been as high as today during the past 700,000 years (Siegenthaler et al., 2005). This changes the earth’s radiative energy budget and, according to the consensus of climatologists, will trigger further anthropogenic global warming (IPCC, 2001). What is much less certain is the overall extent of this warming, its regional distribution, as well as further consequences such as precipitation changes, sea level rise or changes of ocean currents. In order to explore these consequences, climatologists construct climate models whose results serve in turn as the basis for climate policy decisions.1 There is, however, not a single climate model that is used to run climate simulations but a plurality of models; accordingly, the third assessment report of the Intergovernmental Panel on Climate Change (IPCC)—the world’s most important climate science organisation, bringing together top researchers from different fields and maintaining close ties with the UN—derives its results from 31 climate models (see figure 3 below). Hence the question arises how climate policy can be based on the actually conflicting results of many different models—and why this plurality of models prevails in the first place. These are the questions I shall address in this paper.
The dialectical structure of my argumentation is, as depicted in figure 1,
composed of three parts. The first part contains the argumentation in favour
of the pivotal thesis, namely the thesis of model-underdetermination. The
second part comprises arguments that represent methodological approaches
in the philosophy of science which avoid the problem of underdetermination. I shall only briefly sketch these arguments and indicate why these
approaches, while yielding fruitful insights and guidelines in other disci-
plines, are not applicable in climatology. The third part, being the main
part of my argumentation, explores the methodological implications of the
underdetermination thesis in climatology. Introducing the two alternative
methodological principles of modal inductivism and modal falsificationism,
I shall argue that the current IPCC practice is based on the problematic
principle of modal inductivism and propose that scientific investigations of
future climate change shall rather be based on modal falsificationism.

Figure 1: Argument map depicting the dialectical structure of the reasoning presented in this article. Nodes represent arguments or theses. Arrows marked with “+” indicate that an argument/thesis supports another argument/thesis; arrows marked with “–” visualise the attack relationship. An argument A supports (attacks) another argument B if and only if A’s conclusion is equivalent (contrary) to one of B’s premisses.

Footnote 1: By using computer models to simulate the climate system we clearly “extend ourselves” (Humphreys, 2004): Nobel prize winner Svante Arrhenius was the first who, towards the end of the 19th century, attempted to calculate the effects of a doubling of the atmospheric CO2-concentration (see Crawford, 1997). In spite of several months of “laborious pencil work” and results that came close to the current estimates, Weart (2003) underlines that “neither Arrhenius nor anyone for the next half-century had the tools to show what an increase of CO2 would really do to climate”.

2 Model-underdetermination in climatology

The thesis of underdetermination is usually stated with respect to theories or even paradigms, i.e. it asserts that theory choice is underdetermined by
empirical evidence, or, more generally, scientific methods. In contrast, the
central thesis of my argumentation raises this issue with regard to mod-
els, more specifically climate models. Accordingly, the thesis of model-
underdetermination reads (T1 Underdetermination)2 :

Scientific methods determine several rival climate models which should be adopted according to standards of scientific rationality.

Unless stated otherwise, “climate model” refers to so-called general circulation models (GCMs), which are the most complex climate models (see figure 2) and which the IPCC primarily uses in order to generate its climate projections (compare figure 3).
Footnote 2: Brackets contain the arguments’ and theses’ titles as in figure 1.

Figure 2: Overall structure of a general circulation model (GCM). In order to model the climate system, atmosphere and ocean are divided into boxes by a three-dimensional grid. For each box, certain variables such as temperature, wind-/current-vectors, humidity/salinity etc. are specified. The respective values are then calculated for each time-step. Moreover, GCMs can include further components representing sea ice, biosphere, ocean-chemistry, soil, continental ice sheets, etc. Source: McGuffie and Henderson-Sellers (2001).

Part 1 in the dialectical structure depicts the fairly familiar argumentation for model-underdetermination, taking off with Quine’s famous argument.
But the total field [of logically connected statements] is so under-
determined by its boundary conditions, experience, that there is
much latitude of choice as to what statements to reëvaluate in the
light of any single contrary experience. (Quine, 1953, pp. 42f.)

In other words, infinitely many models are consistent with finite evidence
(A2 Quine). The prevailing reply to this argument in favour of model-under-
determination insists that standards of scientific success are not restricted to
the criterion of logical consistency with empirical data (A3 More than logic
and T4 Standards of success).3 That idea, namely that models are evaluated
in the light of many different standards of success is, in turn, seized by a
Kuhnian reasoning (compare Kuhn, 1977) which argues that these standards
can be prioritised in an arbitrary way (A5 Standards prioritised) and that we,
consequently, end up with underdetermination, again (A6 Kuhn):

(1) For every two rival climate models M and M*, M* is better than
M with regard to at least one standard of success (i.e. no model
dominates a rival model).
(2) If no model dominates a rival model with respect to standards of
success, then two incompatible models M and M* generally fulfil
the standards of success to the same degree for some prioritisation
of these standards.
(3) Thus: Two incompatible climate models M and M* generally fulfil
the standards of success to the same degree for some prioritisation
of these standards (from 1,2).
(4) Every prioritisation of climatological standards of success is accept-
able.
(5) According to scientific methods, it is rational to adopt that model
which generally fulfils the standards of success for an acceptable
prioritisation to the highest degree.
(6) According to scientific methods, if two models M and M* fulfil the standards of success for an acceptable prioritisation to the same degree and it is rational to adopt M, then it is rational to adopt M*.
(7) Thus: Scientific methods determine several rival climate models which should be adopted according to standards of scientific rationality (from 3,4,5,6).

Footnote 3: These standards comprise, for example, explanatory power (see Laudan, 1991).

Premiss (1), whose supporting argument will be considered below, states that there are no two models such that one surpasses the other with regard to all standards of scientific success. If that is the case, then, according to (2), the standards can be prioritised such that two arbitrary models satisfy the ranked standards to the same degree. Premisses (4)-(6) add the observation that the prioritisation of standards is arbitrary as well as two further methodological principles. What actually follows is even more than the stated conclusion, the thesis of model-underdetermination (which suffices for the overall reasoning), namely that all climate models are epistemically on an equal footing.
Premiss (1) is apparently an important ingredient for the Kuhnian rea-
soning in favour of underdetermination. Because of the diversity of stan-
dards of success in climatology and the trade-offs between them, (1) can be
justified in a particularly compelling way (A7 Standards conflict):

(1) The standards of success which climate models are supposed to meet comprise:

(a) Empirical adequacy, specifically with respect to: past–present climate, atmosphere–land–ocean, temperature–precipitation–seasonal cycle–large-scale phenomena (e.g. monsoon), regional climates;
(b) Simplicity (for the sake of computability);
(c) Comprehensiveness, i.e. inclusion of cryosphere, land biosphere, ocean chemistry and biology, agriculture, ...;
(d) Precision, i.e. resolution;
(e) Consistency with more fundamental physical theories, notably thermo- and fluid-dynamics;
(f) Reducibility to more fundamental physical theories, notably thermo- and fluid-dynamics.

(2) Standards (a)-(f) are conflicting.


(3) If standards of success are conflicting (i.e. there are two standards
such that the more you fulfil one, the less you satisfy the other),
then for every item x and every item y (which are evaluated), x is
surpassed by y with respect to at least one standard of success.
(4) Thus: For every two rival climate models M and M*, M* is better
than M with regard to at least one standard of success (from 1,2,3).

It is a priori true, as (2) claims, that there are trade-offs between the evaluative standards given in premiss (1). Simplicity, for example, conflicts with comprehensiveness and precision. As an empirical fact, there are further conflicts. For instance, empirical adequacy seems to run counter to consistency with more fundamental physical theories—an observation which does not merely apply to climate modelling, as Winsberg (2003, p. 112) points out.4 In order to yield at least roughly adequate simulations of our climate system, 17 out of 31 GCMs that were used in the third IPCC report included adjustments of heat, fresh water or momentum fluxes, to the effect that these models do not even satisfy fundamental conservation principles of physics (see figure 3).
The strength of the argument in favour of trade-offs between scientific standards in climatology is clearly one explanation of the plurality of climate models in use. A further explanation consists in the failure and inapplicability of alternative methodological approaches in climatology, to which we will turn next.

Footnote 4: Winsberg (2006) argues that the successful usage of so-called contrary-to-fact principles in complex modelling and simulation represents an example of reliability without truth. Truth, on this account, is no prerequisite for reliability. Truth and reliability, in other words, can be conflicting epistemic aims.

Figure 3: General circulation models used in the third IPCC report. The
last column of the truncated table indicates whether a model incorporates
flux adjustments. Source: IPCC (2001, p. 478).

3 The failure of traditional attempts to avoid underdetermination

Different methodological approaches which avoid the underdetermination problem have been developed in the philosophy of science during the last
century. In this section, corresponding to part 2 of the dialectical struc-
ture (figure 1), I shall argue that these approaches, though possibly yield-
ing insights when applied to other disciplines, are inapplicable to climate
science—a failure which explains why underdetermination reigns in climatol-
ogy. The methodological approaches discussed below comprise Bayesianism,
Cartwright’s nomological machines approach, falsificationism, and a heuris-
tic-deductive account of theory choice. By avoiding underdetermination,
these methodologies do imply that certain premisses in the argumentation
in favour of underdetermination are false, i.e. they attack arguments in
part 1 (A8-A13). These dialectical relations are visualised in the dialectical
structure.
Bayesianism has many faces. One way to sort these out is to distinguish
between Bayesianism as a real-time methodology and Bayesianism understood as a reconstructive method that allows one to rationalise scientists’
choices ex post. Bayesianism as a real-time methodology, to which the dis-
cussion will be restricted, falls once again into two subcategories which can
be termed, in analogy to the distinction of different types of utilitarianism,
‘rule Bayesianism’ and ‘direct Bayesianism’. Whereas rule Bayesianism at-
tempts to justify (possibly non-probabilistic) methodological rules with ref-
erence to Bayesian epistemology,5 direct Bayesianism applies the Bayesian
framework to individual choices of scientists, thus urging scientists to ex-
plicitly assign subjective probabilities to alternative theories, hypotheses or
models, and to update these probabilities in the light of new evidence according to the rule of Bayesian learning. The following brief discussion is
restricted to real-time direct Bayesianism—the branch which has been im-
plemented to some extent in climatology.6 Apparently, its application would
avoid underdetermination. A real-time direct Bayesian could attack the ar-
gumentation for underdetermination at very different stages. Accordingly,
she might insist that premiss (1) of the argument A7 Standards conflict is
false because there is only one fundamental standard of success, namely the
maximisation of rationally updated degrees of belief. In the same line of rea-
soning, premiss (1) of the argument A6 Kuhn might be rejected. Moreover,
a real-time direct Bayesian would argue that the central methodological premiss (5) of that very argument, which is also the assumption denied by the other methodological approaches discussed below, is false, too: Scientists do
not pick a model in the light of some standards of success; all they do is update their subjective probabilities. I do not touch the issue of whether direct Bayesianism as a real-time methodology is fruitfully applicable in sciences other than climatology, yet I do maintain that it is inapplicable in clima-
tology for different, discipline-specific reasons. First of all, it is practically
impossible to calculate the likelihoods of GCMs being true given climate
data: We simply lack the computational resources in order to apply the rule
of Bayesian learning to the set of alternative hypotheses that comprise all
climate models plus their respective different versions. Secondly, attempts
by climatologists to apply Bayesianism to a restricted set of hypotheses show
that the posterior probabilities still depend on the prior probability chosen
which is unacceptable insofar as these probabilities shall form the basis for
climate policy decisions.7
Footnote 5: E.g. Bayesian Confirmation Theory, see Howson and Urbach (1993); for a critical assessment compare Mayo (1996).

Footnote 6: Thus, Morgan and Keith (1995) and Zickfeld et al. (2007) represent examples of expert elicitations, whereas Webster et al. (2003) update priors according to Bayes’ rule; Dessai and Hulme (2003) review the use of probabilities in climate science.

Footnote 7: Betz (2007) has argued in a more detailed way against assigning subjective probabilities in climatology. Albert (2003) and Gillies (2000) give general though equally relevant critiques of Bayesianism as real-time methodology.

According to the methodology of Cartwright (1999), models are mediators between theories on the one hand and their domains of application
on the other hand.8 Models serve as blueprints for the construction of so-
called nomological machines. These, in turn, give rise to the empirical laws
described by the respective theories. From this perspective, models, not
reality, come first. It is not true that scientists have to choose a model that
is supposed to fit reality—quite the opposite: scientists intervene and ma-
nipulate reality by preparing and carrying out experiments in order to fit
it, reality, to the model. In spite of yielding an interesting perspective on
experimental sciences, this is clearly not how climate science works.

Footnote 8: See also Morgan and Morrison (1999).
Falsificationism, as a methodology for climatology, suffers the specific
problem that every climate model is, strictly speaking, falsified with regard
to some empirical aspect of our climate system, such as regional precipita-
tion patterns, oceanic temperature profile, atmospheric circulation, seasonal
cycle, etc. Excluding all models that make wrong predictions or have false
empirical implications would simply leave us with no climate model at all.9
If, moreover, ‘unrealistic assumptions’ count as falsifications, too, one can
argue with Winsberg (2006) that it would be counter-productive not to make
use of models with contrary-to-fact assumptions since these might neverthe-
less be reliable and successful.

Footnote 9: See also IPCC (2001, pp. 474f.).
Finally, underdetermination would vanish on a deductivist account of
theory choice. Tetens (2007), for instance, suggests that Einstein, developing
his special and general theory of relativity, avoided the underdetermination
problem by logically deducing his theory from well-established physical the-
ories, new evidence and additional, conservative heuristic principles (see also
Tetens, 2006, pp. 441-442). Yet can this analysis be transposed to climate
science? A major problem seems to be that there is no set of heuristic
principles that would determine the hundreds of choices which have to be
made when building a climate model, for instance whether to include a land
ice model, if yes: which one, which resolution to use, how to parametrise
processes such as ocean mixing or cloud formation, whether to represent
ocean chemistry, if yes: how, etc. etc.10 The general, underlying point
has been stressed by Morgan and Morrison (1999), Winsberg (2003, 2006),
and Lenhard (2007). These philosophers affirm that models and model-
simulations are “semiautonomous” (Winsberg, 2003): there is no algorithm
for reading models off from theory.11

Footnote 10: For a detailed account of climate model construction compare Shackley (2001).

Footnote 11: Winsberg’s own pragmatic methodology of model simulations, which partly reduces the credibility of simulation results to the credibility of the model-building techniques—namely insofar as these have proven to yield reliable and successful models in the past—does not avoid underdetermination in climate science, either. The techniques currently applied by climatologists underdetermine choices during model construction, as the variety of climate models shows (see figure 3).
In sum, alternative methodological approaches fail to solve the underde-
termination problem in climate science. As a matter of fact, climatologists,
instead of trying to avoid underdetermination, seem to have come to terms
with the plurality of climate models. The next section explores its method-
ological implications.

4 Methodological consequences of model-underdetermination

Model-underdetermination and the plurality of models it induces change the way we have to interpret the results of climate models. If rival climate
models are epistemically on an equal footing, their empirical implications
cannot be considered true anymore, but must be understood as mere possi-
bility statements. The argument starts with the underdetermination thesis
(A14 Possible, not true):

(1) Model-underdetermination: Scientific methods determine several rival climate models which should be adopted according to standards of scientific rationality.
(2) Any two rival climate models have incompatible empirical implica-
tions about past, present, and future climate.
(3) Thus: Empirical implications of our best climate models (i.e. those
that should be adopted according to standards of scientific ratio-
nality) are inconsistent (from 1,2).
(4) Inconsistent empirical statements cannot be considered true.
(5) Thus: Empirical implications of our best climate models cannot
be considered as true statements about past, present and future
climate (from 3,4).
(6) Empirical implications of our best climate models are epistemically
on an equal footing.
(7) If rival scientific hypotheses which are epistemically on an equal
footing cannot be considered as true statements, they are mere
possibility-statements.
(8) Thus: Empirical implications of our best climate models are modal
sentences stating what is possibly true about our climate system
(from 5,6,7).

Note that a Bayesian might challenge premiss (7), yet the inadequacy of
that approach as applied in climate science has been exposed above.
So climate simulation results are just modal sentences. What this indicates—though not strictly implies—is an important fact about climate policy decisions, namely that they are decisions under uncertainty.12 In other words: climate policy has to be based on knowledge about the possible consequences of our actions without us being able to assign probabilities to the alternative outcomes. Epistemically, such decisions require that the full range of possible future scenarios is specified for each alternative action. Hence the question arises how to set up the range of future scenarios. I see two alternative general methodological principles which can guide the scenario construction: modal inductivism versus modal falsificationism.
Footnote 12: I make use of the terminological distinction between “risk” and “uncertainty” introduced by Knight (1921).
Modal inductivism states (T15 Modal inductivism):

It is scientifically shown that a certain statement about the future is possibly true if and only if it is positively shown that this statement is compatible with our relevant background knowledge.

In contrast, modal falsificationism claims (T16 Modal falsificationism):

It is scientifically shown that a certain statement about the future is possibly true as long as it is not positively shown that this statement is incompatible with our relevant background knowledge, i.e. as long as the possibility statement is not falsified.

Let me briefly pinpoint the analogy to classical inductivism and falsificationism, which explains the names chosen. Whereas classical inductivism and falsificationism spell out the relationship between theory and empirical data, their modal counterparts introduced above describe the relationship between future scenarios and background knowledge. So, like classical inductivism, modal inductivism assumes the existence of an epistemic foundation: our background knowledge. Starting from that basis, future scenarios are to be positively inferred. Modal falsificationism, however, also grants the existence of that basis, yet stipulates that arbitrary future scenarios be invented in a first step and then, in a second step, systematically tested against the basis. Only those creatively constructed future scenarios that have not been falsified shall be accepted (as possible).
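
The contrast between the two principles can also be stated semi-formally. The notation below is introduced here for illustration and is not the author’s: let S be a statement about the future, K the relevant background knowledge, Con(S, K) the claim that S is compatible with K, and Dem(...) the claim that something has been scientifically shown; the “as long as” of modal falsificationism is read as a biconditional, in line with premiss (3) of argument A23 below.

    % Semi-formal gloss of the two principles (notation introduced for illustration only)
    \begin{align*}
      \text{(Modal inductivism)}      &\quad \mathrm{Dem}(\Diamond S) \;\leftrightarrow\; \mathrm{Dem}\bigl(\mathrm{Con}(S,K)\bigr) \\
      \text{(Modal falsificationism)} &\quad \mathrm{Dem}(\Diamond S) \;\leftrightarrow\; \neg\,\mathrm{Dem}\bigl(\neg\mathrm{Con}(S,K)\bigr)
    \end{align*}

On the first reading, possibility claims carry a positive burden of proof; on the second, they count as established until proven incompatible with the background knowledge.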
Modal inductivism is the methodological principle that underlies the
IPCC practice in the third assessment report. As figure 4 illustrates, the
IPCC assumes that for each emission scenario, i.e. set of boundary con-
ditions, the range of future scenarios is spanned by the ensemble of model
simulations. In other words, a future scenario is considered possible if and
only if it is positively inferred from the background knowledge qua model
simulation. And that is modal inductivism.

Figure 4: Future global warming scenarios of the IPCC. The possible range of global mean temperature change in the 21st century is the envelope of predictions by several climate models given different emission scenarios. Source: IPCC (2001, p. 555).

The following argument shows in more detail how modal inductivism implies the current IPCC methodol-
ogy (A17 Model-ensemble methodology):

(1) Modal inductivism: It is scientifically shown that a certain statement about the future is possibly true if and only if it is positively shown that this statement is compatible with our relevant background knowledge.
(2) The relevant background knowledge regarding climate scenarios is
physics.
(3) Thus: It is scientifically shown that a climate scenario is possible
if and only if it is positively shown that this scenario is compatible
with our physical knowledge (from 1,2).
(4) To show that a scenario results from a simulation of some best
climate model is the most appropriate way to positively show that
this scenario is compatible with our physical knowledge.
(5) If (i) being F is equivalent with being G and (ii) the best way to
show that something is G is by doing H, then the best way to show
that something is F is by doing H.
(6) Thus: To show that a scenario results from a simulation of some
best climate model is the most appropriate way to scientifically
show that a climate scenario is possible (from 3-5).
(7) If our best models about some domain yield but possibility state-
ments and the most appropriate way to show that a scenario con-
cerning that domain is possible is by means of demonstrating that
it results from a simulation of some best model, then the future of the respective domain should be investigated scientifically by way of con-
structing the scenario range by model-simulation (model-ensemble
methodology).
(8) Empirical implications of our best climate models are modal sen-
tences stating what is possibly true about our climate system.
(9) Thus: Our climate’s future should be investigated scientifically
through the construction of the climate scenario range by model-
simulation (from 6-8).

Whilst (7) is merely a special principle of practical reason, the crucial
premiss in this argument, besides (1), is (4), against which one might ob-
ject that model simulations, given the incompatibility of some GCMs with
fundamental physical principles (see above), do not even show that their
results are consistent with our background knowledge. Yet this is merely an
additional problem that adds to the more general objections I will now raise
against modal inductivism.
Modal inductivism requires us to be certain that some consequences are
possible before we take them into account in our policy deliberations. It is
this kind of second-order certainty that contradicts the precautionary princi-
ple which is a well-established principle of international environmental law.
In particular, it is endorsed by the United Nations Framework Convention
on Climate Change (UNFCCC), to whose parties the IPCC reports. Article
3, paragraph 1 of the UNFCCC reads

The Parties should take precautionary measures to anticipate, prevent or minimise the causes of climate change and mitigate its
adverse effects. Where there are threats of serious or irreversible
damage, lack of full scientific certainty should not be used as a
reason for postponing such measures, taking into account that
policies and measures to deal with climate change should be cost-
effective so as to ensure global benefits at the lowest possible cost.
[. . . ]

Here is a detailed reconstruction of the reductio ad absurdum of modal inductivism, applying that principle to a fictitious case of a newly invented material (A18 Precautionary approach):

(1) ⟨Modal inductivism: It is scientifically shown that a certain statement about the future is possibly true if and only if it is positively shown that this statement is compatible with our relevant background knowledge.⟩ (Premiss assumed for the reductio.)

(2) Thus: It is scientifically shown that a newly invented material
is possibly seriously harmful if and only if it is positively shown
that this statement is compatible with our relevant background
knowledge (from 1).
(3) Political action (like limiting the use of a material) isn’t required
unless it is scientifically shown that a newly invented material might
be seriously harmful.
(4) Thus: Political action isn’t required unless it is positively shown
that the material being seriously harmful is consistent with our
background knowledge (from 2,3).
(5) Every type of positive scientific proof requires full scientific cer-
tainty (second-order certainty).
(6) Thus: Political action isn’t required in the newly-invented-material-
case unless full scientific certainty is available (from 4,5).
(7) Where there are threats of serious or irreversible damage, lack of
full scientific certainty shall not be used as a reason for postpon-
ing cost-effective measures to prevent environmental degradation.
(United Nations, 1992)
(8) Thus: Not: It is scientifically shown that a certain statement about
the future is possibly true if and only if it is positively shown that
this statement is compatible with our relevant background knowl-
edge (negate (1) given the contradiction derived between (6) and
(7)).

Thus, while failing to comply with the precautionary principle, the cur-
rent IPCC methodology violates international environmental law.
To stress this important point even further, I would like to give an-
other example directly related to climate change. The third IPCC report
contained a range of future sea-level rise scenarios very similar to the tem-
perature projections presented in figure 4. These scenarios represent the
envelope spanned by model simulations that did not include the possibility, explicitly discussed in the IPCC text, of ice-dynamical changes in the West Antarctic ice sheet, i.e. for instance the possibility that the ice flow might accelerate when off-shore, floating ice-shelves disappear (IPCC, 2001, pp. 671, 677-9). Why were these possible consequences not included in the scenario range? Because there was no model that could calculate these effects and thus demonstrate with certainty that ice-dynamical changes are possible. That is modal inductivism: requiring certainty with regard to the possibility of future scenarios, systematically underestimating the uncertainties, and thereby violating the precautionary principle.
The argument presented against modal inductivism obviously depends
on the precautionary principle which is a normative premiss. Despite being
a principle of international environmental law, it is, however, not uncon-
troversial.13 Does this mean that as soon as one rejects the precautionary principle, the critique of modal inductivism falls to pieces? Not necessarily so. Let us step back and consider only the inferential relations explicated in the argument A18 Precautionary approach. What this argument shows is that the methodology of modal inductivism is inconsistent with a specific normative stance. Modal inductivism contradicts a principle democratic decision-makers might want to comply with when taking climate policy decisions. And this sort of value-ladenness, which prevents14 democratically legitimised decision-makers from adopting the normative point of view they have been elected to adopt, renders modal inductivism unacceptable.15 To complete this meta-argument, it is important to acknowledge that modal falsificationism, on the other hand, neither implies nor excludes any normative principles for decision-making under uncertainty. Specifically, it does not
prescribe adopting the precautionary principle. If policy-makers are pro-
vided with a scenario range which is constructed according to modal falsifi-
cationism, they can nevertheless consistently make use of the Shackle-rule,16
for instance, or some other principle for decision-making under uncertainty
which contradicts the precautionary approach.

Footnote 13: While it figures as a major premiss in Rawls’s thought-experiment that justifies the Difference Principle (Rawls, 1971), it has been criticised by Harsanyi (1975). For attempts to reformulate and implement the precautionary principle see for example Gardiner (2006) and European Environmental Agency (2001).

Footnote 14: Or, more precisely: might prevent.

Footnote 15: At this point, the argumentation is touching another substantial debate in the philosophy of science, namely the question of value-free science. Irrespective of whether science is necessarily value laden in some sense (as, for different reasons, Putnam (2002) and Kitcher (2001) have argued recently), my argument rests on the modest idea that avoidable value-ladenness should be avoided in scientific policy advice (for democratic reasons).

Footnote 16: The Shackle rule ranks the alternative policy measures according to a weighted sum of the best and worst possible outcomes (Shackle, 1949).
Besides these serious objections against modal inductivism, there is also a positive argument in favour of its alternative, that is, modal falsificationism. When the epistemic basis for decisions under uncertainty is to be provided, a crucial question is how tolerant a given methodology is with respect to ignorance, or lack of knowledge (compare Betz, 2006, p. 197). More specifically, what are the chances that the worst cases—catastrophic consequences that might be triggered by our actions—are actually overlooked? In this respect, modal inductivism comes off much worse than modal falsificationism, the latter being the more cautious approach. For in modal falsificationism, worst cases, once articulated, will figure on the agenda unless discarded on the basis of strong scientific arguments. Not so in modal inductivism, where extreme scenarios have to be the result of model simulations before being taken into account. This argument in favour of modal falsificationism can be reconstructed in more detail (A19 More cautious):

(1) According to modal falsificationism, a potential worst case will figure on the agenda as soon as it has been articulated and unless
it is discarded on the basis of strong scientific argument.
(2) According to modal inductivism, a potential worst case will figure
on the agenda only if it has been positively shown to be possible,
for instance by model simulation.
(3) Potential worst cases that have been articulated and not been discarded on the basis of scientific argument are not necessarily positively shown to be possible.
(4) Potential worst cases that have been positively shown to be possi-
ble are necessarily articulated and have not been discarded on the
basis of scientific argument.
(5) If some criterion of method A is a necessary though not sufficient condition for the corresponding criterion of method B, then applying method B implies a systematic tendency to subsume fewer cases under the respective criterion as compared to applying method A.
(6) Thus: Applying modal inductivism implies a systematic tendency to put fewer worst cases on the political agenda than applying modal falsificationism (from 1-5).
(7) If method B comprises a systematic tendency to consider fewer potential worst cases relative to method A, then the risk of overlooking potentially catastrophic consequences of our actions is greater when applying B instead of A.
(8) Future-scenarios can either be set up by using the methodology of
modal inductivism or that of modal falsificationism.
(9) Of an exhaustive set of alternative methodologies, the one that
implies the most cautious approach, the lowest risk of overlooking
potential worst cases, should be used generally.
(10) Thus: Future-scenarios should generally be set up by using the
methodology of modal falsificationism (from 6-9).
(11) Definition of modal falsificationism: If future-scenarios should gen-
erally be set up by using the methodology of modal falsification-
ism, then it is scientifically shown that a certain statement about
the future is possibly true as long as it is not positively shown
that this statement is incompatible with our relevant background
knowledge, i.e. as long as the possibility statement is not falsified.
(12) Thus: It is scientifically shown that a certain statement about
the future is possibly true as long as it is not positively shown
that this statement is incompatible with our relevant background
knowledge, i.e. as long as the possibility statement is not falsified
(from 10,11).

Before exploring what the concrete IPCC methodology would have to
look like were it based on modal falsificationism, we shall briefly consider an
implication of modal falsificationism that links our discussion to a different
debate. In The Risk Society, sociologist Ulrich Beck warns us (A20 Beck
against scientific scrutiny):

By forcing up the scientific standards one minimises the group of accepted and policy-relevant risks, and as a consequence implic-
itly issues allowances for risk potentiation. To put it pointedly:
Insisting on the purity of scientific analysis leads to the pollution
and contamination of air, food, water and soil, plants, animals and
men. (Beck, 1986, p. 86)

This statement is true as long as modal inductivism reigns. Yet modal falsificationism turns it upside down (A21 MF risk sensitive). If we force up standards of scientific testing within the framework of modal falsificationism, there will be fewer future scenarios that are positively shown to be impossible, more scenarios will therefore be considered in policy deliberations, and more potential risks will figure on the political agenda.
Let us now turn to the specific methodological consequences of modal falsificationism. As that methodology urges us to test arbitrarily constructed scenarios against background knowledge, models and theories have a role to play only insofar as they are part of our background knowledge, i.e. insofar as they serve as background theory when carrying out (statistical) tests. In order to be suited for this task, model results must, however, not just represent possibility statements, for in that case all one would learn from a test is that a scenario might be incompatible with our background knowledge whereas, according to modal falsificationism, we need to know that it is incompatible. As climate model results are modal sentences (see argument A14 Possible, not true above), it follows that they have no epistemic role17 to play in climate scenario analysis (A22 Anti model):

(1) Modal falsificationism: It is scientifically shown that a certain statement about the future is possibly true as long as it is not pos-
itively shown that this statement is incompatible with our relevant
background knowledge, i.e. as long as the possibility statement is
not falsified.
(2) The relevant background knowledge regarding climate scenarios is
physics.
(3) Thus: It is scientifically shown that a climate scenario is possible as
long as it is not positively shown that this scenario is incompatible
with our physical knowledge (from 1,2).
(4) If it is scientifically shown that a climate scenario is possible as
long as it is not positively shown that this scenario is incompatible
with our physical knowledge, the only epistemic role of models is to
serve as background theory when testing scenarios against climate
data.
(5) Thus: The only epistemic role of models is to serve as background
theory when testing scenarios against climate data (from 3,4).
(6) A model can only serve as background theory when testing scenar-
ios against climate data if its empirical implications are not mere
possibility statements.
(7) Empirical implications of our best climate models are modal sen-
tences stating what is possibly true about our climate system.
(8) Thus: Climate models have no epistemic role to play in climate
scenario analysis (from 5,6,7).

Footnote 17: I.e., no role in the process of justification.

This is an almost revolutionary conclusion in the light of the predominance of GCMs as a tool to investigate our climate system. It is probably not an exaggeration to say that the vast majority of climate institutions are organised around climate models.18 I should therefore add that the previous argument does not imply that GCMs are entirely useless: First of all,
I will indicate below that they might have a heuristic role to play in sce-
nario analysis. Secondly, denying GCMs an epistemic role in the analysis
of future scenarios is not to say that we might not reap insights into our
climate system from these models that are not directly related to project-
ing climate change; climate science is of course more than climate scenario
construction.19

Footnote 18: See, for instance, Edwards (2001). Edwards, moreover, concludes that computer models “are, and will remain the historical, social, and epistemic core of the climate science/policy community” (p. 64). Yet, he reaches this conclusion by implicitly assuming that the epistemic role of climate science is to predict the consequences of alternative policies. If one conceives climate policy as decision making under uncertainty, however, identification of possible scenarios instead of (deterministic) prediction becomes climatology’s main goal. How this can be accomplished without the use of GCMs will be discussed below.

Footnote 19: Likewise, Norton and Suppe (2001) stress that without computer models “we would be unable to understand the climate system as a single, integrated whole [. . . ]” (p. 67). The arguments put forward in this paper do not contradict that thesis. I would merely add that understanding a complex system ought to be distinguished from constructing possible future scenarios. Oreskes et al. (1994), however, argue with underdetermination in favour of a more far-reaching thesis, namely that GCMs have no epistemic role to play in science at all. While accepting the model-underdetermination thesis, Norton and Suppe (2001) criticise their reasoning by stressing that non-uniqueness poses no problem as long as scientific results are restricted to common features of all models that have been set up. Yet, this last reasoning apparently rests on the idea that the set of all models covers the entire space of physical possibilities—a seemingly unwarranted assumption.
We have so far deduced only a negative implication of modal falsificationism, telling us what not to do—but what are its positive methodological consequences? In the light of the previous elucidations of modal falsificationism, these are rather obvious: One should, in a first step, come up with as many potential future scenarios as possible and then, in a second step, submit these future scenarios to tests in order to see which ones can be discarded as impossible. Consider the reconstructed argument before I discuss its critical premiss (A23 Speculating-testing methodology):

(1) Modal falsificationism: It is scientifically shown that a certain statement about the future is possibly true as long as it is not pos-
itively shown that this statement is incompatible with our relevant
background knowledge, i.e. as long as the possibility statement is
not falsified.
(2) The relevant background knowledge regarding climate scenarios is
physics.
(3) Thus: It is scientifically shown that a climate scenario is possi-
ble if and only if it is not positively shown that this scenario is
incompatible with our physical knowledge (from 1,2).
(4) To test a scenario (by statistical means) against climate data as-
suming a highly aggregated, stylised model about our climate sys-
tem is the most appropriate way to positively show that this sce-
nario is incompatible with our physical knowledge.
(5) If (i) being F is equivalent with being G and (ii) the best way to
show that something is not G is by doing H, then the best way to
test whether something is F is by doing H.
(6) Thus: To test a scenario (by statistical means) against climate
data assuming a highly aggregated, stylised model about our cli-
mate system is the most appropriate way to test whether a climate
scenario is possible (from 3-5).
(7) If our best models about some domain yield but possibility state-
ments and the most appropriate way to test whether a scenario
concerning that domain is possible is by means of testing it with
procedure P, then the future of the respective domain should be scien-
tifically investigated by way of unrestricted and most speculative
construction of the scenario range which is, in a second step, re-
duced by submitting scenarios systematically to P-tests.
(8) Empirical implications of our best climate models are modal sen-
tences stating what is possibly true about our climate system.
(9) Thus: Our climate’s future should be investigated scientifically
through the unrestricted and most speculative construction of the
scenario range which is, in a second step, reduced by submitting
scenarios systematically to (statistical) tests (from 6-8).
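
Read as a procedure, conclusion (9) describes a two-step generate-and-test loop. The sketch below only illustrates that structure; the consistency test is a deliberately crude placeholder (an assumed admissible interval for warming per doubling of CO2, with purely illustrative bounds), not a proposal for an actual statistical test.

    # Schematic generate-and-test loop for scenario analysis (placeholder logic only).
    import random

    def consistent_with_background(scenario):
        # Placeholder test: discard a scenario only if it falls outside an assumed
        # admissible interval of warming per CO2 doubling (bounds purely illustrative).
        return 0.5 <= scenario["warming_per_doubling"] <= 10.0

    # Step 1: unrestricted, speculative construction of candidate scenarios.
    candidates = [{"warming_per_doubling": random.uniform(-5.0, 20.0)} for _ in range(1000)]

    # Step 2: reduce the scenario range by discarding candidates that fail the test.
    possible = [s for s in candidates if consistent_with_background(s)]

    print(len(possible), "of", len(candidates), "candidate scenarios survive the test")

Everything hinges, of course, on how much the background knowledge encoded in the test can actually rule out, which is precisely the question taken up below.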

The central premiss of this argument is sentence (4). How can one claim
its truth given that the previous argument (A22) has just shown that cli-
mate models are not suited for the task of testing future scenarios against
background knowledge? The answer is that (4) does not refer to GCMs, but to a different species of climate models that are developed in parallel to GCMs.

Figure 5: Earth’s radiative energy budget. Arrows and numbers indicate global mean energy fluxes in W/m2. Source: IPCC (2001, p. 90).

These models are robust, highly stylised and conceptual. The energy budget diagram in figure 5 is, in a sense, such an aggregated, qualitative model of our climate system. If the visualised relations were transformed into equations, this would provide a robust quantitative energy balance model. That my methodological proposal is not a lost cause is at least suggested by some studies in climatology that test scenarios about future warming given a doubling of CO2-concentrations against palaeo-climate data. Lorius et al. (1990), in a pioneering work, used a conceptual model describing the principal factors influencing global mean temperature in order to test warming scenarios against data obtained from an Antarctic ice core.
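
As an indication of what ‘transforming the energy budget into equations’ could amount to, here is a minimal sketch of a zero-dimensional energy balance model of the textbook kind; it is not a reconstruction of the Lorius et al. model. Absorbed solar radiation is balanced against outgoing long-wave radiation, with an effective emissivity standing in for the greenhouse effect.

    # Zero-dimensional energy balance model (textbook sketch, not the Lorius et al. model):
    #   (1 - albedo) * S0 / 4 = emissivity * sigma * T^4
    SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1366.0          # solar constant, W m^-2
    ALBEDO = 0.3         # planetary albedo
    EMISSIVITY = 0.612   # effective emissivity, a crude stand-in for the greenhouse effect

    def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY, s0=S0):
        """Global mean surface temperature (in K) at radiative equilibrium."""
        absorbed = (1.0 - albedo) * s0 / 4.0
        return (absorbed / (emissivity * SIGMA)) ** 0.25

    print(round(equilibrium_temperature(), 1))                # about 288 K, near the observed mean
    print(round(equilibrium_temperature(emissivity=1.0), 1))  # about 255 K, the no-greenhouse case

Even such a crude balance constrains which combinations of forcing, albedo and temperature are jointly admissible; that is the sense in which highly aggregated models might serve as background theory for testing scenarios.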
However, questions remain: Are these aggregate models really robust enough to be considered part of our established background knowledge? Isn’t our firm background knowledge too thin to falsify even the wildest speculations, that is, don’t we risk ending up with an extremely wide, absurd range of climate scenarios? Don’t effective tests which would allow us to falsify a significant bundle of future scenarios rely on shaky assumptions and models, whose results are possibility statements, too? These are justified doubts, and I see just the following two replies: (1) It is not possible to predict whether this methodology will really work or not, and it is therefore too early to make such a judgement. Only once a significant amount of cognitive resources has been spent on this research programme can we evaluate whether it is fruitful or not, and how many future scenarios can actually be falsified. (2) If we learn that the range of scenarios we cannot discard is much wider than originally thought, one possible reaction consists in accepting that result—instead of trashing the methodology—admitting our uncertainty, and clearly communicating to the decision makers how serious, in terms of worst cases, our situation is.
With the final argument, we will once more return to the topic of ig-
norance, i.e. the danger that the scenario range is not complete, that we
overlook possibilities, that things might happen we have not even thought
about. In modal falsificationism, it is the first, speculative and creative step
of scenario construction that shall ensure ignorance reduction to the greatest
possible extent. It is here that I see a role for GCMs. For if we assemble a new GCM, using modules that haven’t been put together before, and press the start button to see (within months) what will happen, the computer might actually show us something nobody has ever thought about. The newly created scenario is then added to the list and has to be considered possible unless falsified by a scientific test. Metaphorically speaking, GCMs—creative modelling—might help us to foresee future ozone-holes. This yields
the following argument (A24 Creative modelling):

(1) Climate simulations help to identify new story-lines.
(2) Every means which helps to identify new story-lines is appropriate to reduce ignorance in scenario analysis.
(3) Thus: Climate simulations are appropriate to reduce ignorance in
scenario analysis (from 1,2).

(4) Every means that helps to reduce ignorance has a role to play in the
(creative part of an) analysis of a domain, if that domain should
be investigated, in a first step, by the unrestricted and most spec-
ulative construction of a scenario range.
(5) Our climate’s future should be investigated scientifically through
the unrestricted and most speculative construction of the scenario
range which is, in a second step, reduced by submitting scenarios
systematically to tests.
(6) Thus: Climate simulations have a role to play in the creative part
of the scientific investigation of our climate’s future (from 3,4,5).

5 Conclusion

We have reconstructed the argumentation in favour of model-underdetermination, a problem chiefly arising because of the trade-offs between the diverse
standards of success according to which climate models are evaluated. The
inapplicability of alternative approaches in the philosophy of science which
avoid underdetermination explains the great variety of models used in clima-
tology. As a first consequence, we derived that empirical results of climate
models have to be interpreted as modal sentences, indicating that climate
policy decisions are decisions under uncertainty. The range of future scenar-
ios, which forms the knowledge base upon which such decisions are taken,
can be constructed either according to modal inductivism or modal falsifica-
tionism. While modal inductivism underlies the IPCC reports, it is severely
flawed. Modal falsificationism, representing the more sound methodology,
would, however, require a complete overhaul of current IPCC practice. Nev-
ertheless, that should be the methodology implemented when preparing the
scientific advice for international climate policy.
This paper’s argumentation has some loose, or rather open, ends which call for further elaboration. For one, we touched only superficially on the issue of Bayesianism as an integrated methodology for providing scientific policy advice under risk and uncertainty. A further, controversial debate on the
benefits and limits of that methodology as applied in climatology seems to
me inevitable and urgent. Next, I only referred to one study which seems
to represent a paradigmatic example of the falsificationist methodology. In
order to strengthen the case for modal falsificationism, that example as well
as additional case studies would have to be drawn up, showing how that
methodology is operating in detail. Finally, relating to the new, creative
role of GCMs, it would be illuminating to see whether GCMs have already
contributed to reducing our ignorance.

References
Max Albert. Bayesian rationality and decision making: A critical review.
Analyse & Kritik, 25:101–117, 2003.

Ulrich Beck. Risikogesellschaft. Auf dem Weg in eine andere Moderne. Suhrkamp, 1986.

Gregor Betz. Prediction or Prophecy? The Boundaries of Economic Fore-


knowledge and Their Socio-Political Consequences. DUV, 2006.

Gregor Betz. Probabilities in climate policy advice: A critical comment.


Climatic Change, Manuscript submitted for publication, 2007.

Nancy Cartwright. The Dappled World: A Study of the Boundaries of Sci-


ence. Cambridge University Press, 1999.

Elisabeth Crawford. Arrhenius’ 1896 model of the greenhouse effect in con-


text. Ambio, 26(1):6–11, 1997.

Suraje Dessai and Mike Hulme. Does climate policy need probabilities?
Working Paper 34, Tyndall Centre for Climate Change Research, 2003.

Paul N. Edwards. Representing the global atmosphere: Computer models,


data, and knowledge about climate change. In Miller and Edwards (2001),
pages 31–66.

European Environmental Agency. Late lessons from early warnings: the


precautionary principle 1896-2000. Office for Official Publications of the
European Communities, 2001.

S.M. Gardiner. A Core Precautionary Principle. The Journal of Political
Philosophy, 14(1):33–60, 2006.

Donald Gillies. Philosophical Theories of Probability. Routledge, 2000.

John C. Harsanyi. Can the maximin principle serve as a basis for morality?
a critique of John Rawls’ theory. American Political Science Review, 69:
594–606, 1975.

Colin Howson and Peter Urbach. Scientific Reasoning: The Bayesian Ap-
proach. Open Court, Chicago, 2nd edition, 1993.

Paul Humphreys. Extending Ourselves: Computational Science, Empiri-


cism, and Scientific Method. Oxford University Press, New York, 2004.

IPCC. Climate Change 2001: The Scientific Basis; Contribution of Working


Group I to the Third Assessment Report of the Intergovernmental Panel
on Climate Change. Cambridge University Press, 2001.

Philip Kitcher. Science, Truth, and Democracy. Oxford University Press,


Oxford, 2001.

Frank Knight. Risk, uncertainty and profit. Houghton Mifflin, 1921.

Thomas S. Kuhn. Objectivity, value judgement, and theory choice. In


Thomas S. Kuhn, editor, The Essential Tension: Selected Studies in Sci-
entific Tradition and Change, pages 320–329. Chicago University Press,
1977.

Larry Laudan. Empirical equivalence and underdetermination. The Journal


of Philosophy, LXXXVIII(9):449–472, September 1991.

Johannes Lenhard. Computer simulation: The cooperation between exper-


imenting and modeling. Philosophy of Science, 74:176–194, 2007.

C. Lorius, J. Jouzel, D. Raynaud, J. Hansen, and H. Le Treut. The ice-core


record: climate sensitivity and future greenhouse warming. Nature, 347:
139–145, 1990.

Deborah Mayo. Error and the Growth of Experimental Knowledge. Chicago


University Press, Chicago, 1996.

K. McGuffie and A. Henderson-Sellers. Forty years of numerical climate


modelling. International Journal of Climatology, 21:1067–1109, 2001.

Clark A. Miller and Paul N. Edwards, editors. Changing the atmosphere :


expert knowledge and environmental governance, Cambridge, 2001. MIT
Press.

M. Granger Morgan and David W. Keith. Climate-change – subjective
judgments by climate experts. Environmental Science & Technology, 29:
A468–A476, 1995.

Mary Morgan and Margaret Morrison, editors. Models as Mediators, Cam-


bridge, 1999. Cambridge University Press.

Stephen D. Norton and Frederick Suppe. Why atmospheric modeling is good


science. In Miller and Edwards (2001), pages 67–106.

N. Oreskes, K. Shrader-Frechette, and K. Belitz. Verification, validation,


and confirmation of numerical models in earth sciences. Science, 263:
641–646, 1994.

Hilary Putnam. The Collapse of the Fact/Value Dichotomy. Harvard Uni-


versity Press, 2002.

Willard Van Orman Quine. Two dogmas of empiricism. In Willard Van Or-
man Quine, editor, From a Logical Point of View, chapter 2, pages 20–46.
Harvard University Press, 1953.

John Rawls. A Theory of Justice. Harvard University Press, 1971.

George L. S. Shackle. Expectations in Economics. Cambridge University


Press, 1949.

Simon Shackley. Epistemic lifestyles in climate change modeling. In Miller and Edwards (2001), pages 107–134.

Urs Siegenthaler, Thomas F. Stocker, Eric Monnin, Dieter Lüthi, Jakob


Schwander, Bernhard Stauffer, Dominique Raynaud, Jean-Marc Barnola,
Hubertus Fischer, Valérie Masson-Delmotte, and Jean Jouzel. Stable carbon cycle–climate relationship during the late Pleistocene. Science, 310
(5752):1313–1317, 2005.

Holm Tetens. Selbstreflexive Physik. Transzendentale Begründungen am


Beispiel des Strukturenrealismus. Deutsche Zeitschrift für Philosophie, 54
(3):431–448, 2006.

Holm Tetens. Einstein als Philosoph. In Philipp W. Balsinger and Rudolf


Kötter, editors, Die Kultur moderner Wissenschaft am Beispiel Albert
Einsteins. Spektrum Akademischer Verlag, 2007.

United Nations. Report of the United Nations Conference on Environment


and Development (A/CONF.151/26), 1992.

Spencer R. Weart. The Discovery of Global Warming. Harvard University


Press, Cambridge, MA, 2003.

Mort D. Webster, Chris E. Forest, John Reilly, Mustafa Babiker, David Kicklighter, Monika Mayer, Ronald Prinn, Marcus Sarofim, Andrei P. Sokolov, Peter Stone, and Chien Wang. Uncertainty analysis of climatic
change and policy responses. Climatic Change, 61(3):295–320, 2003.

Eric Winsberg. Simulated experiments: Methodology for a virtual world.


Philosophy of Science, 70:105–125, 2003.

Eric Winsberg. Models of success vs. the success of models: Reliability


without truth. Synthese, 152:1–19, 2006.

Kirsten Zickfeld, Anders Levermann, M. Granger Morgan, Till Kuhlbrodt,


Stefan Rahmstorf, and David W. Keith. Present state and future fate
of the atlantic meridional overturning circulation as viewed by experts.
Climatic Change, 82:235–265, 2007.

