Ethic Theory Moral Prac (2010) 13:3-18
DOI 10.1007/s10677-009-9169-3

In Defence of Bad Science and Irrational Policies: an Alternative Account of the Precautionary Principle
Stephen John

Accepted: 17 March 2009 / Published online: 28 April 2009
© Springer Science + Business Media B.V. 2009

Abstract In the first part of the paper, three objections to the precautionary principle are outlined: the principle requires some account of how to balance risks of significant harms; the principle focuses on action and ignores the costs of inaction; and the principle threatens epistemic anarchy. I argue that these objections may overlook two distinctive features of precautionary thought: a suspicion of the value of full scientific certainty; and a desire to distinguish environmental doings from allowings. In Section 2, I argue that any simple distinction between environmental doings and allowings is untenable. However, I argue that the appeal of such a distinction can be captured within a relational account of environmental equity. In Section 3 I show how the proposed account of environmental justice can generate a justification for distinctively precautionary policy-making.

Keywords Precautionary principle . Environmental ethics . Relational conceptions of justice . Risk . Equity

At national and at global levels, environmental law and policy is increasingly framed in terms of the precautionary principle (O'Riordan and Cameron 1994; Trouwborst 2002). However, many argue that the principle reflects an unwarranted mistrust of science, is too unspecific to guide policy, and that more specific versions are as likely to increase as to decrease environmental risks.1 In Section 1, I will show how these charges overlook two concerns which might motivate the precautionary principle: a worry about scientific purity; and a concern to distinguish environmental doings from allowings. In Section 2, I will argue that appeals to an environmental doing/allowing distinction are in fact flawed, but provide materials for a relational view of environmental justice. Finally, in Section 3 I will show how this view justifies a precautionary approach to policy-making.

1 Most notably by Cass Sunstein (2002, 2005). See also Manson 2002. For further discussion, see Sandin et al. 2002.

S. John (*)
Hughes Hall, University of Cambridge, Mortimer Road, Cambridge CB1 2EW, UK
e-mail: sdj22@cam.ac.uk


1 Charges Against the Precautionary Principle

Debate over the precautionary principle centres around two formulations; the UN Rio Declaration: "where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation" (United Nations General Assembly 2002); and the Wingspread Declaration: "when an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically" (Wingspread Statement 1998). Although these statements differ, they (and others) share a common core: we should seek to prevent some threats of damage, particularly threats of serious environmental damage, even when we lack scientific certainty about their existence or magnitude (Raffensberger and Tickner 1999).

The claim that environmental policy-makers should take precautions may seem unproblematic. However, the precautionary principle is controversial. The principle is usually understood as an alternative to Cost-Benefit Analysis (CBA).2 Proponents of CBA typically argue that policies should be assessed in terms of efficiency. To do this, we calculate how much each affected individual is expected to benefit or lose by adoption of various policies. The greater the difference between sum total expected benefits and sum total expected costs, the more efficient the policy. Of course, even if efficiency is a valid social goal, CBA is not unproblematic. First, as CBA's proponents often concede, the construction of a scale for comparing all outcomes of action is controversial (Lenman 2000; Anderson 1993). Second, many defenders of CBA claim that efficiency is not the only consideration relevant to policy, but should be just one input (Schmidtz 2001; Sunstein 2002, Chap.5; Hubin 1994, p.10).
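The efficiency calculation described above can be sketched numerically. The following is a minimal illustration, not part of the paper's argument; the policies, individuals, and payoff figures are invented.

```python
# Illustrative sketch of the CBA efficiency calculation described above.
# The policies and per-individual payoff figures are invented for illustration.

def net_benefit(expected_payoffs):
    """Sum total expected benefits minus sum total expected costs.

    expected_payoffs: per-individual expected gain (positive) or loss (negative).
    """
    return sum(expected_payoffs)

# Two hypothetical policies, each listing expected payoffs for four individuals.
policy_a = [10.0, 5.0, -2.0, -1.0]
policy_b = [30.0, 1.0, -15.0, -4.0]

# CBA ranks policies purely by the aggregate difference, so A and B tie here,
# even though B concentrates a large expected loss on one individual.
print(net_benefit(policy_a))  # 12.0
print(net_benefit(policy_b))  # 12.0
```

The tie between the two invented policies illustrates why the distributional worries raised later in the paper are invisible to a purely aggregative procedure.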
Despite these complexities, CBA is appealing, because it generates determinate policy proposals via a clear procedure. The precautionary principle, by contrast, is indeterminate: it is unclear which threats demand which precautions at which level of certainty short of full scientific certainty. Proponents of CBA claim they can resolve these ambiguities: precautions are appropriate when the expected benefits of precaution outweigh the expected costs (Sunstein 2002, p.104).

As an alternative to CBA, the precautionary principle faces three inter-related charges.3 First, opponents claim that, even if giving absolute priority in policy-making to avoiding risks of certain sorts of outcomes (such as serious environmental damage) is justifiable, the principle demands contradictory courses of action (Sunstein 2002, p.104). For example, imagine that planting GM crops risks environmental degradation, while not planting GM crops risks starvation in the developing world. Given that these are both serious harms, the precautionary principle would seem to tell us both to plant and not to plant GM crops. This problem is made worse, opponents claim, by the fact that the precautionary principle demands action even when we lack full scientific certainty of a threat's existence or magnitude. Imagine we decide that if GM crops pose a risk of bio-diversity

2 For a clear statement of philosophical issues in CBA see Copp 1985.
3 See Sandin 2007 for discussion of how these charges are related.


depletion, then they should be banned (regardless of food security issues). Using standard scientific methods, we have not established such a risk. The precautionary principle seems to imply that, if there remains any doubt as to their environmental impact, we ought not to plant GM crops. However, any agricultural policy might pose some non-established risk of environmental damage, and therefore might be suspect on precautionary grounds. The precautionary principle's weak epistemic standards are therefore alleged to threaten paralysis, since any course-of-action might be suspect (O'Neill 2002, Chap.1). Third, proponents of the precautionary principle are accused of overlooking this problem, because they focus on the possible costs of action, rather than of inaction (for example, the risks of GM crops, rather than the risks of continuing the agricultural status quo) (Wildavsky 1997). The proponent of the principle is thus faced with a dilemma: either the principle applies both to action and inaction (in which case, it leads to paralysis), or it applies only to action, in which case it might lead to greater environmental problems than it prevents.

The charges are extremely serious. One response is to accept their cogency and to reformulate the principle accordingly. For example, Stephen Gardiner has interpreted the precautionary principle as a Rawlsian maximin rule for decision-making under uncertainty (Gardiner 2006). For Gardiner, the principle is not an alternative to CBA, but is to be applied when CBA is inappropriate. An alternative response to the charges is to show that the charges are not as clear-cut as they first appear. One way to do this is to show that the principle might be motivated by concerns which its opponents overlook. In the rest of this section, I shall identify two such concerns: the first related to full scientific certainty, and the second related to the doing/allowing distinction.
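The maximin rule Gardiner reads into the principle can be sketched as follows. The options and outcome values below are invented for illustration and are not drawn from Gardiner's paper; maximin simply picks the option whose worst possible outcome is least bad, ignoring probabilities entirely.

```python
# Sketch of a maximin rule for decision-making under uncertainty.
# Option names and outcome values are invented for illustration.

options = {
    # Possible outcome values for each option; probabilities are unknown.
    "plant_gm": [100, -1000],  # large gains possible, but serious damage possible
    "ban_gm": [-50, -60],      # modest costs either way
}

def maximin_choice(opts):
    """Pick the option whose worst possible outcome is least bad."""
    return max(opts, key=lambda name: min(opts[name]))

print(maximin_choice(options))  # "ban_gm": its worst case (-60) beats -1000
```

Note the contrast with CBA: an expected-value calculation could favour "plant_gm" under suitable probabilities, whereas maximin never consults probabilities at all.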
When testing whether there is a relationship between two classes of events (including a probabilistic relationship), scientists typically adopt statistical procedures which minimise false positives. This ensures they assert that there is a (determinate or probabilistic) relationship between two classes of events only when they have extremely good reason to believe so. However, such procedures increase the chance of false negatives, that is, of failing to assert that there is a relationship when there is a relationship. Therefore, insisting that only claims established with full scientific certainty be used in policy-making might lead to adoption of unacceptably risky policies. Versions of this concern have been developed by several philosophers, and have been claimed to motivate some formulations of the precautionary principle.4 We might, then, deflect the criticism that the principle leads to an incapacitating epistemic free-for-all by arguing that the principle alerts us to the need to distinguish the epistemic standards proper to normal science (where full scientific certainty is the appropriate standard for claiming threats exist) from those proper to regulatory science (where weaker standards are appropriate).5

Understanding the "lack of full scientific certainty" clause by appeal to these considerations is particularly powerful as a response to proponents of CBA. The claim that we ought to vary testing methodologies with the costs of false positives and false negatives has a long history in rational choice theory (Hacking 1975, 1990). CBA is closely linked to consequentialist moral theory, and in turn to rational choice (O'Neill 1993, Chap.5). Therefore, the defender of the precautionary principle seems, if anything, truer to the foundations of CBA than does the proponent of CBA.
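The trade-off between false positives and false negatives described above can be simulated. The sketch below is purely illustrative (the effect size, sample size, and significance thresholds are invented): a real but weak harmful effect exists in every simulated trial, and a stricter significance threshold, by guarding against false positives, fails to detect that effect more often.

```python
import random
import statistics

# Simulation of the false-positive / false-negative trade-off described above.
# Effect size, sample size, and thresholds are invented for illustration.

random.seed(0)

def detects_effect(sample, alpha):
    """One-sided z-test: does the sample mean exceed 0 at significance alpha?"""
    n = len(sample)
    z = statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)
    p_value = 1 - statistics.NormalDist().cdf(z)
    return p_value < alpha

def trial():
    # Each trial samples 30 measurements of a real but weak effect (mean 0.3).
    return [random.gauss(0.3, 1.0) for _ in range(30)]

trials = [trial() for _ in range(2000)]

for alpha in (0.01, 0.05, 0.2):
    detected = sum(detects_effect(t, alpha) for t in trials) / len(trials)
    print(f"alpha={alpha}: real effect detected in {detected:.0%} of trials")

# The stricter the threshold (fewer false positives), the more often the real
# effect goes undetected: the false-negative rate rises as the evidential
# standard approaches 'full scientific certainty'.
```

The monotone pattern in the printed detection rates is the paper's point in miniature: which threshold is appropriate depends on the relative costs of the two kinds of error, not on statistics alone.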
4 See Hansson (1998, 2006, 2007); Shrader-Frechette 1995; and Cranor 1993 for discussion. Although the claim that scientific purity may be problematic is common, there is disagreement over how such worries relate to the precautionary principle.
5 See Jasanoff 1990 for further discussion of the development of regulatory science.


Consider now the criticism that the precautionary principle focuses on the risks of action, and ignores the (possible) costs of inaction. Opponents claim that this reflects a more general failure of human cognition, such as loss aversion, or an implausible view of nature as benign and fragile (Sunstein 2002, p.42; Sunstein 2003, p.1009). However, recent work by Marion Hourdequin disputes these charges (Hourdequin 2007).6 Hourdequin claims that we often believe that it is worse for humans to destroy environmental goods than for those goods to be destroyed by natural processes; for example, it is worse for a species to become extinct because of human intervention than because of natural pressures. She suggests that this doing/allowing distinction explains (and perhaps justifies) the apparent myopia of the precautionary principle: we are interested in minimising the risks we impose on the environment, even if minimising our impact does not minimise overall risk.7

The possibility that the precautionary principle is motivated by either or both of the concerns outlined above shows that some attacks on the principle are not as clear-cut as they first appear.8 However, I have not shown that these concerns should be reflected in environmental policy-making. Furthermore, combining these two concerns may be problematic, as the epistemic purity concern seems to accuse CBA of not being consequentialist enough, whereas the doing/allowing concern seems to accuse CBA of overlooking non-consequentialist considerations. Therefore, to assess the precautionary principle we need to know whether these concerns are valid and how they relate.

2 Institutional Attitudes and the Doing/Allowing Distinction

The suggestion that CBA is inappropriate within environmental policy-making because it ignores the doing/allowing distinction has received little attention. In this section, I shall suggest a good reason for this: establishing the cogency and significance of that distinction within environmental contexts is extremely difficult. So much the worse for the precautionary principle, it may seem. However, this conclusion would be too quick. Rather, appeal to the doing/allowing distinction can be understood as an attempt to capture a more plausible concern: that we should adopt a relational conception of environmental justice. In this section, I shall outline such a conception, before, in Section 3, showing how it relates to the precautionary principle, and, in particular, to the full scientific certainty concern.

The distinction between environmental doings and allowings faces two difficulties. First, why think the doing/allowing distinction is morally relevant in the context of human interactions with the natural environment? Hourdequin's key argument is that naturally incurred risks cannot be described as fair or unfair, but merely as instances of misfortune; humanly caused risks, by contrast, can be described as inequitable. Therefore, if we are concerned with equity, we should not treat action and inaction symmetrically. However,
6 For further discussion along similar lines, see Hughes 2006 and John 2007. Strangely, Sunstein suggests that the precautionary principle might incorporate an acts/omissions distinction, but simply assumes that this distinction is irrelevant to policy (Sunstein 2007).

7 Note that appeal to a doing/allowing distinction is not the same as status quo bias. For example, for Hourdequin, if (on-going) global warming is anthropogenic, our reasons to prevent warming are stronger than if it is not.
8 Such worries suggest difficulties for Hubin's claim that the use of CBA can be accepted independently of our over-arching ethical theory (Hubin 1994).


Hourdequin's argument is problematic. Given that we normally think of equity in terms of agents' treatment of other agents, it is unclear what is involved in treating the natural environment itself in an inequitable manner.9

Distinguishing environmental doings from allowings faces a second problem. Mankind's interactions with the natural environment are so complex that it will often be extremely difficult to establish direct causal links between human action and environmental degradation.10 Many apparent allowings are likely to involve human agency, and human doings often involve natural factors. These worries are intensified as technology grows more powerful, with greater possibilities of unanticipated side-effects. Hourdequin herself notes this problem, suggesting that the complex causal structure of environmental change "becomes increasingly difficult for our traditional moral concepts (of agency, responsibility and the distinction between doing and allowing) to handle" (Hourdequin 2007, p.358).11

There are, then, serious difficulties with applying and justifying a doing/allowing distinction in the environmental context. However, I will now argue that Hourdequin's claim in favour of this distinction, that there is a difference between misfortune and inequitable treatment, can ground a non-consequentialist approach to environmental policy. To do so, I will consider recent work by Thomas Pogge on health equity (Pogge 2004). Pogge claims that contemporary discussions of health equity typically identify just institutions as those which promote a particular distribution of health outcomes. He contrasts such recipient-oriented conceptions of justice with the Rawlsian semi-consequentialist conception. According to the semi-consequentialist, the purpose of a social order is not to promote a good overall distribution of goods and ills but "to do justice to, or to treat justly all those whose shared life is regulated by this order" (Pogge 2004, p.154).
Pogge claims that Rawls's general conception of justice is attractive but, as the case of ill-health illustrates, too crude. First, the interpenetration between the natural and the social makes problematic Rawls's neat division of benefits and burdens into natural (pre-institutional) and social (institutionally-created). Second, Rawlsian semi-consequentialism overlooks morally salient differences between ways in which social institutions can relate to the distribution of benefits and burdens. To motivate these claims, Pogge lists six different ways in which a particular nutritional deficit might relate to social institutions: the deficit might be officially mandated by those institutions; it might be legally authorised; the institutions might foreseeably and avoidably engender the deficit; it might arise from legally-prohibited but barely-deterred interpersonal behaviour; the institutions might avoidably leave unmitigated the effects of a natural defect; or the institutions might avoidably leave unmitigated the effects of a self-caused defect. It seems that even if the health outcome is the same in each of these cases, the nature of institutional involvement is relevant to claims about equity. For Pogge, our account of health equity should recognise this and "weigh the impact which institutions have on quality of life according to how they have this impact" (Pogge 2004, p.156). For Pogge, the distinctions between different ways in which institutions might be implicated in outcomes should not be interpreted solely in causal terms (e.g. on a scale from
9 Hourdequin suggests two other reasons to adopt a doing/allowing distinction, and thus to deny CBA: first, CBA overlooks concerns about unequal distributions of risks; second, the structure of moral agency requires us to distinguish doings from allowings. However, the first of these complaints could be incorporated within CBA. The second claim only shows that we need to distinguish doings from allowings in some, not in every, context.
10 For further discussion of the doing/allowing distinction in environmental ethics see Thompson 2006.
11 See Cranor 2007 p.38 for related concerns.


most to least causal involvement). Rather, what matters is "not merely the causal role but also what one might call the implicit attitude of the social institutions in question" (Pogge 2004, p.158). To understand Pogge's claims, imagine two sets of social institutions, the first of which legally discriminates against a racial group, whereas the second avoidably fails to help those who recklessly engage in unhealthy behaviour. Even if health outcomes in both societies are the same, and the relevant social institutions are equally causally implicated in both outcomes, the first society seems worse than the second. In the first society, the laws express attitudes which are clearly inequitable, whereas in the second society they do not. Of course, the attitude expressed by the second set of institutions is not unproblematic. However, the first attitude seems more problematic than the second, and it seems that this concern should be reflected in our theory of health equity.

Although Pogge's theory requires refinement, its relevance to my concerns is clear. Hourdequin suggests that there is a morally salient difference between risk imposition and mere misfortune. Pogge's work builds on a similar intuition. However, Hourdequin understands the misfortune/inequitable treatment distinction in terms of a binary doing/allowing distinction, related, in turn, to a natural/social distinction. Given the inter-relationships between the social (what is done) and the natural (what is allowed), her conclusion is unsustainable. By contrast, Pogge's work suggests that even if the natural and the social interpenetrate, we can still distinguish between different attitudes (implicitly) expressed by social institutions implicated in the complex causal processes which lead to outcomes. In turn, these institutional attitudes should be a focus of normative political criticism and action.
To develop something like Pogge's theory in the environmental context, we need an account of equity and we need to identify both the agents of justice (those who express inequitable attitudes in their treatment of others) and the patients of justice (those who suffer inequitable treatment). In the rest of this section I will identify the agents and patients of environmental justice, and return to equity in Section 3.

Pogge identifies agents of justice with schemes of social co-operation (what Rawls called the basic structure). However, this is problematic both generally (it is unclear how shared systems of co-operation can express attitudes) and in the specific context of environmental policy, where the question is how to think about particular kinds of decisions, rather than social structures generally. I suggest that, in the environmental context, we should identify as primary agents of justice those governmental and transnational policy-making bodies charged with regulating various activities in the name of environmental protection. One reason to focus on such agencies is that, because they typically possess (quasi-)coercive powers, it is particularly important that they act equitably. A second reason relates to the aims of this paper: it is such agencies which are usually enjoined to adopt the precautionary principle. Of course, there is a puzzle in understanding the claim that corporate bodies, such as governmental agencies, express attitudes. However, we often speak of the attitudes of corporate agents, and such talk seems less problematic than talk of the attitudes expressed by systems of co-operation.12

What, though, count as patients of justice in the environmental setting? Only agents can be the subjects of equitable or inequitable treatment. Although I have assumed that, at least for some purposes, governmental agencies might be viewed as intentional agents, it seems implausible to view eco-systems as the kinds of agents who might be treated inequitably.

12 On corporate agency in general, see French 1984. For this concept in the context of CBA, see Copp (1985, esp. 138-145).


Therefore, to incorporate equity concerns in environmental ethics, we need an anthropocentric account of environmental value. This might seem anathema to the concerns of environmentalists. However, the resources available to a sophisticated anthropocentrism should not be under-estimated. Standard anthropocentrism tends to assume that environmental value can be understood as a conjunction of the effects of environmental change on human agents (such as ill-health) and the value which humans place on the environment (as reflected by their willingness-to-pay to prevent degradation).13 Such views over-simplify by treating the environment and human agents as related, but distinct. Rather, an anthropocentric account of environmental value should stress that, because human agents are human animals, their capacities for agency are shaped and constrained by natural environments. Human capacities for agency are damaged directly when humans fall ill or suffer other direct harms from environmental damage. However, environmental damage might also undercut agency in other ways: living in an unstable environment can undermine our capacity to plan for the future, or leave us vulnerable to exploitation; insuring against catastrophe might leave us with fewer resources with which to pursue our goals; agency relies on practical identity, which is shaped by our sense of how the natural environment has been and ought to be.14 To develop these claims would take much space. However, my point is simply that even if an equity-based approach cannot capture all of the traditional concerns of environmentalists, it may capture more than is normally recognised.

Having identified regulatory agencies as agents of environmental justice, and human individuals as the patients harmed by environmental degradation, I shall now sketch how a relational conception of environmental justice relates to the doing/allowing distinction.
Typically, regulatory agencies must decide whether some course-of-action to be pursued by other agents, such as business corporations or private individuals, should be allowed to go ahead, should be stopped, or should go ahead in some limited way or with safeguards. The relevant courses-of-action might be novel (for example, the cultivation of GM crops) or on-going (for example, those allowed by current fishing regimes). When agencies face such a decision they must take into account the (potential) costs of permitting those courses-of-action. However, it also seems that, as governmental agencies committed to serving the entire population, they should consider the opportunity costs of not permitting, or limiting, those courses-of-action. As opposed to Hourdequin's approach, a relational approach to environmental justice does not, then, in-and-of itself, imply that environmental agencies should pay more attention to what they do (in an extended sense, to mean what foreseeably occurs as a result of allowing courses-of-action to go ahead) than to what they allow (to mean what foreseeably happens when courses-of-action do not go ahead).

However, when agencies decide whether or how to regulate, there are multiple ways in which they might take the potential consequences of courses-of-action into account. These different ways of taking consequences into account can be seen as expressing attitudes towards potential patients of justice which, in turn, can be said to be more or less equitable. I suggest that it is with regard to these attitudes that we should frame an account of environmental justice. If so, there is no reason to think, as opponents of the precautionary principle often seem to, that a decision-making procedure must be problematic if it leads to sub-optimal consequences. Even if there is no simple environmental doing/allowing distinction, Hourdequin is right that equity considerations undermine apparently obvious charges against the precautionary principle.
13 For a useful outline of such approaches, see Dasgupta 2001.
14 For versions of each of these claims, see O'Neill 1996; Dercon 2004; Korsgaard 1996.



3 Reconstructing the Precautionary Principle

So far, I have provided a framework for understanding environmental policy-making, rather than a justification of the precautionary principle. It is possible that consideration of the demands of equity might not actually justify the precautionary principle, but instead show that techniques such as CBA reflect equitable attitudes. In this section, then, I shall argue that if we assume equity to demand that the principles underlying decision-making are justifiable to each affected person, then the relational conception of environmental justice justifies distinctively precautionary policies.15

As I understand it, the precautionary principle claims that when some proposed or on-going course-of-action poses a threat of serious or irreversible environmental damage, regulatory agencies should not allow that course-of-action to go ahead without safeguards (at the extreme, they should ban it), even if they lack scientific certainty about the existence or magnitude of the threat. The precautionary principle seems parasitic on other decision-making principles, because it does not give guidance when courses-of-action do not pose a threat of serious or irreversible environmental degradation. The complaint of its critics, however, is that by demanding action even in the absence of full scientific certainty, the principle threatens to swamp all other decision-procedures, and to lead to an endless cycle of precaution. Furthermore, they complain that, despite the appeal to cost-effectiveness in the Rio Declaration, proponents of precautionary policies focus attention on regulating or preventing (suspected to be) risky courses-of-action, with disregard for (opportunity) costs.

One element of debate over the precautionary principle concerns the concept of full scientific certainty.
However, to understand the relationship between ethical and epistemic concerns, it is useful to ask how we might justify precautionary policy-making in circumstances of epistemic transparency, where if there is a risk, then we believe that there is a risk, and if there is no risk, then we do not believe there is a risk. I shall return to decision-making in epistemically murky situations, where we have reason to believe that our beliefs about risks are not complete, in Section 3.2 below.16

3.1 Precaution in Epistemically Transparent Worlds

I shall distinguish three kinds of cases in which a regulatory agency might make decisions about some proposed or continuing course-of-action under epistemic transparency: first, cases where the course-of-action would definitely lead to, or is known to pose a risk of, mild or reversible environmental damage affecting some in the population but benefiting others; second, cases where the course-of-action would definitely lead to serious and irreversible environmental damage affecting some but benefiting others; third, cases where the course-of-action is known to pose a risk of serious or irreversible environmental damage affecting some but benefiting others. In these cases, I assume that a regulatory agency must

15 This account of the demands of equity derives, of course, from Scanlon's work, and is related to strands in contemporary Kantianism (see Scanlon 1998; O'Neill 1996). The precise relationship between my arguments and such theories is, however, beyond the scope of this paper.
16 The distinction between epistemic transparency and epistemic murkiness does not imply any view about the existence of objective risks. Rather, even if all risk claims should be understood in epistemic terms, most theories allow for a gap between what we do believe and what we ought to believe. My claims could be rephrased as distinguishing between circumstances where our beliefs about the risks of action are as they ought to be and cases where we must establish what the correct beliefs are. For a guide to these issues see Mellor 2005.



choose between allowing the course-of-action, banning it, or allowing a modified version of it to go ahead.

Agencies which allow courses-of-action which will cause (or which will risk causing) mild damage affecting some, but benefiting others, need not be seen as expressing inequitable attitudes. Rather, an agency which was concerned with considerations of equity might reasonably base its decisions as to whether or not to regulate by asking whether a system of regulation which, in general, allowed for (risks of) minor forms of environmental damage was one which, overall, was beneficial to each. A system based around a principle of banning all activities which cause or risk environmental damage would be one which, in the long term, each would have reason to reject. Therefore, equity considerations might allow for courses-of-action which cause (or risk) limited or reversible damage by appeal to something like Hansson's principle that "exposure of a person to risk is acceptable iff this exposure is part of an equitable social system of risk taking that works to her advantage" (Hansson 2003, p.305).17 The contours of such a system would, of course, require further consideration. However, I shall now move on to the more difficult cases.

In the second case, some course-of-action is known to cause serious and irreversible environmental damage affecting some in the population, but benefiting others. I shall first discuss such cases on the assumption that the relevant benefits for affected individuals are less weighty than the relevant harms for affected individuals (for example, the benefits of slightly reduced food costs versus the burdens of loss of livelihood). However, I do not assume that the sum-total harms are necessarily greater than the sum-total benefits, at least as judged by CBA. In such cases, I suggest that it is difficult to see how an agency might justify allowing the relevant course-of-action to those individuals who will suffer from environmental degradation.
To do so would, in effect, be to base decisions on a principle which allows individuals to suffer agency-undercutting harms for the sake of aggregate minor benefits to many others. According to most contemporary accounts of equity, this form of aggregation is inequitable (see, for example, Scanlon 1998, Chap. 5). Therefore, allowing courses-of-action which are known to cause serious harms to some to go ahead is, in some cases, to fail to express equitable attitudes. We might think such damage could be compensated. If so, then it might follow that such courses-of-action are permissible if appropriate compensation mechanisms are in place. Unfortunately, however, the complexities of the dependence of human agency on the environment may make the harms of living in severely-degraded environments uncompensatable. Furthermore, even if we did think that the imposition of harm via environmental degradation on some might be justifiable if compensated, CBA would be of little use in deciding which policies are acceptable. All that CBA tells us is that a policy is potentially Pareto optimal (i.e. the sum total benefits are such that each could be moved to a situation preferable to her starting-position), not that it is actually Pareto optimal (i.e. that each is actually made better-off by the policy). Extremely efficient policies may not involve actual compensation (Copp 1987). I suggest, then, that equity considerations make the use of CBA in cases where environmental damage is serious and irreversible extremely problematic. Conversely, they seem to support a principle of not allowing courses-of-action which would wreak such devastation, regardless of the expected overall balance of costs and benefits. This line-of-reasoning may, however, face difficulties as a defence of precautionary policy-making. In some cases refusing to impose environmental damage on some might be to forego saving (equal or greater numbers of) others from equally serious harm. For
17 See Lenman 2008 for a related suggestion.


example, refusing to allow the planting of a crop which will damage parts of the environment might be to forego helping many starving people. Surely, the response might run, at least when the relevant harms and benefits are of roughly equal moral weight, regulatory agencies should consider the claims of those who would be benefitted by allowing a course-of-action as well as those who would be harmed by allowing it (a claim I endorsed in Section 2). If so, perhaps equity demands that we ought to decide whether to allow courses-of-action by considering the effects of both allowing and not allowing in terms of some kind of limited aggregation (e.g. where we only aggregate roughly equal harms and benefits). In some cases, such principles might permit courses-of-action which are known to cause serious or irreversible environmental degradation. These are, however, the sorts of courses-of-action which the precautionary principle is normally understood to disallow. Therefore, even if equity considerations make CBA problematic, in the absence of commitment to an environmental doing/allowing distinction, it seems that a relational conception of environmental justice may not justify precautionary policies. In response, I suggest that a focus on the demands of equity might allow us to generate a distinctively precautionary approach without either appealing to a doing/allowing distinction or denying the importance of considering the claims of those who would be helped by foregone courses-of-action. Most regulatory contexts display an important asymmetry. Not allowing a course-of-action which would lead to agency-undercutting harm for some but which would benefit extremely badly-off others is not necessarily incompatible with adopting alternative courses-of-action which would benefit those we fail to help. Allowing such a course-of-action is, by contrast, necessarily incompatible with helping those who are harmed (assuming that such harm is uncompensatable). 
If so, then it seems that those who will be harmed by allowing a course-of-action to go ahead have a stronger complaint against the regulator's decision than do those who would be helped by allowing that course-of-action to go ahead (to the extent that the latter's complaints could, in principle, be neutralised by adopting further policies). From this, I suggest that it follows that equitable decision-making should not be based on a principle which mandates courses-of-action whenever they can be expected to have a positive aggregative effect (even when the relevant consequences are limited to roughly equivalent forms of burden and benefit). Rather, at least when those we fail to help may still be helped via other routes, equity considerations demand that policy-making should rest on a principle which disallows courses-of-action known to lead to agency-undercutting harm. I have argued that when some agency allows a course-of-action which is known to cause agency-undercutting environmental damage, the fact that doing so will benefit some is never sufficient to claim that the decision is equitable. Rather, in general, equity considerations will favour disallowing known-to-be-harmful courses-of-action, even when they would lead to better outcomes. What, though, of cases where some agency must decide whether or not to allow some course-of-action which is known to pose a risk (rather than a certainty) of serious or irreversible environmental damage affecting some, but definitely benefitting others? There are, of course, good reasons to permit proposed courses-of-action which produce important benefits. However, when an agency permits a course-of-action which poses a risk of serious harm to some on the grounds that this will benefit others, those placed at risk of harm have good prima facie reason to object to that course-of-action.
Furthermore, to appeal to expected aggregate benefits to justify permitting such courses-of-action would not seem to take these complaints seriously. However, unlike in the case where a policy will necessarily cause harm, when policies only risk harm, these complaints might be taken into account by adopting further measures which aim to eliminate, substantially reduce or mitigate the risks associated with the course-of-action. Modifying a course-of-action through


adopting precautionary measures can be seen as a way of attempting to ensure that decisions are justifiable both to those who would benefit from a course-of-action and to those placed at risk by a particular proposed course-of-action. In short, there are good equity reasons to think that, in general, a principle that courses-of-action which pose a risk of harm to some should not go ahead without precautions best expresses the concerns of equity.18 Furthermore, note that attempting to limit such precautions by claiming that they are inefficient would be, in effect, to attempt to justify the burdens (risks of harm) inflicted on some by appeal to benefits enjoyed by others. As I argued above, such a mode of justification seems at odds with equity concerns. Therefore, for agencies to express an attitude of equitable treatment they should take precautionary measures to mitigate or to reduce the risks of harm associated with the policies they pursue, even when those precautions are inefficient by the standards of CBA. This is not to say that agencies must always adopt every possible precaution. It is, however, to say that when policies pose risks, precautions are typically necessary, and the limits on those precautions need to consider justifiability to each person, rather than expected net outcomes.19 Of course, there might be extreme cases, where no precautions could be taken against the risks associated with some course-of-action which would definitely benefit some to a great degree (much as there might be cases where policies which will definitely harm some are the only way in which to help many suffering others). In these cases, we must decide whether sometimes imposing significant harms or risks of harm may be justifiable to reduce the serious harms already suffered by others. However, few real-life cases are this stark. Rather, in many cases, we have a range of options, including going ahead with policy but taking precautions.
It is in this range of cases, I suggest, that the ethical appeal of the precautionary principle is best understood. My arguments help us to understand two distinctive, and puzzling, features of precautionary policy-making. First, I have shown how we might justify focusing more attention on the costs and risks of various policies, rather than on the overall balance of outcomes, without appeal to a problematic doing/allowing distinction. Second, I have shown how and why precautionary concerns only apply in cases of serious damage by showing how equity considerations may vary between cases where courses-of-action cause (or risk) minor damage and cases where they cause (or risk) serious or irreversible damage. In the first case, the long-term benefits of a system allowing such damage might be to the benefit of those harmed, whereas in the second case, the relevant harms are uncompensatable. Two important clarifications are in order. First, although I hope to have shown how a relational account of environmental justice generates results which seem in accord with both statements and applications of the precautionary principle, I have not specified exactly what kinds or levels of precaution are necessary with regard to which degrees of risk of which forms of harm. There is good reason for this. As Pogge's account of public health ethics suggests, the demands of equity may be extremely complex (to include, for example, concerns for historical injustice). As such, they are unlikely to be captured by any
18 Cranor 2007 and Lenman 2008 both discuss how Scanlon's account of equity might apply in the context of imposing risks of harm (including environmental harm). My arguments differ from Cranor's in that they do not start from claims about how those who suffer from imposed risks perceive those risks. My focus differs from Lenman's, because I am concerned specifically with institutional actors whom I assume have special responsibilities to promote welfare.
19 These claims can be seen as fleshing out Lenman's principle that 'in imposing risks on a population of people, I should act in a manner consistent with my being guided by the aim of being able to satisfy each member of that population that I acted in ways supported by reasons consistent in principle with the exercise of reasonable precaution against their coming to harm' (Lenman 2008, 111).


algorithmic formula. Therefore, as I understand it, the precautionary principle should not be understood as itself a decision procedure, but more as an aide-mémoire, which reminds policy-makers of some of the key demands of equity. If so, then it may be a mistake to ask how the precautionary principle should guide particular decisions, or how we can check whether policy-makers have applied the precautionary principle.20 Rather, what really matters is that environmental policy-making expresses equitable attitudes: the precautionary principle is useful insofar as it reminds policy-makers of that multi-faceted demand. Second, in discussing precautionary policy-making, I have focused on cases where agencies must decide whether or not to allow certain sorts of activities to go ahead or continue. The precautionary principle is often invoked in cases where agencies must decide how to respond to some exogenous threat with low or uncertain probability but potentially high impact (for example, a terrorist attack or meteor strike).21 Nothing I have said helps us to understand decision-making in such cases. I accept this may seem a limitation on my arguments. However, the idea that there is a single principle which tells us how to deal with all risks of serious or irreversible damage is, I think, a mistake. At least, demands for such a principle seem in tension with the concerns of a relational account of justice, according to which normative attention should not focus solely on securing certain sets of outcomes, but on the attitudes expressed in policy-making.

3.2 Precaution in an Epistemically Murky World

Our world is epistemically murky.
Our knowledge of the existence and magnitude of risks associated with actions is fallible.22 Thus far, my arguments have not depended on knowing precisely who will suffer the burdens and benefits associated with policies: my claim is not that policy must be guided by the actual complaints of those we help or harm, but that it must be based on considering what sorts of principles could be justified to reasonable agents.23 Furthermore, thus far I have stressed that the precise probability of harm associated with some course-of-action is irrelevant to the claim that we should, in general, regulate that course-of-action. However, it is clear that the degree of probability will, even on a relational account, often be relevant to what precautions we should take. Furthermore, even if acting equitably does not require knowing the actual views of those who will be benefited and burdened by a course-of-action, it requires knowing whether a course-of-action is the kind of course-of-action which has certain sorts of benefits and risks. Therefore, we need to know how regulatory agencies should decide on whether courses-of-action do pose risks of harm, and, if so, how great those risks are. In Section 1, I suggested that we can understand one motivation behind the precautionary principle as a concern that standard approaches to statistical testing, including attempts to establish claims about risks, are likely to generate false negatives.
20 On my reading, then, such questions as whether the precautionary principle is itself justiciable (as discussed, for example, in Fisher 2001) would be misguided. What matters is that regulatory agencies act equitably; this might occur without explicit appeal to the precautionary principle; and appeal to the precautionary principle might not be sufficient for equitable treatment in some cases.
21 See, for example, Wiener and Stern 2006, for this way of understanding the principle in a discussion of the threat of terrorism.
22 In Hansson's terminology, we are faced not just with endodoxastic uncertainty (uncertainty over the outcomes of actions) but metadoxastic uncertainty (uncertainty over the correctness of our endodoxastic assessments) (Hansson 2006, 233–234).
23 Although, of course, actual consultation may be necessary in many cases for all sorts of reasons not discussed in this paper.


In turn, given that we often (erroneously) treat absence of evidence as evidence of absence, basing policy only on those claims we have established with scientific certainty might lead us to adopt unacceptably risky policies. I claimed, then, that proponents of the precautionary principle might be understood as suggesting that regulatory science (i.e. scientific research intended to guide policy-decisions) should employ different epistemic standards from those employed in normal science. How might we understand these issues within my proposed relational framework? We can recast this problem in terms of attitudes: agencies which base policy only on those claims which have been established with full scientific certainty can be said to express an attitude which values epistemic purity over reducing risk to human agents. It may not be immediately clear that such an attitude should be thought of as inequitable: basing policy only on claims established with scientific certainty does not disadvantage certain groups more than others. What might be problematic, however, in insisting that regulatory science be held to the same epistemic standards as normal science is that such a policy might implicitly make a value judgment that epistemic purity matters more than reducing risks. In turn, this value judgment might not be shared by those whose lives are shaped by regulatory (in)action, and, therefore, the burdens individuals suffer may be determined through processes to which they have reasonable objections. My question, then, is whether we can justify a commitment to acting only on claims which have been established with full scientific certainty to those who might be exposed to unnecessary risk as a result of our policies. If not, then we have good reason to think that policy-makers should consider risks even when those risks have not been established with full scientific certainty, but only to some lesser degree of certainty. 
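This worry about false negatives can be made concrete with a toy calculation (all numbers are invented for illustration, and the statistical gloss is mine rather than the paper's): a conventional 5% significance test, applied to a modest study, may have only a middling chance of "establishing" a genuinely elevated risk, so a regulator who waits for full establishment will often treat a real hazard as absent.

```python
# Toy illustration (invented rates): why demanding that a risk be
# "scientifically established" (here, at a conventional 5% significance
# level) can generate false negatives about a genuinely elevated risk.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

n = 200           # exposed individuals studied (hypothetical)
baseline = 0.01   # background rate of the harm
true_rate = 0.03  # actual rate among the exposed: the risk is real

# Smallest case-count that would let the study reject "no extra risk"
# at the 5% level, i.e. count as establishing the risk.
k_crit = next(k for k in range(n + 1) if binom_sf(k, n, baseline) <= 0.05)

# Power: the chance the study actually reaches that threshold,
# given that the elevated risk really exists.
power = binom_sf(k_crit, n, true_rate)

print(k_crit)  # cases needed before the risk counts as "established"
print(power)   # well below 1: the study will often miss the genuine risk
```

On these (hypothetical) figures the study misses the real risk a substantial fraction of the time, which is the sense in which treating absence of established evidence as evidence of absence builds a systematic bias into regulation.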
One answer here would be to say that individuals have some interest in policy being based only on those claims which are extremely likely to be true. However, consider an everyday case: imagine someone recovering from pneumonia who takes an umbrella to work even when the sky is clear. It seems that he is acting as if he believes it might rain, even though he lacks certainty. In everyday contexts, then, part of what is involved in adopting a precautionary attitude is to act as if one believes certain claims as a way of guarding against the possibilities which follow if those claims are true. Therefore, to say that individuals have some interest in policy-making being guided only by those claims established with scientific certainty, rather than to a lower degree of certainty, misrepresents a familiar feature of everyday life (Sandin 2007). This is not to say that it is always reasonable to (act as if we) believe that there is a possibility that we will suffer some harm. For example, the choice to take an umbrella to work would be ridiculous if we lived in the desert. However, the degree of certainty at which it becomes reasonable to act as if there is a possibility of rain need not be identical with the degree of certainty at which the scientist would claim that there is a possibility of rain. It is not as if we have a stark choice between requiring scientific certainty for all of our beliefs and an epistemic policy where anything goes. The context of environmental policy making is, of course, far more complex than the everyday context. However, it would be disingenuous to say that a refusal to base policies on claims which have not been established with full scientific certainty is justifiable to individuals because it reflects their own value judgments. A second defence of reliance on claims established with full scientific certainty would appeal to practical considerations. 
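The umbrella case can be given a simple decision-theoretic gloss (the figures and the expected-cost framing are my own, offered only as one way of making the gap precise): the credence at which it becomes reasonable to act as if it will rain can sit far below any threshold at which a scientist would assert that it will.

```python
# Toy illustration (invented numbers): the credence at which it is
# reasonable to ACT on a possibility can be much lower than the credence
# at which one would ASSERT the corresponding claim.

ASSERTION_THRESHOLD = 0.95  # hypothetical credence needed to assert "it will rain"

def should_take_umbrella(credence_rain, cost_umbrella=1, cost_soaked=20):
    # Expected-cost comparison: carry the umbrella iff its fixed cost is
    # less than the expected cost of getting soaked.
    return cost_umbrella < credence_rain * cost_soaked

credence = 0.10  # a 10% chance of rain, far below scientific certainty

print(credence >= ASSERTION_THRESHOLD)  # False: no one would assert "it will rain"
print(should_take_umbrella(credence))   # True: 0.10 * 20 = 2 > 1, so act anyway
```

For someone recovering from pneumonia, `cost_soaked` is higher still, so the action threshold drops even further below the assertion threshold, which is exactly the everyday pattern the paragraph describes.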
One claim against use of the precautionary principle in policy contexts is that its application would lead to contradictory recommendations. It might seem that the epistemically wanton character of the principle generates this result. Therefore, we might seek to justify reliance only on claims established with full scientific certainty by saying that only a system which adopts such a standard is practical. Clearly, choice of some principle (in this case, an epistemic principle demanding a high level of


certainty for fact-claims in policy) can be justified if the alternative principles would be impracticable. However, this defence is confused. Remember, according to the arguments of Section 3.1, precautionary decision-making does not tell us that when our policies pose some risk of serious or irreversible damage, then we should never go ahead with those policies. Rather, as I have claimed, the insight captured by the principle is that going ahead with policies which pose risks of serious damage is acceptable only when we take precautions against those risks. I can see no reason to think that such an approach to policy-making (intended for an epistemically transparent world) is necessarily self-contradictory. However, if an approach intended to help us decide how to think about risky courses-of-action is not self-contradictory, then there is no reason why varying the epistemic standards by which we establish risk-claims must make that approach self-contradictory. Of course, adopting a lower standard-of-proof for generating fact claims might require us to guard against illusory threats. However, waste is not the same as paralysis. In this section I have suggested that a regulatory agency which countenances as real possibilities only those claims (including probabilistic claims) which have been established with full scientific certainty might fail to express an equitable attitude. I have not argued that such procedures are necessarily inequitable. Perhaps in this arena, there will be reasonable disagreement over which principles best express equity concerns. If so, there might be good reason to appeal to further considerations (say, the problems of establishing a robust institutional framework for regulatory science) which might suggest good reason to treat full scientific certainty as the gold standard for admitting claims about risks into policy-making. This is, I admit, a result which is unlikely to impress many defenders of the precautionary principle.
However, let me finish by noting three appealing features of my approach to understanding the 'lack of full scientific certainty' clause. First, my arguments do not imply the implausible claim that adherence to a high standard-of-proof is always problematic, either from an epistemic or an ethical perspective. For example, in some contexts, such as in a scientific laboratory or in a criminal trial, there might be excellent reasons to adhere to very high standards-of-proof. What I have suggested is that in certain contexts, insisting on a high standard-of-proof may be unjustifiable; this is useful as a defence against the claim that precautionary policy-making in general reflects an unwarranted mistrust of science.24 Second, there is already much debate over how precisely we should set standards of proof in the regulatory context if we deny the importance of full scientific certainty.25 What I hope to have done is to provide a non-consequentialist framework within which we can understand such debates. Third, even defenders of CBA sometimes allow that we need some kind of decision-procedure for deciding what to do in circumstances of uncertainty, cases where we cannot assign determinate probabilities to any of the possible outcomes of our action. In such cases, it is frequently claimed that a maxi-min rule should be adopted. Adoption of such a maxi-min rule is often claimed to be an application of precautionary thought. As I noted in Section 1 above, this is the kind of strategy used by Stephen Gardiner. I hope to have suggested that it is a mistake to think of the precautionary principle as simply a proposal as to how to reason under circumstances of uncertainty. Rather, even in situations of epistemic transparency, we might have reason to adopt precautionary measures. Furthermore, the very distinction between decision-making under risk and under
24 Compare Sunstein: 'a large goal of cost-benefit analysis is to increase the role of science in risk regulation' (Sunstein 2002, 108).
25 For practical suggestions along these lines, see the essays in Harremoes et al. 2002.


uncertainty is problematic. A proposal such as Gardiner's overlooks the fact that were we to adopt different epistemic standards, circumstances of uncertainty might become circumstances of risk.26
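The contrast between maxi-min reasoning and expected-value reasoning can be sketched in a toy choice (payoffs and probabilities are invented, and neither option is meant to model any real regulatory decision): maxi-min needs no probabilities and attends only to worst cases, so the moment our epistemic standards license assigning probabilities, "uncertainty" becomes "risk" and the recommendation can flip.

```python
# Toy contrast (invented payoffs and probabilities): a maxi-min rule
# versus an expected-value rule applied to the same two options.

options = {
    "allow_untested_activity": {"no_harm": 100, "serious_harm": -1000},
    "allow_with_precautions": {"no_harm": 60, "serious_harm": -50},
}

# Maxi-min: no probabilities needed; choose the option whose worst
# outcome is least bad.
maximin_choice = max(options, key=lambda o: min(options[o].values()))

# Expected value: only available once we can assign probabilities to
# states of the world, i.e. once "uncertainty" has become "risk".
probs = {"no_harm": 0.99, "serious_harm": 0.01}

def expected_value(option):
    return sum(probs[state] * payoff for state, payoff in options[option].items())

ev_choice = max(options, key=expected_value)

print(maximin_choice)  # "allow_with_precautions": worst case -50 beats -1000
print(ev_choice)       # "allow_untested_activity": EV 89 beats EV 58.9
```

The point of the sketch is not that either rule is correct, but that which rule even applies depends on whether our epistemic standards permit probability assignments, which is why the risk/uncertainty boundary is itself standard-relative.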

4 Conclusion

Kip Viscusi has written that 'within the highly charged political context of policy development, it is almost always possible to conceive of some notion of risk equity to justify even the most inefficient policy interventions' (Viscusi 2000, 845). In this paper, I have argued that Viscusi is right, but that this is no objection to talk of equity in the context of risk. Rather, I propose that environmental policies should be irrational, as judged by the standards of efficiency, and should even be based on bad science.
Acknowledgments I am extremely grateful to Katherine Angel, Jo Burch-Brown, Karsten Klint-Jensen, Tim Lewens, Serena Olsaretti, Onora O'Neill, Martin Peterson, Per Sandin and Jo Wolff for extremely useful comments on some of the arguments in this paper. I also owe a particular debt of gratitude for many discussions on this topic to Charlotte Goodburn.

References
Anderson E (1993) Value in ethics and economics. Harvard University Press, Cambridge, Mass
Copp D (1985) Morality, reason and management science: the rationale of cost benefit analysis. Soc Philos Policy 2:128–152
Copp D (1987) The justice and rationale of cost-benefit analysis. Theor Decis 23:65–87
Cranor C (1993) Regulating toxic substances. Oxford University Press, Oxford
Cranor C (2007) Towards a non-consequentialist approach to acceptable risks. In: Lewens T (ed) Risk: philosophical perspectives. Routledge, London
Dasgupta P (2001) Human well-being and the natural environment. Oxford University Press, Oxford
Dercon S (2004) Introduction. In: Dercon S (ed) Insurance against poverty. Oxford University Press, Oxford
Fisher E (2001) Is the precautionary principle justiciable? J Environ Law 13:315–334
French P (1984) Collective and corporate responsibility. Columbia University Press, New York
Gardiner S (2006) A core precautionary principle. J Polit Philos 14:33–60
Hacking I (1975) The emergence of probability. Cambridge University Press, Cambridge
Hacking I (1990) The taming of chance. Cambridge University Press, Cambridge
Hansson S-O (1998) Setting the limit: occupational health standards and the limits of science. Oxford University Press, Oxford
Hansson S-O (2003) Ethical criteria of risk acceptance. Erkenntnis 59:291–309
Hansson S-O (2006) Economic (ir)rationality in risk analysis. Econ Philos 22:231–241
Hansson S-O (2007) Philosophical problems in cost-benefit analysis. Econ Philos 23:163–183
Harremoes P et al (2002) The precautionary principle in the 20th century: late lessons from early warnings. Earthscan, London
Hourdequin M (2007) Doing, allowing, and precaution. Environ Ethics 29:339–358
Hubin D (1994) The moral justification of benefit/cost analysis. Econ Philos 10:169–194
Hughes J (2006) How not to criticise the precautionary principle. J Med Philos 31
Jasanoff S (1990) The fifth branch. Harvard University Press, London

26 Furthermore, Gardiner claims that identifying realistic threats under uncertainty will employ 'thick' concepts. However, this implies that reliance on scientific testing assumes only 'thin' concepts. The problem raised above is that we treat the norms of standard scientific testing as if they were thin, when they reflect a value judgment.


John SD (2007) How to take deontological concerns seriously in risk-cost-benefit analysis: a re-interpretation of the precautionary principle. J Med Ethics 33:221–224
Korsgaard C (1996) The sources of normativity. Cambridge University Press, Cambridge
Lenman J (2000) Preferences in their place. Environ Values 9:431–451
Lenman J (2008) Contractualism and risk imposition. Polit Philos Econ 7(1):99–122
Manson N (2002) Formulating the precautionary principle. Environ Ethics 24:263–274
Mellor DH (2005) Probability: a philosophical introduction. Routledge and Kegan Paul, London
O'Neill J (1993) Ecology, policy and politics: human well-being and the natural world. Routledge, New York-London
O'Neill O (1996) Towards justice and virtue. Cambridge University Press, Cambridge
O'Neill O (2002) Autonomy and trust in bioethics. Cambridge University Press, Cambridge
O'Riordan T, Cameron J (eds) (1994) Interpreting the precautionary principle. Cameron May, London
Pogge T (2004) Relational conceptions of justice. In: Anand S, Peter F, Sen AK (eds) Public health, ethics and equity. Oxford University Press, Oxford, pp 135–162
Raffensberger C, Tickner J (1999) Introduction: to foresee and forestall. In: Raffensberger C, Tickner J (eds) Protecting public health and the environment: implementing the precautionary principle. Island, Washington, DC, pp 1–11
Sandin P (2007) Common sense precaution and varieties of the precautionary principle. In: Lewens T (ed) Risk: philosophical perspectives. Routledge, London
Sandin P et al (2002) Five charges against the precautionary principle. J Risk Res 5:287–299
Scanlon T (1998) What we owe to each other. Harvard University Press, Cambridge, MA
Schmidtz D (2001) A place for cost-benefit analysis. Philos Issues 11:148–171
Shrader-Frechette K (1995) Practical ecology and foundations for environmental ethics. J Philos 92(12):621–635
Sunstein C (2002) Risk and reason. Cambridge University Press, Cambridge
Sunstein C (2003) Beyond the precautionary principle. Univ Pa Law Rev 151:1003–1058
Sunstein C (2005) Laws of fear. Cambridge University Press, Cambridge
Sunstein C (2007) Moral heuristics and risk. In: Lewens T (ed) Risk: philosophical perspectives. Routledge, London
Thompson A (2006) Environmentalism, moral responsibility, and the doctrine of doing and allowing. Ethics Place Environ 9(3):269–278
Trouwborst A (2002) Evolution and status of the precautionary principle in international law. Kluwer Law International, The Hague
United Nations General Assembly (2002) Rio Declaration on Environment and Development. Report of the United Nations Conference on Environment and Development, Rio de Janeiro, 3–14 June 1992. A/CONF.151/26, Vol I. United Nations, New York
Viscusi K (2000) Risk equity. J Legal Stud 29(2):843–872
Wiener J, Stern J (2006) Precaution against terrorism. J Risk Res 9:393–447
Wildavsky A (1997) But is it true? Harvard University Press, Cambridge, Mass
Wingspread Statement (1998) The precautionary principle. Rachel's Environment and Health Weekly 586, February 19, 1998. Accessed via http://www.psrast.org/precaut.htm (June 20th 2008)
