

Evidence-Based Policymaking: A Critique

Trisha Greenhalgh and Jill Russell

ABSTRACT The idea that policy should be based on best research evidence might
appear to be self-evident. But a closer analysis reveals a number of problems and para-
doxes inherent in the concept of “evidence-based policymaking.” The current conflict
over evidence-based policymaking parallels a long-standing “paradigm war” in social
research between positivist, interpretivist, and critical approaches. This article draws
from this debate in order to inform the discussions over the appropriateness of evi-
dence-based policymaking and the related question of what is the nature of policy-
making. The positivist, empiricist worldview that underpins the theory and practice of
evidence-based medicine (EBM) fails to address key elements of the policymaking
process. In particular, a narrowly “evidence-based” framing of policymaking is inher-
ently unable to explore the complex, context-dependent, and value-laden way in
which competing options are negotiated by individuals and interest groups. Sociolin-
guistic tools such as argumentation theory offer opportunities for developing richer
theories about how policymaking happens. Such tools also have potential practical
application in the policymaking process: by enhancing participants’ awareness of their
own values and those of others, the quality of the collective deliberation that lies at the
heart of policymaking may itself improve.

Just as no one would argue that clinicians should practice medicine without regard to evidence,
so it would seem an incontestable, self-evident proposition that policymakers should base their
decisions on evidence. . . . Once we move away from platitudinous generalizations and start
unpacking the notion of EBP—modeled on EBM—it turns out to be highly contestable and . . .
— R. Klein (2000)

Research Department of Primary Care and Population Health, University College London.
Correspondence: Trisha Greenhalgh, Research Department of Primary Care and Population
Health, University College London, 206 Holborn Union Building, Highgate Hill, London N19 5LW,
United Kingdom.
E-mail: p.greenhalgh@pcps.ucl.ac.uk.

Perspectives in Biology and Medicine, volume 52, number 2 (spring 2009): 304–18

© 2009 by The Johns Hopkins University Press

Evidence-based policymaking is also referred to as the “problem solving”
model of policymaking, “instrumental rationality,” and “technical rationality”
(Bacchi 2000; Elliott and Popay 2000; Lindblom 1959). In this model, a policy
problem is defined and research evidence used to fill an identified knowledge
gap, thereby solving the problem. The relation between the research evidence
and the resulting policy is assumed to be essentially linear and direct (Black
2001). The making of the policy decision is seen as a technical, logical process
comprising the selection, synthesis, and critical evaluation of best research evi-
dence, from which the obvious (or at least, a preferred) answer to a particular
policy problem will emerge and can then be implemented (Miller 1990).
Rudolf Klein is an octogenarian academic who has, over the years, displayed
an uncanny knack of predicting when a seemingly inspirational idea in social
policy is about to fall on its nose. Britain’s Labour government had swept to
power in 1997 on a manifesto based on the “New Public Management”—in
other words, that every problem in society has an evidence-based solution that
should be identified and driven into policy (Parsons 2002). After three years of
a government bent on “finding out what works and implementing it,” Klein’s
was a rare voice pointing out the intellectual fault lines in such an approach.To
understand Klein’s grievance with evidence-based policymaking, we turn to the
research paradigm debates in social science research. We will argue that evidence-based
policymaking holds on to positivist assumptions, and that serious flaws can
be exposed by critical alternatives to positivism (interpretivism and critical re-
search). Furthermore, each of these more critical approaches offers alternative
accounts of how evidence influences policy.

Paradigms in Policy Research

The study of health-care policy is an interdisciplinary (and philosophically con-
tested) field of study with diverse roots in political science, health services re-
search, clinical epidemiology, and organization and management. Researchers in
these different fields tend to align themselves in relation to a long-standing “par-
adigm war” in social research between positivism and alternative philosophical
stances, each of which leads to different conceptual and methodological perspec-
tives on what policymaking is and the legitimate ways to build the knowledge
base about it (Kuhn 1962).
Any approach to social research carries assumptions about the nature of social
reality (ontology), the nature of knowledge (epistemology), and the purpose of
research and its link with the social world. For the purposes of considering


research on the social world (of which policymaking is an example), we can
divide the different approaches into three broad schools: positivist, interpretivist
(also known as hermeneutic, though philosophers recognize subtle differences
between these), and critical (Orlikowski and Baroudi 1991).
Positivist research is the dominant approach in the scientific study of the nat-
ural world. Such studies place high value on experiment and observation and are
characterized by formal hypotheses, careful measurement, and drawing of inferences
about a phenomenon from a sample to a stated population. The positivist
researcher assumes that understanding social phenomena is primarily a problem
of objective measurement, and that there are best methods of acquiring this
knowledge (for example, the “hierarchy of evidence” used in evidence-based
medicine; Guyatt et al. 1995). Theory is rarely center stage in positivist research,
which views method—particularly the controlled experiment, the randomized
trial, and the “standardized” and “validated” questionnaire—as the cornerstone of
quality. It is noteworthy, for example, that the well-established quality criteria for
clinical trials (for example, the CONSORT statement for randomized trials;
Moher, Schulz, and Altman 2001) do not include any requirement for a plausi-
ble and coherent mechanism of action, though others have challenged this defi-
ciency (Hawe, Shiell, and Riley 2004).
Interpretivist researchers assume that social reality is produced and reproduced
through the actions and interactions of people. Hence, social reality can never be
objectively known or unproblematically studied; it can only be explained in con-
text by getting inside the world of key actors. Explanations of events are “causal”
in an interpretive sense (“X may help explain Y”), but not in a linear, predictive
sense (“X led to Y”). In this approach, preferred research methods are naturalis-
tic (that is, occurring in the real world and using real experiences and talk as the
key data sources), and quality is defined in terms of plausibility of explanation
rather than in terms of objective “proof.” For the interpretivist, research can
never be value-neutral, as the researcher is always implicated in the phenomena
being studied (and must demonstrate a reflexive awareness of how his or her
background, theories, and assumptions might have influenced the collection and
analysis of data).
Whereas positivism predicts the status quo and interpretivism explains the sta-
tus quo from the perspective of the actors, critical research seeks to reveal the in-
herent contradictions and conflicts in the status quo—and to give social actors
the tools to transform it. It assumes that the social world can only be understood
historically, through an analysis of what it has been and what it is becoming. Be-
cause of this commitment to a processual (emerging over time) view of phe-
nomena, critical research studies tend to be longitudinal and to focus on the
material and social conditions of domination and oppression. Again, the re-
searcher is seen as subjective and part of the research, rather than objective and
distanced from it. Twenty years ago, “interpretive” and “critical” research studies


were readily distinguishable from one another (the latter, for example, often took
a Marxist or feminist worldview and were published in “alternative” journals),
but the boundaries between these perspectives have blurred considerably over
the last few years. Note that classifying a study as “critical” in the philosophical
sense (in other words, as searching for hidden meanings and power struggles)
does not mean that the authors are necessarily “critical thinkers” (that is, com-
petent scholars). Many positivist studies are highly critical in the latter sense, and
some philosophically “critical” studies represent poor scholarship.

The Evidence for Policymaking

Limitations of “Evidence-Based” Policymaking

Where does all this leave evidence-based policymaking? The answer is that, at
least as expressed in its most idealistic form, the term is bound up in the assump-
tions of positivism—or, as Feyerabend (1999) prefers to call it, of naïve rational-
ism. Positivist thinking appears to underlie and justify many policy documents
and manifestos. For example, in a speech to the Economic and Social Research
Council in 2000, the U.K. Home Secretary David Blunkett stated that: “This
Government has given a clear commitment that we will be guided not by dogma
but by an open-minded approach to understanding what works and why. This is
central to our agenda for modernizing government: using information and
knowledge much more effectively and creatively at the heart of policy-making
and policy delivery” (p. 2). Blunkett emphasized the preference of his New
Labour government for quantitative studies over qualitative, for prediction over
understanding of mechanism, and for interventions with a known “effect size.”
This reflected a more general set of assumptions by the Blair government that
“policy decisions should be based on sound evidence. The raw ingredient of evidence
is information. Good quality policy making depends on high quality
information, derived from a variety of sources—expert knowledge; existing
domestic and international research; existing statistics; stakeholder consultation;
evaluation of previous policies” (Modernising Government White Paper, Cab-
inet Office 1999, p. 31; cited in Wells 2007).1
Evidence-based policymaking assumes that the ethical and moral issues faced
by policymakers can be reduced to questions of “best evidence,” and that what
is actually going on in the world can be equated with what the chosen metrics
indicate is going on. It also assumes that empirical research, especially on “the
impact of intervention X on outcome Y,” will provide the answer to most if not
all policy questions; that if we do enough research, we will abolish situations in

1 For additional examples of the naïve rationalist perspective on the policymaking process, see de-
tailed critiques by Sanderson (2003) and Wells (2007).


Table 1

1. Policy problems may be intractable or not clearly enough delineated to be amenable to empirical research.
2. Financial constraints may make evidence-based recommendations unaffordable.
3. Research evidence may be ambiguous because it contains irreducible uncertainties.
4. Research evidence may be irredeemably value-laden (for example, when health inequalities are studied in terms of individual-level risk factors rather than the impact of redistributive fiscal measures, an inherent value judgment has been made about who should take responsibility for these inequalities).
5. Research evidence may be more or less applicable to a particular local context (for example, technical data on cost-effectiveness may be inadequate for a locally implementable decision).
6. Other types of evidence (experience, personal testimony, local information, colleagues’ opinions) may compete with research evidence.
7. Research evidence deficiencies may be related to the framing and underlying assumptions of the research question.
8. The policymaking process may be diffuse, iterative, and even haphazard, so that “decisions” can only be identified in retrospect, if at all.
9. Policy decisions may be taken for reasons other than evidence of effectiveness.
10. Policymaking time scales may be out of step with those of generating or locating research evidence.

Source: Based on empirical studies and reviews of the policymaking process in health care; for detailed references, see Russell et al. 2008.

which the available evidence is irrelevant, ambiguous, uncertain, or conflicting;
that evidence from research is value-free and context-neutral; and that such evidence
is of greater value than evidence from personal experience or opinion.
Methodologically, evidence-based policymaking assumes that deficiencies in
research evidence are largely due to flaws in the design or execution of the re-
search study; that the policymaking process comprises a series of technical steps
(ask focused question → search for evidence → appraise evidence → implement
evidence at policy level); and that policy decisions can be studied as discrete
events, bounded by time. Finally, on a practical level, evidence-based policymak-
ing assumes that the research evidence, if reliable and complete, will determine
a largely unproblematic course of action (see Table 1).
All these assumptions have been shown to be questionable. Numerous empir-
ical studies of the policymaking process, summarized in Table 2, have demon-
strated that in practice, the ethical and moral questions inherent to the policy-
making process cannot be reduced to issues of evidence; that deficiencies in
research evidence are not generally resolvable by undertaking more or bigger
studies; that the policymaking process does not consist of a series of technical
“stages”; that the evidence considered in policymaking goes far beyond conven-
tional research evidence; and that policy decisions do not usually occur as clearly
defined “decision points.” The reality of policymaking is messier, more haphaz-
ard, and constrained by practicalities such as time and budget.

Table 2

Iteration: dialogical model (Elliott and Popay 2000). Philosophical basis: interpretivist (Weiss’s theory of policy as enlightenment; Weiss 1977).
• Research evidence is one of several knowledge sources which policymakers draw on in an iterative process of decision making. Other sources include their own experience, the media, politicians, colleagues, and practitioners.
• The influence of research on policymaking is diffuse, providing fresh perspectives and concepts as well as data.
• Social knowledge is jointly constructed from the interactions between researchers and others.

Collective understanding (Gabbay et al. 2003). Philosophical basis: interpretivist (Wenger’s communities of practice; Wenger 1996).
• The acquisition, negotiation, adoption, construction, and use of knowledge in decision making is unpredictably contingent on group processes.
• The types of knowledge drawn upon include experiential, contextual, organizational, and practical, as well as empirical or theoretical.
• Knowledge is shaped by personal, professional, and political agendas and is transformed and integrated into a group’s collective understanding.
• Groups of policymakers engage in dynamic processes of sense-making in order to negotiate meaning and understanding and are influenced by roles, networks, and knowledge resources both within and outside the group.

Enactment of knowledge (Dopson and Fitzgerald 2005). Philosophical basis: critical-interpretivist (Polanyi’s personal knowledge; Polanyi 1962; Wenger 1996). Use of evidence depends on a set of social processes, such as:
• Sensing, interpreting, and integrating new evidence with existing (including tacit) evidence;
• Relating new evidence to the needs of the local context;
• Reinforcement or marginalization by professional networks and communities of practice;
• Discussing and debating evidence with local stakeholders;
• Taking joint decisions about its enactment.

Becoming: immanent model (Wood, Ferlie, and Fitzgerald 1998). Philosophical basis: critical (post-structuralism; Derrida 1978).
• Policy change is best conceptualized as movement within indeterminate or ambiguous relationships.
• Differentiating between research and practice is of limited utility, as the boundary between them is always indeterminate.
• Phenomena such as “knowledge,” “evidence,” and “practice” are not natural or necessarily distinct, but are constituted through local and contingent practices and through the different interests of the actors involved.
• There is no such entity as “the body of evidence”: there are simply competing (re)constructions of evidence able to support almost any position.

These realities do not, of course, negate the hierarchy of evidence or the need
for adequately funded, well-designed research studies. Yes, we need robust epi-
demiological and clinical trial evidence to inform policy. But no, this evidence
will not, in and of itself, tell us what the right policy is for any particular situation.
Political theorists have also questioned the desirability of “evidence-based pol-
icy.” The very idea of evidence-based policy unduly elevates the role that science
can ever play in solving sociopolitical problems. Schwandt (2000), for example,
has argued that “as we increasingly look to science for guidance in overcoming
the quotidian problems of social life, there emerges the expectation of the mas-
tery of society by scientific reason” (p. 225).
The overriding emphasis in evidence-based policy on “what works” arguably
eclipses equally important questions about desirable ends and appropriate means.
What matters is not merely what works, but what is appropriate in the circumstances,
and what is agreed to be the overall desirable goal (Sanderson 2003). The
problem, as critics of the evidence-based policy movement see it, is that politi-
cal problems are turned into technical ones, with the concomitant danger that
political programs are disguised as science (Saarni and Gylling 2004).
Should we spend limited public funds on providing state-of-the-art neonatal
intensive-care facilities for very premature infants? Or providing “Sure Start”
programs for the children of teenage single mothers? Or funding in vitro fertil-
ization for lesbian couples? Or introducing a “traffic light” system of food label-
ing, so that even those with low health literacy can spot when a product con-
tains too much fat and not enough fiber? Or ensuring that any limited English
speaker is provided with a professional interpreter for health-care encounters?
Of course, all these questions require “evidence”—but an answer to the question
“What should we do?” will never be plucked cleanly from massed files of scien-
tific evidence. Whose likely benefit is worth whose potential loss? These are
questions about society’s values, not about science’s undiscovered secrets.
Hammersley (2001) has argued that the dominant culture of evidence-based
policy devalues democratic debate about the ethical and moral issues faced in
policy choices, and erodes practitioners’ confidence in their ability to make
judgments by marginalizing professional experience and tacit knowledge. The
application of scientific method to contemporary life has led to the deformation
of what Aristotle called praxis (practical wisdom or, in contemporary terms,
embodied knowledge): “the ailment is
the growing inability to engage in decision making according to one’s own re-
sponsibility as we continue to concede that task to experts in all social institu-
tions” (Schwandt 2000, p. 225).
Interpretivist and Critical Perspectives on Policymaking

Table 2 shows a number of alternative framings of what policymaking is. In
contrast to “policymaking as getting [research] evidence into practice” (positivist
framing), other authors have encouraged us to consider “policymaking as iteration,”
“policymaking as developing collective understanding,” “policymaking as
enactment of knowledge,” and “policymaking as becoming” (interpretivist
and/or critical framings).
All these approaches (which have more in common with one another than
any of them has in common with the naïvely evidence-based approach) assume
a much more diffuse and indirect influence of evidence on policy. All the inter-
pretivist and critical perspectives shown in Table 2 consider evidence within the
context of dynamic patterns of interaction, adaptation, and sense-making among
policymakers; to a greater or lesser extent, they also offer a critical analysis of the
political, social, and economic conditions that gave rise to particular policy prob-
lems. Whereas naïve rationalists dismiss “political context” as a troublesome side
issue, political scientists typically see policy problems as constructed through the
varied perceptions and social interpretations of the political actors involved
(Shaw n.d.). For them, policymaking is essentially a process of incremental deci-
sion making or “muddling through,” involving negotiation across these multiple
perspectives (Lindblom 1959). Policy and politics are intertwined with “solu-
tions” flowing from the different kinds of problem definition that are produced
(Bacchi 2000; Bonell 2002; Kingdon 1995).
Back in 1993, Rudolf Klein was warning policymakers against seeking a
“technical fix” to the contentious problem of priority-setting in health care.
Later, as EBM became a social movement offering precisely this technical fix
(Pope 2003), Klein wrote in support of debate and deliberation:

Given conflicting values, the process of setting priorities for health care must
inevitably be a process of debate. It is a debate, moreover, which cannot be
resolved by an appeal to science and where the search for some formula or set
of principles designed to provide decision-making rules will always prove elusive.
Hence the crucial importance of getting the institutional setting of the debate
right . . . the right process will produce socially acceptable answers—and this is
the best we can hope for. (Klein and Williams 2000, 20–21)

A critical reading of this debate suggests that setting priorities for health care is
a discursive process (that is, it involves argument and debate). The policy-as-discourse
perspective embraces a number of approaches that are centrally concerned
with how policy problems are represented. Policymakers are not simply
responding to “problems” that exist in the community, they are actively framing
problems and thereby shaping what can be thought about and acted upon.
According to Stone (1988): “The essence of policymaking in political com-
munities [is] the struggle over ideas. Ideas are at the centre of all political con-
flict. . . . Each idea is an argument, or more accurately, a collection of arguments
in favour of different ways of seeing the world” (p. 11). Within this conceptualization
of policymaking, the understanding of “what evidence is” takes on a very


different meaning. Evidence can no longer be considered as abstract, disembodied
knowledge separate from its social context:

There is no such entity as “the body of evidence.” There are simply (more or less)
competing (re)constructions of evidence able to support almost any position.
Much of what is called evidence is, in fact, a contested domain, constituted in
the debates and controversies of opposing viewpoints in search of ever more
compelling arguments. (Wood, Ferlie, and Fitzgerald 1998, p. 1735)

A number of empirical studies of health policy as discourse have been undertaken,
though in general, these are not well understood or widely cited in mainstream
health services research. Steve Maguire (2002), for example, describes a
longitudinal case study of the development and introduction of drugs for the
treatment of AIDS in the United States from 1981 to 1994. Detailed analysis of
extensive field notes and narrative interviews with people with AIDS, activists,
researchers, industry executives, and policymakers led his team to challenge three
assumptions in the evidence-into-policy literature: (1) that there is a clear dis-
tinction between the “evidence producing” system and the “evidence adopting”
system; (2) that the structure and operation of these systems are given, stable, and
determinant of, rather than indeterminate and affected by, the adoption process;
and (3) that the production of evidence precedes its adoption. Maguire’s study
found the opposite: that there was a fluid, dynamic, and reciprocal relationship
between the different systems involved, and that activists “successfully opened up
the black box of science” via a vibrant social movement which, over the course
of the study, profoundly influenced the research agenda and the process and
speed of gaining official approval for new drugs. For example, whereas the sci-
entific community had traditionally set the gold standard as placebo controlled
trials with hard outcome measures (such as death), the AIDS activists successfully
persuaded them that placebo arms and “body count” trials were unethical in
AIDS research, spurring a shift towards what is now standard practice in drug
research—a new drug is compared with best conventional treatment, not placebo,
and “surrogate outcomes” are generally preferred when researching potentially
lethal conditions. The role of key individuals in reframing the issue (“hard
outcomes” or “body counts”) was crucial in determining what counted as best
evidence and how this evidence was used in policymaking.
Importantly, Maguire’s fieldwork showed that AIDS activists did not simply
“talk their way in” to key decision-making circles by some claim to an inherent
version of what was true or right. Rather, they captured, and skillfully built
upon, existing discourses within society, such as the emerging patients’ rights
movement and the epistemological debates already being held within the aca-
demic community that questioned the value of “clean” research trials (which
only included “typical” and “compliant” patients without co-morbidity). They
also collaborated strategically with a range of other stakeholders to achieve a


common goal (“strange bedfellows . . . pharmaceutical companies along with the
libertarian, conservative right wing allied themselves with people with AIDS and
gays”; p. 85). Once key individuals in the AIDS movement had established themselves
as credible with press, public, and scientists, they could exploit this credibility
powerfully: “their public comments on which trials made sense or which
medications were promising could sink research projects” (p. 85).

“Fair” Policymaking: A Process of Argumentation

In summary, interpretivist and critical research on the nature of policymaking
shows that it involves, in addition to the identification, evaluation, and use of re-
search evidence, a complex process of framing, deliberation, negotiation, and col-
lective judgment. Empirical research studies also suggest that this is a sophisti-
cated and challenging process. In a qualitative research study of priority-setting
committees in Ontario, for example, Singer and colleagues (2000) identified fac-
tors such as representation of multiple perspectives, opportunities for everyone
to express views, transparency, and an explicit appeals process as key elements of
fair decision making. An important dimension of this collective deliberation is
the selection and presentation of evidence in a way that an audience will find
credible and appealing.
If we wish to better understand the deliberative processes involved in policy-
making, and how evidence actually gets “talked into practice” (or not) at a micro
level of social interaction, then we require a theoretical framework that places
central focus on language, argumentation, and discourse. Philosophical work on
argumentation can be traced back to Aristotle’s classic Rhetoric. Aristotle classi-
fied rhetoric as a positive, scholarly activity and saw it as having three dimen-
sions: logos (the argument itself—the “evidence” in modern parlance); ethos (the
credibility of the speaker); and pathos (the appeal to emotions). Evidence-based
perspectives on health-care policymaking tend to define the last two of these as
undesirable “spin,” to be systematically expunged so that the policymaking
process can address “pure” evidence in an objective, dispassionate way. But an
extensive literature from political science suggests that the “What should we
do?” questions are addressed more effectively, not less, through processes of rhe-
torical argumentation (Fischer 2003; Majone 1989; Miller 2003; Stone 1988;
Young 2000).
Booth’s (1974) definition of rhetoric as “the art of discovering warrantable
beliefs and improving those beliefs in shared discourse” highlights the value of
rhetoric in bringing to the fore the role of human judgment in policymaking.
Rhetorical theory reminds us of the human agency involved in the use of evi-
dence in policymaking, and indeed requires us to shift from equating rationality
with EBM-type procedures to considering rationality as a situated, contingent
human construction: “The constructive activity of rationality occurs through the
discovery and articulation of good reasons for belief and action, activities that are


fundamental to deliberation. Rationality concerns a process or activity (not a
procedure) that guarantees criticism and change (not correctness)” (Miller 1990,
p. 178). Sanderson (2004) suggests that this alternative conceptualization of
rationality and focus on the deliberative processes of reason giving, argument,
and judgment has much to offer to those around the policymaking table:

we need to work within a broader conception of rationality to recognise the
validity of the range of forms of intelligence that underpin “practical wisdom,”
to acknowledge the essential role of fallible processes of craft judgement in
assembling what is to be accepted as “evidence,” and to incorporate deliberation,
debate and argumentation in relation to the ends of policy and the ethical and
moral implications of alternative courses of action. From this perspective, the
challenge faced by policy makers is seen not as a technical task of reducing
uncertainty through the application of robust, objective evidence in the pursuit
of more effective policies, but rather as a practical quest to resolve ambiguity
through the application of what John Dewey calls “creative intelligence” in the
pursuit of more appropriate policies and practice. (p. 376)

A rhetorical perspective suggests that in allocating resources to publicly
funded programs, for example, policymakers must first consider different fram-
ings of health problems (Schon and Rein 1990), then contextualize relevant evi-
dence to particular local circumstances, and finally weigh this evidence-in-con-
text against other evidence-in-context pertaining to competing policy issues. Is
fertility a “medical need” or a “social expectation”? Is childhood obesity the
result of “poor parenting” or of “obesogenic environments”? Is the withholding
of an expensive drug for macular degeneration “denying sight-saving treatment
to a defenseless pensioner” or “an attempt to allocate public resources equi-
tably”? Once problems are framed, by what process might resources be fairly al-
located between “infertility treatment,” “parenting programs,” “disincentivizing
junk food in schools,” and “drugs for macular degeneration”?
In The New Rhetoric, two contemporary philosophers have extended Aris-
totle’s theory of argumentation to include an emphasis on understanding the
audience, in particular their points of departure (in other words, where the audi-
ence is “coming from”; Perelman and Olbrechts-Tyteca 1971). If the policy
question is whether a new routine immunization should be introduced for chil-
dren, an academic who presents the relevant systematic review from the Co-
chrane database is likely to impress the other academics on the committee, but
he or she is also likely to lose ground when a lay member offers a clip from a
tabloid newspaper about an alleged vaccine-damaged baby, or when a front-line
clinician says “In 30 years I’ve never seen a case of the disease this vaccine is sup-
posed to prevent.” In such a situation, parents of young children are likely to be
particularly amenable to an argument that is framed in highly personal terms
(“Professor Greenhalgh has already had her own children immunized with this
vaccine”); finance officers may be more persuaded by one framed in terms of
cost-effectiveness (“If we achieve X% coverage we are likely to prevent Y hospital
admissions”); and nonacademic clinicians may be particularly open to an ap-
peal to the credibility of the speaker (“Professor Greenhalgh is an expert on
EBM, and we should listen to experts”). All arguments may be factually correct,
but different audiences will be more or less receptive to different framings. The
detailed analysis of the policymaking process through the lens of “the new rhet-
oric” is an exciting research technique whose principles we have reviewed in de-
tail elsewhere (Russell et al. 2008), and which we are currently applying to the
analysis of health-care resource allocation in the United Kingdom.

Expressions such as “knowledge translation” and “getting evidence into practice”
are, as Klein suggested, seductive metaphors for the policymaking process. But
they are fundamentally inaccurate, because policymaking is not about applying
objective evidence to solve problems that are “out there” waiting for solutions.
It is about constructing these problems through negotiation and deliberation, and
using judgments to “muddle through”—that is, to make context-sensitive
choices in the face of persistent uncertainty and competing values (Lindblom
1959; Parsons 2002). Policymaking in health care, as in other fields of public pol-
icy, is thus about “framing and taming ‘wicked’ problems” (Gibson 2003).
As many leading protagonists of evidence-based clinical medicine and public
health have argued, research evidence can and should inform policy judg-
ments—but this evidence does not in and of itself provide the answer to the eth-
ical question of “what to do” (and in particular, “how to allocate resources”)
(Black 2001; Davey Smith, Ebrahim, and Frankel 2001; Gabbay and le May 2004;
Giacomini et al. 2004; Lomas 2005; Mulrow and Lohr 2001). Yet despite a strong
mandate from within EBM to extend beyond naïve rationalism, technical fixes
remain the holy grail of many government departments (Syrett 2003), and some
sectors of the EBM community have gone on the defensive (Rosenstock and
Lee 2002). An exploration beyond the Medline-indexed literature reveals a
wealth of insights in the interpretivist and critical philosophical traditions, most
notably from political science, which promises to enrich and extend the analysis
of health-care policymaking, especially in relation to the application of argu-
mentation theory to its empirical study. We are both reassured and excited that
researchers in this field have begun to break free from their positivist shackles
and to embrace broader philosophical foundations.

Bacchi, C. 2000. Policy as discourse: What does it mean? Where does it get us? Discourse
21(1):45–57.
Black, N. 2001. Evidence based policy: Proceed with care. BMJ 323:275–79.
Blunkett, D. 2000. Influence or irrelevance: Can social science improve government?
Speech to the Economic and Social Research Council, Feb. 2. London: ESRC.
Bonell, C. 2002. The politics of the research-policy interface: Randomised trials and the
commissioning of HIV prevention services. Sociol Health Illn 24(4):385–408.
Booth, W. 1974. Modern dogma and the rhetoric of assent. Chicago: Univ. of Chicago Press.
Davey Smith, G., S. Ebrahim, and S. Frankel. 2001. How policy informs the evidence.
BMJ 322:184–85.
Derrida, J. 1978. Writing and difference, trans. A. Bass. London: Routledge.
Dopson, S., and L. Fitzgerald. 2005. Knowledge to action? Evidence-based health care in con-
text. Oxford: Oxford Univ. Press.
Elliott, H., and J. Popay. 2000. How are policy makers using evidence? Models of research
utilisation and local NHS policy making. J Epidemiol Community Health 54(6):461–68.
Feyerabend, P. 1999. Rationalism, relativism and scientific method. In Knowledge, science
and relativism: Philosophical papers, vol. 3, ed. J. Preston, 200–11. Cambridge: Cambridge
Univ. Press.
Fischer, F. 2003. Reframing public policy: Discursive politics and deliberative practices. Oxford:
Oxford Univ. Press.
Gabbay, J., and A. le May. 2004. Evidence-based guidelines or collectively constructed
“mindlines”? An ethnographic study of knowledge management in primary care. BMJ
329:1013.
Gabbay, J., et al. 2003. A case study of knowledge management in multi-agency consumer-
informed “communities of practice”: Implications for evidence-based policy devel-
opment in health and social services. Health 7(3):283–310.
Giacomini, M., et al. 2004. The policy analysis of “values talk”: Lessons from Canadian
health reform. Health Policy 67:15–24.
Gibson, B. 2003. Framing and taming “wicked” problems. In Evidence-based health policy,
ed. V. Lin and B. Gibson, 298–310. Oxford: Oxford Univ. Press.
Guyatt, G. H., et al. 1995. Users’ guides to the medical literature. IX. A method for grad-
ing health care recommendations. Evidence-Based Medicine Working Group. JAMA
274(22):1800–1804. Published errata appear in JAMA 275(16):1232.
Hammersley, M. 2001. Some questions about evidence-based practice in education. In
Evidence-based practice in education, ed. R. Pring and G. Thomas, 133–49. Milton
Keynes: Open Univ. Press.
Hawe, P., A. Shiell, and T. Riley. 2004. Complex interventions: How “out of control” can
a randomised controlled trial be? BMJ 328(7455):1561–63.
Kingdon, J. 1995. Agendas, alternatives and public policy. New York: HarperCollins.
Klein, R. 1993. Dimensions of rationing: Who should do what? BMJ 307:309–11.
Klein, R. 2000. From evidence-based medicine to evidence-based policy? J Health Serv
Res Policy 5:65–66.
Klein, R., and A. Williams. 2000. Setting priorities: What is holding us back—inadequate
information or inadequate institutions? In The global challenge of health care rationing,
ed. A. Coulter and C. Ham, 15–26. Buckingham: Open Univ. Press.
Kuhn, T. S. 1962. The structure of scientific revolutions. Chicago: Univ. of Chicago Press.
Lindblom, C. 1959. The science of muddling through. Public Admin Rev 19(2):79–88.
Lomas, J. 2005. Using research to inform healthcare managers’ and policymakers’ ques-
tions: From summative to interpretive synthesis. Healthc Policy 1(1):57–71.
Maguire, S. 2002. Discourse and adoption of innovations: A study of HIV/AIDS treat-
ments. Health Care Manage Rev 27(3):74–78.
Majone, G. 1989. Evidence, argument and persuasion in the policy process. New Haven: Yale
Univ. Press.
Miller, C. R. 1990. The rhetoric of decision science, or Herbert A. Simon says. In The
rhetorical turn: Invention and persuasion in the conduct of inquiry, ed. H. Simons, 162–84.
Chicago: Univ. of Chicago Press.
Miller, C. R. 2003. The presumptions of expertise: The role of ethos in risk analysis. Con-
figurations 11:163–202.
Moher, D., K. F. Schulz, and D. G. Altman. 2001. The CONSORT statement: Revised
recommendations for improving the quality of reports of parallel-group randomised
trials. Lancet 357(9263):1191–94.
Mulrow, C. D., and K. N. Lohr. 2001. Proof and policy from medical research evidence.
J Health Politics Policy Law 26(2):249–66.
Orlikowski, W. J., and J. J. Baroudi. 1991. Studying information technology in organiza-
tions: Research approaches and assumptions. Info Syst Res 2(1):1–28.
Parsons, W. 2002. From muddling through to muddling up: Evidence based policy-mak-
ing and the modernisation of British government. Public Policy Admin 17:43–60.
Perelman, C., and L. Olbrechts-Tyteca. 1971. The new rhetoric: A treatise on argumentation.
Notre Dame: Univ. of Notre Dame Press.
Polanyi, M. 1962. The tacit dimension. New York: Doubleday.
Pope, C. 2003. Resisting evidence: The study of evidence-based medicine as a contem-
porary social movement. Health 7:267–82.
Rosenstock, L., and L. J. Lee. 2002. Attacks on science: The risks to evidence-based pol-
icy. Am J Public Health 92(1):14–18.
Russell, J., et al. 2008. Recognizing rhetoric in health care policy analysis. J Health Serv
Res Policy 13(1):40–46.
Saarni, S., and H. Gylling. 2004. Evidence based medicine guidelines: A solution to
rationing or politics disguised as science? J Med Ethics 30:171–75.
Sanderson, I. 2003. Is it “what works” that matters? Evaluation and evidence-based pol-
icy-making. Res Papers Educ 18(4):331–45.
Sanderson, I. 2004. Getting evidence into practice: Perspectives on rationality. Evaluation
Schon, D., and M. Rein. 1990. Frame reflection: Towards the resolution of intractable policy con-
troversies. New York: Basic Books.
Schwandt, T. 2000. Further diagnostic thoughts on what ails evaluation practice. Am J
Eval 21(2):225–29.
Shaw, S. n.d. Reaching the parts that other theories and methods can’t reach: How and
why a policy-as-discourse approach can inform health-related policy. Health, forthcoming.
Singer, P., et al. 2000. Priority setting for new technologies in medicine: Qualitative case
study. BMJ 321:1316–18.
Stone, D. 1988. Policy paradox:The art of political decision making. New York: Norton.
Syrett, K. 2003. A technocratic fix to the “legitimacy problem”? The Blair government
and health care rationing in the United Kingdom. J Health Polit Policy Law 28(4):715–46.
Weiss, C. 1977. The many meanings of research utilization. Public Admin Rev 39:426–31.
Wells, P. 2007. New Labour and evidence based policy making: 1997–2007. People Place
Policy Online 1(1):22–29.
Wenger, E. 1996. Communities of practice: Learning, meaning and identity. Cambridge:
Cambridge Univ. Press.
Wood, M., E. Ferlie, and L. Fitzgerald. 1998. Achieving clinical behaviour change: A case
of becoming indeterminate. Soc Sci Med 47(11):1729–38.
Young, I. M. 2000. Inclusion and democracy. Oxford: Oxford Univ. Press.