
The truth is out there

• 19 February 2000
• Michael Cross

DOES science tell us the truth? How do we tell the difference between science and
non-science? If one group of scientists says that genetically modified foods are harmless and
another says they are dangerous, who should we believe? To answer these questions we
must think about the way scientists reach their conclusions.

Science's goal is to discover the laws of nature, which we assume exist independently of
humans. We find these laws by collecting facts and assembling new theories to explain
them. Good science is conducted publicly. Scientists release their results in a way that
allows others to scrutinise them and try to duplicate them or show that they are wrong. Few
people seriously doubt that science works. It has been hugely successful in giving us
explanations of the world around us. It has the power ultimately to explain all natural
phenomena, even if in practice some problems are proving very difficult. Science has also
allowed us to create technologies such as drugs to treat cancer or the laser in your CD or
MiniDisc player.

WHAT IS SCIENCE?

Testing ideas

No one has yet defined what science is in a way that satisfies everyone. Science, for
example, cannot give absolute proofs of the laws of nature because, although we can test an
idea repeatedly, we can never be sure that an exception does not exist. Some religious
fundamentalists and TV psychics exploit this difficulty, and claim that science is just another
set of beliefs, with no more validity than any other. But while science may not give us
absolute truth, this doesn't mean we must give equal time to magicians and the like. Far
from it.

To see why, we need to examine the philosophy of science. Like other branches of
philosophy this involves thinking about thinking (the word "philosophy" originally meant
"love of wisdom"). The philosophy of science uses methods similar to those of a
mathematical proof: a step-by-step examination of assumptions, data and conclusions.

A classic philosophical question is: "Do I exist? How do I know that I am not just a program in
some immense supercomputer that is feeding me false sensations about a simulated world?"
The French mathematician and philosopher René Descartes (1596-1650) answered this
question with a proof involving the famous statement, "I think, therefore I am." In other
words, the act of doubting that we exist proves we exist; there must be something that
thinks about the problem of proving existence.

The philosophy of science examines scientific method and asks what it can tell us. Science
deals with empirical knowledge. This is knowledge about the Universe that we acquire by
examining how it appears to our senses—enhanced, if necessary, by instruments such as
microscopes or particle accelerators—rather than by sitting and thinking. Empiricism sounds
like common sense, but as a way of learning about the world it is comparatively recent. It
triumphed in the scientific revolution of the 16th and 17th centuries, when Galileo Galilei,
Robert Boyle, Isaac Newton and others showed that facts gained from empirical observations
could revolutionise our picture of the world.

This was where science parted company with magic. Although there was some overlap at
the time—Newton was an enthusiastic alchemist, and mystical texts may have inspired him
to think of gravity—there is a basic difference between science and magic. Science involves
repeatable observations and open publication. There are no hidden or "occult" texts, and
when an experiment does not work we do not blame the heavens, the experimenter's lack of
spiritual purity or—a favourite of today's TV magicians—"bad vibes" from critical observers.

Empiricism creates its own philosophical problems, however. How do facts lead to theories
and laws of nature? Imagine an experiment involving observations of apples. After watching
apples fall from trees, and verifying that apples will also fall if dropped from the hand, or
from the top of a tall building or other structures, we reason that a fundamental law is
responsible. We call it gravity, and we predict that when we release an apple or any similar
object in midair, it will fall to the ground.

When we make a prediction based on past experience, we are moving from statements
based on our observations, such as "the apple fell to the ground", to universal statements
such as "all apples in the future will fall to the ground". This leap from the singular to the
universal is called inductive reasoning. Inductive reasoning appeals to common sense, but
is logically flawed. The empiricist philosopher David Hume (1711-76) pointed out that there
can be no logical connection across time. Just because something has happened many times
in the past does not prove that it will happen in the future. Karl Popper (1902-94) pointed
out that scientific verification doesn't actually prove anything. No matter how many times
we record in our notebooks the fact of observing a white swan, we get no closer to proving
the universal statement that all swans are white. Popper decided that science finds facts not
by verifying statements but by falsifying them. We may never be able to prove that all
swans are white, but the first time we see a black swan we can firmly disprove it.
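
Popper's asymmetry lends itself to a mechanical statement. The sketch below, in Python, is our own illustration rather than anything from the philosophers themselves (the claim and function names are invented): no run of confirming observations establishes the universal claim "all swans are white", yet a single counterexample refutes it.

# A minimal sketch of the verification/falsification asymmetry.
# The claim and helper names are invented for illustration.

def is_refuted(claim, observations):
    """A universal claim is refuted by a single counterexample."""
    return any(not claim(obs) for obs in observations)

def all_swans_white(swan_colour):
    # Universal claim: "all swans are white".
    return swan_colour == "white"

sightings = ["white"] * 1000                   # many confirmations...
print(is_refuted(all_swans_white, sightings))  # False: yet still not proved

sightings.append("black")                      # ...one black swan decides it
print(is_refuted(all_swans_white, sightings))  # True: the claim is falsified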

To reason in this way runs counter to intuition (see Figure 1). Logically, however, it is very
powerful, and scientists make good use of this power. Popper said that science progresses
by testing hypotheses. One scientist holds up a hypothesis for examination—for example,
that gravity bends light waves. Colleagues or rivals then subject this hypothesis to
experimental tests that could show it to be false. If the hypothesis survives repeated tests, it
becomes accepted as scientific truth.

Popper's ideas provide a link between theory and experiment. They tell us that no matter
how many tests a hypothesis survives, we will never have a philosophical proof that it is
true. Popper wrote: "There can be no ultimate statements in science . . . and therefore none
which cannot in principle be refuted." This makes a willingness to accept falsification central
to science. Scientists must behave rationally and gracefully, by stating in advance what
experimental observations would disprove their hypothesis, and if such findings do emerge,
accepting that their hypothesis was wrong.

This was important to Popper, who was born in Austria and whose life was dominated by
struggles against ideologies such as those of Nazi Germany, which tolerated no doubts.
Popper also contrasted Albert Einstein's theories of relativity with Karl Marx's theories of
history. While Einstein offered his followers tests, such as the bending of starlight
measured during solar eclipses, which might have
disproved his theories, Marxists were undeterred when history did not unfold according to
prediction. Popper also attacked Freudian psychology and Darwinian evolution for what he
saw as their unfalsifiability.

Most working scientists today would go along with the idea of falsification. But Popper's
ideas leave us with several problems:

- Falsification alone cannot distinguish science from non-science. The hypothesis that
reindeer can fly is falsifiable by any scientist with access to a herd of reindeer, a high cliff
and an unusually compliant ethics committee. No one, however, would describe the
hypothesis as scientific.

- Where do hypotheses come from? One answer might be that they are merely the
application of general principles. For example, they might be inspired by Occam's razor (the
principle, named after the medieval philosopher William of Occam, that the simplest
explanations are the best) or by the assumption that the Universe everywhere obeys the
same laws of physics. But this brings us back to the problem of induction.

- Science doesn't progress through falsification. In a strictly Popperian system, we would
have to abandon the laws of chemistry every time a school student got the wrong result in a
chemistry practical. Clearly, we do not do this. We blame the student's error or, if confronted
with a run of anomalous findings, contaminated samples or faulty instruments. Sometimes
this is wrong. Scientists rejected early evidence of a hole in the ozone layer over Antarctica
because, rather than accepting such unexpected results, they assumed that the satellite
collecting the data was faulty. This leads us to the next problem.

- How to explain scientific revolutions, discoveries which transform understanding? Leaps
of genius like the theory of evolution by natural selection, or the theory of relativity, appear
to be neither new bricks in the wall of knowledge nor the consequence of falsifying previous
theories.

WAYS OF SEEING

Paradigm shifts

The last question was tackled by Thomas Kuhn (1922-96). In his book The Structure of
Scientific Revolutions, published in 1962, Kuhn said that scientific revolutions need creative
thinking of a kind that cannot grow out of the old order. He dismissed Popper's picture. "No
process yet disclosed by the historical study of scientific development at all resembles the
methodological stereotype of falsification by direct comparison with nature," he said.

Kuhn suggested that science does not develop by the orderly accumulation of facts and
theories, but by dramatic revolutions which he called paradigm shifts. The worlds before
and after a paradigm shift are utterly different—Kuhn's term was "incommensurable"—and
experiments done under the old order may be worthless under the new.

The switch between before and after is as dramatic as that which occurs when looking at a
trick gestalt-switch picture (Figure 2). You cannot reject one view without replacing it with
the other. Such switches are rare. Kuhn's examples include the Copernican revolution, which
adopted the idea that the Earth orbited the Sun and not the other way round, the discovery
of oxygen, and Einstein's theories of relativity. By contrast, most "normal" research takes
place within paradigms. Scientists accumulate data and solve problems in what Kuhn called
"mopping-up operations".

Inevitably, some research throws up findings that do not fit the paradigm—perhaps an
unexpected wobble in a planet's orbit around the Sun. In Popper's model these would
immediately falsify the paradigm's central theory. But according to Kuhn, scientists prefer
to cling to old paradigms until a new one is ready. The anomaly is either discarded or,
preferably, worked into the existing paradigm. In this way the elegant model of an
Earth-centred Universe developed in the second century AD by the Greek astronomer Ptolemy
accumulated more and more subsidiary orbits to account for astronomers' subsequent
observations.

After a while, however, the anomalies build up into a crisis of confidence, and science stalls.
Eventually a genius comes up with a new paradigm. Copernicus realised that the observed
orbits of planets made sense when he placed the Sun, rather than the Earth, in the centre of
the Solar System. Kuhn said that such leaps happen only in times of crisis.

In times of paradigm shift, hard scientific facts can become meaningless, or change their
meaning entirely. For years, scientists made measurements on a substance called
phlogiston, which they thought was given off when objects burnt. The discovery of oxygen
rendered phlogiston meaningless. But chemists could not discover oxygen until they decided
to treat it as a distinct gas. In other words, oxygen had to be invented as well as discovered
(Figure 3).

Individual scientists are loath to make such leaps, Kuhn says. The revolution occurs only
when practitioners under the old paradigm either die or retire. It takes a new generation to
carry the torch of the new paradigm.

Many people have criticised Kuhn. They say his use of the word "paradigm" is imprecise. He
chooses his examples overwhelmingly from physics, and they say other sciences may
change in different ways. And scientists do not seem as reluctant to make paradigm shifts as
Kuhn implies. The discovery of DNA's double-helix structure utterly changed the way we
think about biology, yet biologists accepted it with enthusiasm, replacing a model based on
metabolism with one based on information. Did this make it less than a paradigm shift?

Likewise the discovery in the late 1980s of new materials that become superconductors at
relatively high temperatures was eagerly pursued by scientists. Such breakthroughs must
throw into doubt Kuhn's distinction between "normal" science—the mopping-up of facts—
and revolutionary science.

Finally, Kuhn does not tell us where revolutionary ideas come from. We enjoy the folklore of
scientific breakthroughs happening by accident, as with Alexander Fleming and penicillin, or
through the work of outsiders such as Einstein. Sadly for Hollywood, such stories are often
myths. Although Einstein was working as a patent office clerk when he came up with his
theories of relativity, he had steeped himself in contemporary work on physics. Fleming
spotted penicillin's effects because he was an expert in bacteriology, working in a
laboratory. In science, chance favours the prepared mind.

Most worrying, if Kuhn is right, science is just a matter of fashions and a kind of crowd
psychology, with nothing to distinguish it from pseudoscience. This problem concerned the
Hungarian Imre Lakatos (1922-74), who refined some of Popper's and Kuhn's ideas in a way
that makes such a demarcation clear. Instead of "normal" and "revolutionary" science,
Lakatos drew a distinction between progressive and degenerating research programmes.
A progressive research programme is one that leads to the discovery of facts that were
previously unknown. An example is Newton's theory of gravity, which allowed Halley to
predict the return of the comet that now bears his name. A degenerating research
programme allows no such predictions; rather, it must itself be modified to cope with
inconvenient facts. Lakatos cites Marxism, which although it describes itself as a science has
a poor record of predicting a crucial phenomenon—political revolutions.

In progressive research programmes the appearance of awkward facts, such as
unaccountable wobbles in a planet's orbit, is not necessarily fatal to the core hypothesis.
Scientists can ignore them if the central hypothesis is still coming up with "unexpected,
stunning, predictions", Lakatos says. Revolutions happen gradually as progressive research
programmes replace degenerating ones. But even in progressive research, facts come after
theories.

Theories are clearly made up by humans: they are "socially constructed", in modern jargon.
Does this mean that scientific facts are too? The idea that science is a social construct
intrigues many people, especially those thinkers described as "postmodernists". If, according
to Popper, scientific laws are impossible to verify logically and, according to Kuhn, the same
findings can mean different things before and after a scientific revolution, how can science
claim to be any more objective than any other cultural pursuit?

No one would deny that culture, values and beliefs shape our choice of what science to do.
Drugs companies began researching AIDS when it affected people who could afford to buy
medicines rather than rural Africans. Military spending on research and development is
responsible for similar biases. Scientists believe, however, that the basic facts of the
Universe are there to be discovered, whatever the motivation for doing so. We spent billions
of pounds developing nuclear weapons, and in the process learned a lot about some strange
metal alloys. But we would have found the same facts in a race to build the ultimate
ploughshare. The "science wars" being fought out between academics, especially in North
America, question whether this assumption is generally true. Philosophers such as Bruno
Latour in Paris study science as a social phenomenon, and suggest its results are little more
than social rituals.

Some scientists are horrified by the spectre of relativism, which holds that ideas are not
universal or absolute but differ from culture to culture, individual to individual. A relativist
would assert that science is only one way of discovering the nature of the physical world.
The anarchist philosopher Paul Feyerabend (1924-94), perhaps mischievously, took the
relativist argument to its logical conclusion: "There is no idea, however ancient and absurd,
that is not capable of improving our knowledge." In Against Method (1975) he defended the
Church's indictment of Galileo. It was rational, he said, because there was at the time no
reason to suppose that Galileo's crude telescopes could show the mountains on the Moon
that he claimed to have seen. The Church believed that the Moon was a perfect smooth
sphere quite unlike Earth.

TRUST AND TRUTH

Science and non-science

One vigorous defender of science's special place is the biologist and author Richard Dawkins.
He notes that when relativist philosophers fly to an international conference on
postmodernism, they generally put their trust in a high-technology airliner rather than a
magic carpet. And, of course, absolute relativism contains its own contradiction. "Those who
tell you there is no absolute truth are asking you not to believe them," says the
contemporary philosopher Roger Scruton. "So don't."

One battle in the "science wars" is over Darwin's theory of evolution (see "Evolution under
attack"). Some assaults on evolution come from a particularly stormy debate over
evolutionary psychology or, as it is sometimes called, sociobiology. This attempts to
explain people's patterns of behaviour—whether it be fear of snakes, or why we enjoy
particular kinds of landscape gardening—solely in terms of evolutionary advantage.
Evolutionary psychology is controversial because it can be used to justify types of behaviour,
such as violence, which are generally considered unacceptable. It is possible to challenge
the science of evolutionary psychology without challenging evolution itself.

The phenomenon of consciousness is another problem area. Philosophers and scientists
both stake a claim to holding the key to understanding consciousness. The fact that some
computer scientists believe they can create artificial consciousness gives the debate extra
spice. But any such project will first have to define what constitutes consciousness, which is
probably a job for philosophy. Science and technology can then take over.

Finally, there is the question of exactly what science is. As we have seen, Popper's
falsification criterion alone is not enough to distinguish science from non-science. In fact,
if we look at the whole array of science, from particle physics to cell biology to ecology to
engineering, it is hard to find any single practice that they all have in common. Even
openness is not always there: much research is kept secret for military or commercial
reasons.

A way out is to use a concept developed by one of the most important 20th-century
philosophers, Ludwig Wittgenstein (1889-1951), that of family resemblance. There are
many groups of human activities that are impossible to define exactly. For example, it's hard
to say what a game is, but when we see a new game we have no trouble deciding that that's
what it is, because of the things it shares with other members of the games family. Likewise
with science: all we can say about good science is that it has most of the qualities of other
activities we call good science, including empiricism, peer review and openness to
refutation.
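
Wittgenstein's idea translates into a rough, mechanical test. The sketch below is again our own invented illustration (the trait list and threshold are made up for the example): it classifies an activity by counting overlapping family traits, with no single trait being necessary.

# An illustrative sketch of "family resemblance": membership requires no
# single defining trait, only enough overlap with the family. The trait
# set and threshold are invented for this example.

FAMILY_TRAITS = {"empiricism", "peer review", "open publication",
                 "repeatable observations", "openness to refutation"}

def resembles_science(traits, threshold=3):
    """True if an activity shares at least `threshold` family traits."""
    return len(traits & FAMILY_TRAITS) >= threshold

astronomy = {"empiricism", "peer review", "repeatable observations",
             "openness to refutation"}
astrology = {"empiricism"}   # appeals to observation, but shares little else

print(resembles_science(astronomy))  # True
print(resembles_science(astrology))  # False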

Those who work in this family believe that truth is out there. Perhaps not always in the
strictest philosophical sense, but enough for practical purposes and definitely enough to
distinguish science from propaganda and muddled thinking. Scientists do not need to be shy
of admitting that science's laws are always provisional. That is not a weakness, but science's
greatest strength.
