Preface ....................................................................................... 13
COMMEMORATION OF ACADEMICIANS
André Lichnerowicz – by P. Germain ..................................... 40
James Robert McConnell – by G.V. Coyne ............................. 43
Gerhard Herzberg – by C.N.R. Rao ........................................ 44
Carlos Chagas Filho – by C. Pavan ......................................... 45
Hermann Alexander Brück – by M.J. Rees ............................ 48
Johanna Döbereiner – by C. Pavan ......................................... 50
Joseph E. Murray........................................................................ 57
Sergio Pagano ............................................................................. 58
Frank Press.................................................................................. 59
Rafael Vicuña .............................................................................. 60
Chen Ning Yang .......................................................................... 62
Ahmed H. Zewail ........................................................................ 62
Antonino Zichichi ....................................................................... 63
SCIENTIFIC PAPERS
PART I: Science for Man and Man for Science (Working Group)
W. ARBER: Context, Essential Contents of, and Follow-up to,
the World Conference on Science Held in Budapest in
June 1999 ........................................................................... 75
P. GERMAIN: Technology: between Science and Man ............. 80
R. HIDE: Spinning Fluids, Geomagnetism and the Earth’s
Deep Interior ..................................................................... 86
Y.I. MANIN: Mathematics: Recent Developments and cultural
aspects ................................................................................ 89
J. MITTELSTRASS: Science as Utopia......................................... 95
G.V. COYNE: From Modern Research in Astrophysics to the
Next Millennium for Mankind ......................................... 100
P. CRUTZEN: The Role of Tropical Atmospheric Chemistry in
Global Change Research: the Need for Research in the
Tropics and Subtropics ..................................................... 110
E. MALINVAUD: Which Economic System is Likely to Serve
Human Societies the Best? The Scientific Question ...... 115
P.H. RAVEN: Sustainability: Prospects for a New Millennium ... 132
B.M. COLOMBO: Choice, Responsibility, and Problems of
Population.......................................................................... 156
J. MARÍAS: The Search for Man ............................................... 163
A.H. ZEWAIL: Time and Matter – Science at New Limits ....... 426
A.H. ZEWAIL: The New World Dis-Order – Can Science Aid
the Have-Nots? .................................................................. 450
M. ODA: Why and how Physicists are Interested in the Brain
and the Mind ..................................................................... 459
V.C. RUBIN: A Millennium View of the Universe .................... 464
R. OMNÈS: Recent Trends in the Interpretation of Quantum
Mechanics .......................................................................... 475
P. CARDINAL POUPARD: Christ and Science ............................... 484
J. MITTELSTRASS: On Transdisciplinarity ................................. 495
M. HELLER: ‘Illicit Jumps’ – The Logic of Creation................ 501
TIME AND MATTER – SCIENCE AT NEW LIMITS

AHMED H. ZEWAIL
INTRODUCTION
Until 1800 AD, the ability to record the timing of individual steps in
any process was essentially limited to time scales amenable to direct sen-
sory perception – for example, the eye’s ability to see the movement of a
clock or the ear’s ability to recognize a tone. Anything more fleeting than
the blink of an eye (~0.1 second) or the response of the ear (~0.1 mil-
lisecond) was simply beyond the realm of inquiry. In the nineteenth cen-
tury, the technology was to change drastically, resolving time intervals
into the sub-second domain. The famous motion pictures by Eadweard
Muybridge (1878) of a galloping horse, by Etienne-Jules Marey (1894) of
a righting cat, and by Harold Edgerton (mid-1900’s) of a bullet passing
through an apple and other objects are examples of these developments,
with millisecond to microsecond time resolution, using snapshot photog-
raphy, chronophotography and stroboscopy, respectively. By the 1980’s,
this resolution became ten orders of magnitude better [see Section III],
reaching the femtosecond scale, the scale for atoms and molecules in
motion (see Fig. 1).
For matter, the actual atomic motions involved as molecules build and
react had never been observed before in real time. Chemical bonds break,
form, or geometrically change with awesome rapidity. Whether in isolation
or in any other phase of matter, this ultrafast transformation is a dynamic
process involving the mechanical motion of electrons and atomic nuclei.
The speed of atomic motion is ~ 1 km/second and, hence, to record atom-
ic-scale dynamics over a distance of an angström, the average time required
1. Adapted from the Lecture published in Les Prix Nobel (1999).
Figure 1. Time scales of cosmological, geological, human and molecular events; from the
big bang to the femtosecond age.
is ~ 100 femtoseconds (fs). The very act of such atomic motions as reactions
unfold and pass through their transition states is the focus of the field of
femtochemistry. With femtosecond time resolution we can “freeze” struc-
tures far from equilibrium and prior to their vibrational and rotational
motions, and study physical, chemical, and biological changes.
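As a quick numerical check of the ~100 fs figure, using only the speed (~1 km/s) and distance (~1 Å) quoted above (a minimal sketch in Python):

```python
# Time for an atom moving at ~1 km/s to cross ~1 angstrom, as stated in the text.
distance_m = 1e-10      # 1 angstrom, in metres
speed_m_s = 1e3         # ~1 km/s, in metres per second
time_s = distance_m / speed_m_s
print(f"{time_s * 1e15:.0f} fs")   # -> 100 fs
```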
Ultrafast pulsed laser techniques have made direct exploration of this
temporal realm a reality (Sections III & IV). Spectroscopy, mass spec-
trometry and diffraction play the role of “ultra-high-speed photography”
in the investigation of molecular processes. A femtosecond laser probe
pulse provides the shutter speed for freezing nuclear motion with the nec-
essary spatial resolution. The pulse probes the motion by stroboscopy, i.e.
by pulsed illumination of the molecule in motion and recording the
particular snapshot. A full sequence of the motion is achieved by using an
Figure 2. Light and matter – some historical milestones, with focus on duality and
uncertainty.
Figure 3. Coherent, localized wave packet (de Broglie length ~0.04 Å) of a diatomic mol-
ecule (iodine); 20 fs pulse. The contrast with the diffuse wave function limit (quantum
number n) is shown. The inset displays a schematic of Thomas Young’s experiment
(1801) with the interference which is useful for analogy with light.
Figure 5. Arrow of Time in chemistry and biology – some of the steps in over a century
of development (see text).
at different distances. Knowing the speed of the flow, one could translate
this into time, on a scale of tens of milliseconds. Such measurements of
non-radiative processes were a real advance in view of the fact that they
were probing the “invisible”, in contrast with radiative glows which were
seen by the naked eye and measured using phosphoroscopes. Then came
the stopped-flow method (B. Chance, 1940) that reached the millisecond
leagues, dye lasers were rapidly replaced and fs pulse generation became a
standard laboratory tool; the state of the art, once 8 fs, is currently ~4 fs, a
result that made it into the Guinness Book of World Records (Douwe Wiersma's
group). The tunability is mastered using continuum generation (Alfano &
Shapiro) and optical parametric amplification.
In the late sixties and in the seventies, picosecond resolution made it
possible to study non-radiative processes, a major detour from the studies
of conventional radiative processes to infer the non-radiative ones. As a
beginning student, I recall the exciting reports of the photophysical rates of
internal conversion and biological studies by Peter Rentzepis; the first ps
study of chemical reactions (and orientational relaxations) in solutions by
Ken Eisenthal; the direct measurement of the rates of intersystem cross-
ing by Robin Hochstrasser; and the novel approach for measurement of ps
vibrational relaxations (in the ground state of molecules) in liquids by
Wolfgang Kaiser and colleagues. The groups of Shank and Ippen have
made important contributions to the development of dye lasers and their
applications in the ps and into the fs regime. Other studies of chemical and
biological nonradiative processes followed on the ps time scale, the scale
coined by G.N. Lewis as the “jiffy” – the time needed for a photon to travel
1 cm, or 33 picoseconds.
Stimulated by earlier work done at Caltech in the 1970’s and early 80’s on
coherence and intramolecular vibrational-energy redistribution (IVR), we
designed in 1985 an experiment to monitor the process of bond breakage
(ICN* → I + CN). The experimental resolution at the time was ~400 fs and
we could only probe the formation of the CN fragment. We wrote a paper,
ending with the following words: “Since the recoil velocity is ~2 × 10⁵ cm/s,
the fragment separation is ~ 10Å on the time scale of the experiment (~500 fs).
With this time resolution, we must, therefore, consider the proximity of frag-
ments at the time of probing, i.e., the evolution of the transition state to final
products.” This realization led, in two years time, to the study of the same
reaction but with ~40 fs time resolution, resolving, for the first time, the ele-
mentary process of a chemical bond and observing its transition states.
One year later, in 1988, we reported the NaI discovery, which repre-
sents a paradigm shift for the field. There were two issues that needed to be
established on firmer bases: the issue of the uncertainty principle and the
influence of more complex potentials on the ability of the technique to
probe reactions. The alkali halide reactions were thought of as perfect pro-
totypes because they involve two potentials (covalent and ionic) along the
reaction coordinate: the separation between Na and I. The resonance
motion between covalent and ionic configurations is the key to the dynam-
ics of bond breakage. How could we probe such motion in real time? We did
the femtochemistry experiments on NaI and NaBr, and the results were
thrilling and made us feel very confident about the ability to probe transi-
tion states and final fragments. The experiments established the foundation
for the following reasons:
First, we could show experimentally that the wave packet was highly
localized in space, ~ 0.1Å, thus establishing the concept of dynamics at
atomic-scale resolution. Second, the spreading of the wave packet was min-
imal up to a few picoseconds, thus establishing the concept of single-mol-
ecule trajectory, i.e., the ensemble coherence is induced effectively, as if the
molecules are glued together, even though we start with a random and
noncoherent ensemble – dynamics, not kinetics. Third, vibrational (rota-
tional) coherence was observed during the entire course of the reaction
(detecting products or transition states), thus establishing the concept of
coherent trajectories in reactions, from reactants to products. Fourth, on the
fs time scale, the description of the dynamics follows an intuitive classical
picture (marbles rolling on potential surfaces) since the spreading of the
packet is minimal. Thus, a time-evolving profile of the reaction becomes
parallel to our thinking of the evolution from reactants, to transition
states, and then to products.
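The sub-ångström localization quoted above (~0.1 Å) is compatible with quantum mechanics because the de Broglie wavelength of a heavy atom at the ~1 km/s speeds mentioned earlier is even smaller. A small check, with the choice of an iodine atom as an illustrative assumption echoing Fig. 3:

```python
# de Broglie wavelength lambda = h / (m * v) for a heavy atom at reactive speeds.
h = 6.626e-34            # Planck constant, J*s
amu = 1.661e-27          # atomic mass unit, kg

m_iodine = 127 * amu     # mass of one iodine atom, kg
v = 1.0e3                # ~1 km/s, m/s
wavelength = h / (m_iodine * v)
print(f"de Broglie wavelength ~ {wavelength * 1e10:.3f} angstrom")   # ~0.03 angstrom
# Much smaller than the ~0.1 angstrom localization discussed above, and of the
# order of the ~0.04 angstrom value quoted in the Figure 3 caption.
```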
The NaI case was the first to demonstrate the resonance behavior, in real
time, of a bond converting from being covalent to being ionic along the
reaction coordinate. From the results, we obtained the key parameters of
the dynamics such as the time of bond breakage, the covalent/ionic cou-
pling strength, the branching of trajectories, etc. In the 1930’s, Linus
Pauling’s description of this bond was static at equilibrium; now we can
describe the dynamics in real time by preparing structures far from equi-
librium. Numerous theoretical and experimental papers have been pub-
lished by colleagues and the system enjoys a central role in femtodynamics.
The success in the studies of elementary (NaI and ICN) reactions trig-
gered a myriad of other studies in simple and complex systems and in dif-
ferent phases of matter. These studies of physical, chemical, and biological
changes are reviewed elsewhere (see Further Readings Section). Fig. 6 gives
a summary of the scope of applications in femtochemistry, and Fig. 7 lists
four general concepts which emerged from these studies. Fig. 8 highlights
Figure 6. Areas of study in femtochemistry (and femtobiology) and the scope of appli-
cations in different phases.
Figure 7. Concepts central to dynamics with femtosecond resolution: Coherence (single-molecule dynamics), Resonance (non-equilibrium dynamics), and the Transition-state Landscape (a family of structures in dynamics). For details, see references by the author in the section, Further Reading.
Figure 8. Some other areas of advances, from physics to medicine to technology. For
reviews, see Further Reading section.
Figure 9. Ultrafast electron diffraction machine built at Caltech for the studies of molec-
ular structures with spatial and temporal resolution on the atomic scale.
For DNA, we found that the local involvement of the base pairs controls
the time scale of electron transfer. The degree of coherent transport criti-
cally depends on the time scale of molecular dynamics defining the so-
called dynamical disorder. Static disorder, on the other hand, is governed
by energetics. The measured rates and the distance range of the transfer
suggest that DNA is not an efficient molecular wire.
For proteins, our current interest is in the studies of the hydrophobic
forces and electron transfer, and oxygen reduction in models of metallo-
enzymes. For the former, we have studied, with fs resolution, the protein
Human Serum Albumin (HSA), probed with small (ligand) molecules. This
protein is important for drug delivery. The ligand recognition is controlled
by the time scale for entropic changes which involves the solvent. For
model enzymes of O2 transport, we examined novel picket-fence structures
which bind oxygen to the central metal with ~ 85% efficiency at room tem-
perature. In this system, we observed the release of O2 in 1.9 ps and the
recombination was found to occur on a much longer time scale. These are
fruitful areas for future research, especially in that they provide prototype
systems for O2 reduction in the transition state at room temperature.
Studies in femtobiology are continuing in our laboratory and include the
recognition in protein-DNA complexes (Fig. 10).
Our interest in this area goes back to the late 1970’s when a number of
research groups were reporting on the possibility of (vibrational) mode-
selective chemistry with lasers. At the time, the thinking was directed along
two avenues. One of these suggested that, by tuning a CW laser to a given
state, it might be possible to induce selective chemistry. It turned out that
this approach could not be generalized without knowing and controlling the
time scales of IVR in molecules. Moreover, state-selective chemistry is quite
different from bond-selective chemistry. The second avenue was that of IR
multiphoton chemistry. In this case, it was shown that the initial IR coher-
ent pumping could be used for selective isotope separation. Such an
approach has proven successful, even on the practical scale, and Vladilen
Letokhov has called the process “incoherent control”.
In 1980, I wrote a Physics Today article in a special issue on laser chem-
istry suggesting the use of ultrashort pulses (not CW or long-time lasers) to
control the outcome of a chemical reaction (Fig. 11). The title of the paper
was: Laser Selective Chemistry – Is it Possible? The subtitle stated the mes-
sage, “With sufficiently brief and intense radiation, properly tuned to specific
resonances, we may be able to fulfill a chemist’s dream, to break particular
selected bonds in large molecules.” Ultrashort pulses should be used to control
the system in the desired configuration by proper choice of the coherence
time (duration) and delay and the ability to localize the system in phase space.
Experimentally, we had already developed methods for the control of
the phase of the field of optical pulses with the idea of using the phase
(“pulse shaping”) to control molecular processes – collisions, inhomoge-
neous broadenings and even photon locking which could inhibit relaxation;
the time scale was ns and for the control of IVR, fs pulses were needed.
Prior to this work, the optical pulse field, E(t) = A(t) cos(ωt + φ(t)),
was simply defined by the envelope A(t) and the frequency ω; the phase φ(t)
Figure 10. The protein (histone I)/DNA system studied in this laboratory, with the aim
of elucidating the dynamics and the elements important in molecular recognition and
chromatin condensation.
Figure 11. Matter control with ultrashort laser pulses; suggestion made in the 1980
paper and the experimental demonstration and theoretical justification published near-
ly 20 years later.
VI. PERSPECTIVES
The key to the explosion of research can perhaps be traced to three pil-
lars of the field.
Three points are relevant: (i) The improvement of nearly ten orders of
magnitude in time resolution, from the 1950’s (milli)microsecond time
scale to present femtosecond resolution, opened the door to studies of new
phenomena and to new discoveries; (ii) the cornerstone of reactivity, the
transition state of structures in motion, could be clocked as a molecular
species TS‡, providing a real foundation to the theoretical hypothesis for
ephemeral species [TS]‡, and leading the way to numerous new studies.
Extensions will be made to study transition state dynamics in complex sys-
tems, but the previous virtual status of the transition state has now given
Two points are relevant: (i) The transition from kinetics to dynamics.
On the femtosecond time scale, one can see the coherent nuclear motion
of atoms – oscillatory or quantized steps instead of exponential decays or
rises. This was proved to be the case for bound, quasi-bound or unbound
systems and in simple (diatomics) and in complex systems (proteins). (ii)
the issue of the uncertainty principle. The thought was that the pulse was
too short in time, thus broad in energy by the uncertainty principle ∆t∆E ~ ħ,
but localization is consistent with the two uncertainty relation-
ships and coherence is the key. The energy uncertainty ∆E should be com-
pared with bond energies: ∆E is 0.7 kcal/mol for a 60 fs pulse. At the 1972
Welch Conference, in a lively exchange between Eugene Wigner and
Edward Teller, even picosecond time resolution was of concern because
of the perceived fundamental limitation imposed on time and energy by
Heisenberg’s uncertainty principle.
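As a rough check of the 0.7 kcal/mol figure, here is a minimal sketch assuming a transform-limited Gaussian pulse (time-bandwidth product ≈ 0.44); that pulse-shape assumption is mine, not stated in the text:

```python
# Energy spread of a transform-limited Gaussian pulse: dnu ~ 0.441 / dt, dE = h * dnu.
h = 6.626e-34            # Planck constant, J*s
N_A = 6.022e23           # Avogadro's number, 1/mol
CAL = 4.184              # joules per calorie

dt = 60e-15              # pulse duration, 60 fs
dnu = 0.441 / dt         # spectral width, Hz (Gaussian time-bandwidth product)
dE_per_photon = h * dnu  # energy spread per photon, J
dE_kcal_mol = dE_per_photon * N_A / (CAL * 1000)
print(f"dE ~ {dE_kcal_mol:.2f} kcal/mol for a 60 fs pulse")   # ~0.7 kcal/mol
# Small compared with typical chemical bond energies (tens of kcal/mol).
```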
Three points are relevant: (i) In retrospect, the femtosecond time scale
was just right for observing the “earliest dynamics” at the actual vibrational
time scale of the chemical bond; (ii) the time resolution offers unique
opportunities when compared with other methods. Processes often appear
complex because we look at them on an extended time scale, during which
many steps in the process are integrated; (iii) the methodology is versatile
and general, as evidenced by the scope of applications in different phases
and of different systems. It is worth noting that both excited and ground
state reactions can be studied. It has been known for some time that the use
of multiple pulses can populate the ground state of the system and, there-
fore, the population and coherence of the system can be monitored. The
use of CARS, DFWM, SRS, π-pulses or the use of direct IR excitation are
some of the approaches possible. Two recent examples demonstrate this
point: one invokes the use of IR fs pulses to study reactions involving
hydrogen (bond) motions in liquid water, work done in France and
Germany; and the other utilizes CARS for the study of polymers in their
ground state, as we did recently. Ground-state dynamics have also been
studied by novel fs photodetachment of negative ions, and the subfield of fs
dynamics of ions is now active in a number of laboratories.
VII. EPILOGUE
As the ability to explore shorter and shorter time scales has progressed
from the millisecond to the present stage of widely exploited femtosecond
capabilities, each step along the way has provided surprising discoveries,
new understanding, and new mysteries. In their editorial on the tenth
anniversary of femtochemistry, Will Castleman and Villy Sundström put
this advance in a historical perspective. The recent Nobel report addresses
with details the field and its position in over a century of developments (see
Further Readings). Fig. 6 summarizes areas of study and the scope of appli-
cations in different phases and Fig. 8 highlights some advances in other
areas, including medicine, nanotechnology, and metrology (see Further
Reading). Developments will continue and new directions of research will
be pursued. Surely, studies of transition states and their structures in chem-
istry and biology will remain active for exploration in new directions, from
simple systems to complex enzymes and proteins, and from probing to con-
trolling of matter.
Since the current femtosecond lasers (4.5 fs) are now providing the
limit of time resolution for phenomena involving nuclear motion, one may
ask: Is there another domain in which the race against time can continue
to be pushed? Sub-fs or attosecond resolution may one day allow for the
direct observation of the coherent motion of electrons. I made this point in
a 1991 Faraday Discussion review and, since then, not much has been
reported except for some progress in the generation of sub-fs pulses. In the
coming decades, this may change and we may view electron rearrange-
ment, say, in the benzene molecule, in real time, recalling that, as in the fem-
tosecond domain, “the uncertainty problem” is not a problem provided
coherence of electron states is created.
Additionally, there will be studies involving the combination of the
“three scales”, namely time, length and number. We should see extensions
FURTHER READING
Some Books
THE NEW WORLD DIS-ORDER – CAN SCIENCE AID THE HAVE-NOTS?

AHMED H. ZEWAIL 1
On our planet, every human being carries the same genetic material
and the same four-letter genetic alphabet. Accordingly, there is no basic
genetic superiority that is defined by race, ethnicity, or religion. We do not
expect, based on genetics, that a human being of American or French ori-
gin should be superior to a human from Africa or Latin America. Moreover,
it has been repeatedly proven that men and women from the so-called
developing or underdeveloped countries can achieve at the highest level,
usually in developed countries, when the appropriate atmosphere for
excelling is made possible. Naturally, for any given population, there exists
a distribution of abilities, capabilities and creativity.
In our world, the distribution of wealth is skewed, creating classes
among populations and regions on the globe. Only 20% of the population
enjoys the benefit of life in the “developed world”, and the gap between the
“haves” and “have-nots” continues to increase, threatening a stable and
peaceful coexistence. According to the World Bank, out of the 6 billion
people on Earth, 4.8 billion are living in developing countries; 3 billion live
on less than $2 a day and 1.2 billion live on less than $1 a day, which
defines the absolute poverty standard; 1.5 billion people do not have
access to clean water, with health consequences of waterborne diseases,
and about 2 billion people are still waiting to benefit from the power of the
industrial revolution.
1. Ahmed Zewail received the 1999 Nobel Prize in Chemistry and currently holds the Linus Pauling Chair at Caltech.
The per capita GDP 2 has reached, in some western, developed coun-
tries, $35,000, compared with about $1,000 per year in many developing
countries, and significantly less in underdeveloped populations. This 40- to
100-fold difference in living standards will ultimately create dissatisfaction,
violence and racial conflict. Evidence of such dissatisfaction already exists;
we only have to look at the borders between developed and developing or
underdeveloped countries (for example, in America and Europe), or at the
divide between rich and poor within a nation.
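A small arithmetic check of these ratios, using only the per capita GDP figures quoted in footnote 2 below (the selection of country pairs is mine):

```python
# Per capita GDP ratios between developed and developing countries,
# figures (U.S. dollars) taken from footnote 2 (U.N. Statistics Division).
gdp = {
    "Switzerland": 35_910, "U.S.A.": 31_059, "Canada": 19_439,
    "Egypt": 1_211, "China": 777, "Angola": 528, "Yemen": 354,
}
for rich in ("Switzerland", "U.S.A."):
    for poor in ("Egypt", "Yemen"):
        print(f"{rich} / {poor}: {gdp[rich] / gdp[poor]:.0f}x")
# The ratios run from roughly 25x to 100x, spanning the 40- to 100-fold
# difference in living standards described in the text.
```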
Some believe that the “new world order” and “globalization” are the
solution to problems such as population explosion,3 the economic gap and
social disorder. This conclusion is questionable. Despite the hoped-for new
world order between superpowers, the globe still experiences notable exam-
ples of conflict, violence and violations of human rights. The world order is
strongly linked to political interest and national self-interest, and in the
process many developing countries continue to suffer and their develop-
ment is threatened. Globalization, in principle, is a hopeful ideal that
aspires to help nations prosper and advance through participation in the
world market. Unfortunately, globalization is better tailored to the
prospects of the able and the strong, and, although of value to human com-
petition and progress, it serves the fraction of the world’s population that is
able to exploit the market and the available resources.
Moreover, nations have to be ready to enter through the gate of glob-
alization and such entry has requirements. Thomas Friedman, in his book
“The Lexus and the Olive Tree”, lists the following eight questions in try-
ing to assess the economic power and potential of a country: “How wired
is your country? How fast is your country? Is your country harvesting its
knowledge? How much does your country weigh? Does your country dare
to be open? How good is your country at making friends? Does your coun-
try’s management ‘get it’? and How good is your country’s brand?” These
2. Per capita gross domestic product (GDP) in U.S. dollars is the total unduplicated output of economic goods and services produced within a country as measured in monetary terms according to the U.N. System: Angola (528), Canada (19,439), China (777), Hong Kong (24,581), Egypt (1,211), Israel (17,041), North Korea (430), South Korea (6,956), Switzerland (35,910), U.S.A. (31,059), and Yemen (354). From U.N. Statistics Division.
3. Overpopulation of the world, and its anticipated disasters, is not a new problem. It has been a concern for many millennia, from the time of the Babylonians and Egyptians to this day. Joel Cohen, in his book “How Many People Can the Earth Support?”, provides a scholarly overview of the global population problem.
tified above point to the essentials for progress, which are summarized in
the following: (1) Building the human resources, taking into account the
necessary elimination of illiteracy, the active participation of women in
society, and the need for a reformation of education; (2) Rethinking the
national constitution, which must allow for freedom of thought, minimiza-
tion of bureaucracy, development of a merit system, and a credible
(enforceable) legal code; (3) Building the Science Base. This last essential of
progress is critical to development and to globalization and it is important
to examine this point further.
There is a trilogy which represents the heart of any healthy scientific
structure: First, the Science Base. The backbone of the science base is the
investment in special education for the gifted, the existence of cen-
ters of excellence for scientists to blossom, and the opportunity for using
the knowledge to impact the industrial and economic markets of the
country and hopefully the world. In order to optimize the impact, this plan
must go hand-in-hand with that for the general education at state schools
and universities. This base must exist, even in a minimal way, to ensure a
proper and ethical way of conducting research in a culture of science which
demands cooperation as a team effort and as a search for the truth. The
acquisition of confidence and pride in intellectual successes will lead to a
more literate society. Second, the Development of Technology. The science
base forms the foundation for the development of technologies on both the
national and international level. Using the scientific approach, a country
will be able to address its needs and channel its resources into success in
technologies that are important to, for example, food production, health,
management, information, and, hopefully, participation in the world mar-
ket. Third, the Science Culture. Developing countries possess rich cultures of
their own in literature, entertainment, sports and history. But, many do not
have a “science culture”. The science culture enhances a country’s ability to
follow and discuss complex problems rationally and on the basis of facts, while
involving many voices in an organized, collective manner – scientific think-
ing becomes essential to the fabric of the society. Because science is not as
visible as entertainment, the knowledge of what is new, from modern devel-
opments in nutrition to emerging possibilities in the world market,
becomes marginalized. With a stronger scientific base, it is possible to
enhance the science culture, foster a rational approach, and educate the
public about potential developments and benefits.
Building the above trilogy represents a major obstacle for the have-nots, as
many feel that such a structure is only for those countries which are already
What is the return to rich countries for helping poor countries? And
what payoff do rich countries get for helping poor countries get richer?
These two questions were asked by Joel Cohen in his book mentioned
before. At the level of a human individual, there are religious and philo-
sophical reasons which make the rich give to the poor – morality and self-
protection motivate us to help humankind. For countries, mutual aid pro-
vides, besides the issue of morality, insurance for peaceful coexistence and
cooperation for the preservation of the globe. If we believe that the world is
becoming a village because of information technology, then in a village we
must provide social security for the underprivileged, otherwise we may trigger
revolution. If the population is not in harmony, grievances will be felt
throughout the village and in different ways.
Healthy and sustainable human life requires the participation of all
members of the globe. Ozone depletion, for example, is a problem that the
developed world cannot handle alone – the use of propellants containing chloro-
fluorocarbons (CFCs) is not confined to the haves. Transmission of diseases,
global resources, and the Greenhouse Effect are global issues and both the
haves and have-nots must address solutions and consequences. Finally,
there is the growing world economy. The markets (and resources) of devel-
oping countries are a source of wealth to developed countries, and it is wise
to cultivate a harmonious relationship of mutual aid and mutual econom-
ic growth. I recently heard the phrase, “Give us the technology and we will
give you the market!”, used to describe the US-China relationship.
A powerful example of visionary aid is the Marshall Plan given by the
United States to Europe after World War II. Recognizing the mistake made
in Europe after W.W.I, the U. S. decided in 1947 to help rebuild the dam-
aged infrastructure and to become a partner in the economical (and politi-
cal) developments. Western Europe is stable today and continues to pros-
per – likewise its major trading partner, the United States of America. The
U. S. spent close to 2% of its GNP on the Marshall Plan for the years 1948-
As pointed out by Cohen, a similar percentage of the $6.6 trillion of the
1994 U.S. GNP would amount to $130 billion, almost ten times the $15 bil-
lion a year currently spent for all non-military foreign aid and more than
280 times the $352 million the U. S. gave for all overseas population pro-
grams in 1991. The commitment and generosity of the Marshall Plan result-
ed in a spectacular success story. The world needs a rational commitment
to aid and aid partnerships.
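A minimal arithmetic check of the comparisons quoted from Cohen, using only the figures given in the text:

```python
# 2% of the 1994 U.S. GNP compared with the other aid figures quoted in the text.
gnp_1994 = 6.6e12              # 1994 U.S. GNP, dollars
marshall_share = 0.02          # ~2% of GNP, as spent on the Marshall Plan 1948-51
foreign_aid_per_year = 15e9    # non-military foreign aid, dollars per year
population_programs_1991 = 352e6

equivalent = marshall_share * gnp_1994
print(f"2% of 1994 GNP:         ${equivalent / 1e9:.0f} billion")               # ~ $132 billion
print(f"vs foreign aid:         {equivalent / foreign_aid_per_year:.1f}x")       # ~ 9x
print(f"vs population programs: {equivalent / population_programs_1991:.0f}x")   # ~ 375x
```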
It is in the best interest of the developed world to help developing coun-
tries become self-sufficient and a part of the new world order and market.
WHY AND HOW PHYSICISTS ARE INTERESTED IN THE BRAIN AND THE MIND

MINORU ODA
Hitherto, brain research has been carried out using pathological, physi-
ological and psychological approaches. Beyond the experts in neuro-
science, scientists in general are now increasingly interested in brain research,
as I have emphasised in this Academy on several occasions in the past [1,
2], and meetings of physicists have been held on the subject.
Ever since Masao Ito founded a brain research institute within the
RIKEN Institute, Japan, and gathered there a group of experimental and
theoretical neuroscientists, as well as scientists from a variety of disciplines,
more and more physicists have become interested in the brains of humans,
of animals such as cats, dogs, apes and fish, and of insects such as bees
and ants.
Three approaches adopted by physicists may be delineated:
1) understanding the brain as though it were a computer;
2) producing new concepts where the computer is seen as an imitator
of the brain;
3) observing physical traces left on the brain caused by the activities of
the brain.
If, as suggested, the traces left in the brain are observable and can be
detected more or less freely, then the relevant issues may be listed
as follows:
The brain of the foetus: when and how does the brain of human and
animal foetuses grow to be brain-like in the mother’s body? What is the
physical mechanism involved?
What should be said about the imprinting of the brains of chicks as dis-
covered by K. Lorenz?
When we or animals sleep what happens to our brains?
Now, what techniques could look at traces in the brain with high spatial
resolution? A technique from X-ray astronomy, which we may call the
Fourier Transform Telescope (FTT), can be adapted.
The principle of the FTT, which was conceived as a telescope with arc-
second angular resolution for X-ray imaging, may be understood with
reference to Fig. 1. The metal grid structure shown in Fig. 2 (see p. XIII) is
attached to a rigid support and fixed at a certain relative distance. The com-
bination of a grid unit of a certain spatial frequency with a unit of the same
frequency, displaced by a quarter phase and set at a certain angular orien-
tation with respect to the corresponding grid, produces a point on the
Fourier u-v plane, as shown in Fig. 3. The configuration of the points on the
u-v plane produces an image. The principle of the FTT can be carried over
to a Fourier Transform Microscope (FTM) to achieve high-resolution
images of the traces in the brain.
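To make the Fourier-synthesis idea concrete, here is a minimal numerical sketch (not Oda's instrument): each grid pair contributes one sample of the object's two-dimensional Fourier transform, i.e. one point on the u-v plane, and inverting the collected samples yields an image. The toy object and the sampling pattern are illustrative assumptions.

```python
# Toy Fourier-synthesis imaging: sample part of the u-v plane, invert to an image.
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n]
obj = ((x - 40)**2 + (y - 24)**2 < 25).astype(float)   # a toy "trace" to be imaged
obj += ((x - 20)**2 + (y - 40)**2 < 9)

uv_full = np.fft.fft2(obj)                  # the object's full u-v (Fourier) plane

rng = np.random.default_rng(0)
mask = rng.random(uv_full.shape) < 0.3      # suppose the grids sample 30% of the plane
uv_sampled = uv_full * mask

image = np.fft.ifft2(uv_sampled).real       # image formed from the sampled points
print("mean reconstruction error:", np.abs(image - obj).mean())
# The more (u, v) points the grid combinations provide, the closer `image`
# comes to the original object.
```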
Fig. 1
Fig. 3
REFERENCES
A MILLENNIUM VIEW OF THE UNIVERSE

VERA C. RUBIN
(4) Almost all research questions that we have asked have yet to be defin-
itively answered. We do not know the age of the universe, the ages of the oldest
stars, the density of the universe, how life originated on Earth, how soon
galaxies originated following the big bang, or the ultimate fate of the uni-
verse. We do not know what the dark matter is.
These are the questions that will be answered in the future.
Let me start with a description of what we can see and what we know,
by describing the geography of the universe around us. When you look at the
sky on a dark night from the northern hemisphere, every star that you see is
a member of our own spiral Galaxy. That means the stars are gravitational-
ly bound, all orbiting in concert about the very distant center of our Galaxy.
The stars are not distributed at random, but are flattened to a plane. The
sun, our star, is located in the plane. And when we look through the plane
we see on the sky a band of millions of stars, which we call the Milky Way.
Most ancient civilizations told stories to explain the Milky Way. It was
not until 1609 that Galileo turned his newly perfected telescope to the sky
and discovered that the Milky Way was composed of “congeries of stars”. I
like to say that Galileo advanced science when he took a cardboard tube,
and placed a small lens at one end and a large brain at the other. Galileo’s
great genius was not only that he discovered things never before seen in the
sky, but that he was able to understand and interpret what he saw.
Our Galaxy contains more than just stars. It contains clouds of gas and
dust which may obscure more distant objects. From our position in the
Galaxy we cannot see all the way to its center. When we use the telescope
to look away from the plane of our Galaxy, we see external galaxies, each
an independent agglomeration of billions of stars. In each galaxy, all the
stars are gravitationally bound to that galaxy. Telescopic views of external
galaxies convince us that we understand the structure of our Milky Way.
For the purposes of what I am going to be discussing today, there are
two major facts to remember. In a galaxy, stars are very, very far apart.
Relative to their diameters, the average distance between one star and the
next is enormous. That is not true for galaxies. Virtually every galaxy has a
companion galaxy within a few diameters. The second fact is that gravity is
the force that controls stellar motions. Gravity is also the dominant force in
the evolution of galaxies.
Figure 1 (see p. XIV) is a near-infrared photograph of the center and
about 90 degrees on either side of the disk of our Galaxy. It was made by
the orbiting COBE Satellite. This is the best picture I know of our Galaxy,
and this is the photograph that should hang in schoolrooms around the
world. Just as students should know about continents and oceans and polar
caps and volcanoes, so they should also know about galaxies and the struc-
ture of the universe.
Seen face on, our Galaxy would look like the wide open spirals (Figure
2; see p. XIV) that we photograph with our telescopes; our sun is located
far from the nucleus on the edge of a spiral arm. The sun, carrying the plan-
ets with it, has an orbit which carries it once around the galaxy in two hun-
dred million years. Hence the sun has orbited about 20 or 30 times around
the Galaxy since its formation.
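A one-line consistency check: the ~200-million-year period is given in the text, while the Sun's age (~4.6 billion years) is a standard value assumed here, not quoted by the author.

```python
sun_age_yr = 4.6e9           # assumed standard value for the Sun's age
orbital_period_yr = 2.0e8    # ~200 million years, from the text
print(f"orbits completed: ~{sun_age_yr / orbital_period_yr:.0f}")   # ~23
```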
The spiral arms of a galaxy are not fixed loci of stars, but regions in
which the stellar density is high. A star will move into an arm and out of an
arm, spending a longer period in the arm because of the higher gravity
there. A fair analogy is a traffic jam on a road. If you look at a road where
there is a bottleneck you will always see cars there, but from one time to the
next they will not be the same cars.
As the universe cooled following the hot big bang, atoms of hydrogen and
helium eventually formed, and from these elements the first generation of
stars was made. It takes two things to form a star: gas, and gravity sufficient
to cause the gas to contract to high density. As more gas particles are gravi-
tationally attracted to the increasingly massive protostar, their infall energy
is transformed to heat, raising the central temperature. Ultimately, the cen-
tral temperature will be high enough that the star will start fusing atoms of
hydrogen into helium, and helium atoms into heavier elements. During its
lifetime, a star is a chemical factory for making elements more complex than
the elements from which it formed. As it forms elements up to iron, energy
is released, producing the starlight to which our eyes are sensitive. It is not a
coincidence that human eyes have their peak sensitivity in the region of the
spectrum which matches the peak of the visible solar radiation.
Ultimately, the star reaches an energy crisis, for it takes energy to pro-
duce chemical elements heavier than iron. The star reaches the end of its
life, either gently, by shedding its outer atmosphere (Figure 3; see p. XV) or
explosively by becoming a supernova (Figure 4). In either case, the star
enriches the gas between the stars with heavier elements; these are the
building blocks of future generations of stars. In some cases, it is the shock
wave from the exploding supernova that compresses nearby gas, thus initi-
ating a new generation of star formation. These stars are composed of atoms
including carbon and nitrogen and oxygen, the elements necessary for life.
Star formation is a messy process, and the forming star is left with a
residual disk of debris particles. Planets will ultimately form in this disk
from the merging of dust particles. From these particles will emerge the
Earth with its iron core and its floating continents, and its biology and its
living creatures. Without our Galaxy to gravitationally retain the gas atoms
and molecules, without stars to make the heavy elements, and without the
debris from our forming sun, the Earth and its living forms would not exist.
John Muir would have liked to know this story.

Figure 4. An image of NGC 4526, a galaxy in the Virgo cluster, taken with the Hubble
orbiting telescope. The bright star is a supernova in that galaxy, SN 1994D. Credit:
V. Rubin, STScI, and NASA.
Our Galaxy is not alone in space. We have two small nearby satellite
galaxies, the Magellanic Clouds, visible in the sky with the naked eye from
the southern hemisphere. The Magellanic Clouds orbit our Galaxy, and
each orbit carries them through the gas layer of our disk. The gas between
the stars is tidally disrupted as the Clouds move through. The Clouds lose
energy, and their orbits diminish. Ultimately the Magellanic Clouds will
cease to exist as separate galaxies. Instead, they will merge and become part
of our Galaxy. A tidal tail of gas, pulled out of the Magellanic Clouds on a
previous passage, is observed today by radio telescopes as a large arc of gas
across the sky. The Sagittarius Dwarf, an even closer galaxy located beyond
the nucleus on the other side of our Galaxy, is currently being pulled apart.
Such smaller galaxies are gravitationally fragile, and hence at risk of cap-
ture by nearby, more massive galaxies.
Our Galaxy has another relatively close, but very large, companion, the
Andromeda galaxy (M31). This nearest large galaxy to us has long been a
favorite for study. M31 also has two bright satellite galaxies, and many
fainter ones. The Andromeda galaxy and our own Galaxy are dominant
members of the Local Group of galaxies. The Local Group consists of about
twenty known galaxies, only a few of them large. Most are small irregular
objects, each lacking a massive center. Hence they are gravitationally very
fragile when they pass near larger galaxies. Ultimately, they will probably
each merge with the more massive galaxy. The halo of our Galaxy contains
sets of stars which were probably acquired in this fashion.
We live in an age when clusters of galaxies are forming. In some regions
of space, the mutual gravity of the galaxies has overcome the expansion of
the universe, and numerous galaxies gravitationally clump into one large
system. Our Local Group is an outlying member of the Virgo Supercluster.
Like other clusters and superclusters, the Virgo Cluster contains many spheroidal
galaxies. Many of these spheroidal galaxies probably formed from the
merger of two or more disk galaxies, an occurrence expected frequently in
the high galaxy density core of a cluster of galaxies. Spheroidal galaxies are
a favorite laboratory for studying stellar motions.
This lumpy structure of the nearby universe is emphasized in Figure 5.
We live in a spiral Galaxy that has one major companion in the Local
Group. This group of galaxies is an outlying member of a large supercluster
of thousands of galaxies centered on the Virgo Cluster. The gravitational
attraction of the Virgo Cluster on our Galaxy is slowing our recession. We
are moving away from Virgo due to the expansion of the universe, but at a
lower speed than we would have if the Virgo galaxies were not there. At the
center of the Virgo Cluster, the local gravitational field is so great that the
expansion of the universe has been halted. The core of the cluster is not
expanding, for the galaxies are all gravitationally bound to each other.

Figure 5. A sketch of the observed luminous universe, emphasizing the lumpy structure.
We live on a planet orbiting a star; the star is located on the edge of a spiral arm of our
Milky Way Galaxy, which is one galaxy in the Local Group of galaxies. The Local Group
is an outlying member of the Virgo Supercluster, which forms one of the knots in the
lumpy universe.
When we map the distribution of the bright galaxies on the sky, we find
that their distribution is not uniform and not random. Instead, the galaxies
are distributed in clusters, and the clusters form superclusters. There are
galaxies in lace-like chains connecting the superclusters, as well as large
regions in which no galaxies are seen. We do not know if these dark regions
are void of all matter, or only void of bright matter.
I now want to turn to the evidence that there is very much dark matter
in the universe. For many years I had been interested in the outer bound-
aries of galaxies, a subject relatively far from the mainstream of astronomy.
I devised an observing program to take advantage of the new large tele-
scopes, which would permit us to discover how stars in the outer disk orbit
their galaxy center. Most of the observations were made at the Kitt Peak
National Observatory outside of Tucson; others come from the Cerro Tololo
Inter-American Observatory and the Las Campanas Observatory of the Carnegie
Institution of Washington in Chile.
By analogy with planets in the solar system, it was assumed that stars
far from the galaxy center would orbit with velocities much slower than
those near the galaxy center. Since before the time of Isaac Newton, scien-
tists knew the orbital periods of the planets, Mercury, Venus, Earth, Mars,
Jupiter, and Saturn, and their distances from the sun. The planet closest to
the sun, Mercury, orbits very rapidly, while Saturn, the most distant of these
planets from the sun, orbits very slowly. Newton taught us that gravitational
forces fall off as the inverse square of the distance from the sun, the source of the gravita-
tional attraction in the solar system. And the planets, as all physics students
know, are actually falling to the sun, but their forward motion is so great
that they never reach the sun, but instead they describe an orbit. From their
distances from the sun and their orbital periods, we can deduce the mass
of the sun.
By similar reasoning, the mass within a galaxy can be determined from
the orbital velocities of stars or gas at successive distances within that
galaxy, until we reach the limit of the optical galaxy. My colleague Kent
Ford and I would use at a telescope a spectrograph that splits the light from
a galaxy into its component colors. Hydrogen atoms within the stars and
gas produce spectral lines whose positions vary with the velocity of the
source. Lines are shifted to the red for a source moving away from an
observer, and shifted to the blue for sources moving toward the observer.
The long slit of the spectrograph accepts light from each point along the
major axis of that galaxy (Figure 6). The rotation of the galaxy carries the
stars toward us on one side (therefore blue shifted), and away from us on
the other side (red shifted).
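The step from a measured line position to a velocity uses the non-relativistic Doppler relation v = c·(λ_obs − λ_rest)/λ_rest. A small sketch, with the Hα rest wavelength and observed wavelengths of my own choosing:

```python
# Doppler shift of a spectral line: positive v means the source recedes (redshift).
c = 2.998e5                  # speed of light, km/s
lam_rest = 6562.8            # H-alpha rest wavelength, angstrom

for lam_obs in (6561.5, 6562.8, 6564.1):     # illustrative measured positions
    v = c * (lam_obs - lam_rest) / lam_rest
    print(f"observed {lam_obs:.1f} A -> v = {v:+.0f} km/s")
# Blueshifted lines (negative v) come from the side of the galaxy rotating
# toward us, redshifted lines from the side rotating away.
```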
By measuring the positions of the lines with great accuracy, I deduce
the velocity for each observed position in the galaxy. I have obtained hun-
dreds of spectra of galaxies, which I then study at my office. Now I record
the observations and carry them via computer rather than photographic
plates. In virtually all cases, the orbital velocities of stars far from the nucleus
are as high as, or even higher than, the orbital velocities of stars closer to the
bright galaxy center.

Figure 6. Images, spectra, and rotation velocities for 5 spiral galaxies. The dark line
crossing the upper three galaxies is the slit of the spectrograph. The spectra show emis-
sion lines of hydrogen and nitrogen. The strongest step-shaped lines in each are from
hydrogen and ionized nitrogen in the galaxy. The strong vertical line in each spectrum
comes from stars in the nucleus. The undistorted horizontal lines are from the Earth's
atmosphere. The curves at right show the rotation velocities as a function of nuclear dis-
tance, measured from emission lines in the spectra.
Figure 6 shows five galaxies, their spectra, and their measured veloci-
ties. Whether the galaxies are intrinsically small, or intrinsically large, the
rotation velocities remain high far from the nucleus. If you are a first year
physics student, you know that in a system in equilibrium, a test particle
orbiting a central mass M at a distance R moves with velocity V, such that
the M is proportional to R times V (squared). The mass of the test particle
does not enter. Thus the velocity of a gas cloud or a star (or any other test
object) does not enter into the equation. This is what Galileo was trying to
show when he dropped objects from the Tower of Pisa. It is not the mass of
the falling (or orbiting) object which determines its velocity; it is the mass
of the attractor, be it Earth or a galaxy.
As we see in Figure 6, orbital velocities are almost constant independ-
ent of distance. These tell us that the mass detected within the galaxy con-
tinues to rise with increasing distance from the center of the galaxy.
Although the light in a galaxy is concentrated toward the center, the mass
is less steeply concentrated. We do not know the total mass of a single
galaxy; we know only the mass interior to the last measured velocity.
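A short sketch of the relation just stated: for a circular orbit, M(<R) = V²R/G, so a flat rotation curve implies an enclosed mass that grows linearly with radius. The rotation speed and radii below are illustrative values, not Rubin's measurements:

```python
# Enclosed mass from a flat rotation curve: M(<R) = V^2 * R / G.
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30                 # solar mass, kg
KPC = 3.086e19                   # kiloparsec, m

V = 220e3                        # assumed flat rotation speed, m/s
for R_kpc in (5, 10, 20, 40):
    M = V**2 * (R_kpc * KPC) / G
    print(f"R = {R_kpc:2d} kpc -> M(<R) ~ {M / M_SUN:.1e} solar masses")
# Doubling R doubles the inferred mass even though most of the starlight lies
# well inside: the signature of an extended dark halo.
```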
These high rotational velocities far out in a galaxy are one piece of evi-
dence that most of the matter in a galaxy is dark, and that the dark matter
extends beyond the optical galaxy. Stellar velocities remain high in
response to the gravitational attraction of this extended distribution of dark
matter. Some galaxies have hydrogen disks that extend well beyond the
optical galaxy. Observations of velocities in these extended disks reveal that
the rotation velocities remain high across the disks. The hydrogen gas is
not the dark matter, but it too responds to the gravitational attraction of the
extended dark matter. Astronomers call the distribution of dark matter a
halo, but actually mean a spheroidal distribution in which the galaxy disk
is embedded.
Questions concerning the dark matter remain; astronomers and physi-
cists cannot yet supply the answers. Does it exist? Where is it? How much
is there? What is it? I will briefly give the current thinking on each of these,
from the view of an observer.
Does it exist? Most astronomers believe that it does. There are only two
explanations for these unexpectedly high orbital velocities in
spiral galaxies. One possibility is that Newtonian gravitation theory does
not apply over distances as great as galaxy disks, for this theory underlies
our analysis. There are physicists who are attempting to devise cosmologi-
cal models in which Newtonian gravitational theory is modified. But, if you
accept Newton’s laws, as most astronomers do at present, then the expla-
nation is that the star velocities remain high in response to much matter
that we cannot see.
Where is it? Fritz Zwicky, over sixty years ago, discovered that in a clus-
ter of galaxies like the Virgo Cluster, many individual galaxies are moving
with velocities so large that galaxies should be leaving the cluster. But evi-
dence that clusters are not dissolving led Zwicky to suggest that there is
more matter in the cluster than can be seen, and this unseen matter is grav-
itationally holding the cluster together. Zwicky called this “missing mass.”
However, astronomers now prefer the term “dark matter,” for it is the light,
not the mass that is missing.
We know now that dark matter dominates the mass of both individual
galaxies and also clusters of galaxies. Direct evidence for the high dark mat-
ter mass in galaxy clusters comes from the discovery of gravitational lens-
ing. Light from background galaxies, passing through the dense core of an
intervening cluster, is gravitationally deflected, and the background galaxy
images are warped into arcs and rings. A recent Hubble Space Telescope
view of this effect is shown in Figure 7. Thus massive clusters act as natu-
ral telescopes that enhance the intensity of the light. Nature made telescopes
before Galileo did.
Figure 7. Galaxy cluster Abell 2218. The arcs are gravitationally lensed images of back-
ground galaxies, whose light is distorted by the mass of the foreground cluster. Credit:
A. Fruchter and ERO team, STScI.

How much is there? Now the questions get harder to answer. The best
answer is “We don't know.” The amount of matter in the universe is funda-
mental to understanding whether the universe will expand forever, or whether
the expansion will halt and the universe perhaps even recollapse. The galaxy
and the cluster observations offer evidence that almost all the matter in a
galaxy is dark. But even this high fraction of unseen matter describes a
universe of low density, a universe that will continue to expand forever.
However, theoretical cosmologists prefer a universe of higher density, so that
the amount of matter is just sufficient to ultimately bring the expansion
asymptotically to a halt. Such a high-density universe is the basis of the
inflationary model of the Big Bang.
What is it? This is the question that really exposes our ignorance.
Production of particles following the big bang puts a limit on the number
of conventional particles in the universe. This limit suggests that only a
small fraction of the dark matter can be baryonic, that is, conventional mat-
ter such as the familiar elementary particles and atoms known on Earth
and in stars. Whatever constitutes the baryonic dark matter, it must be
invisible: faint stars in enormous quantities, too faint to have been detect-
ed, mini-black holes, brown dwarfs, dark planets. All of these possible can-
didates have been looked for, and not found in significant numbers.
Some of the dark matter must be of an exotic nature unlike the atoms
and molecules that compose the stars, the Earth, and our bodies. These
might be neutrinos which are not massless, or those particles dreamed up
but not yet detected: axions, monopoles, gravitinos, photinos. Physicists are
currently devising laboratory experiments in an attempt to detect these still
unknown particles. Maybe none of these ideas are correct. Observational
cosmology has taught us that our imaginations are very limited. Most cos-
mological knowledge has come not from thinking about the cosmos, but
from observing it.
This is the universe that we describe today. It would be a mistake to
believe that we have solved the problems of cosmology. We have not. There
are surely major features of the universe we have not yet imagined.
Someday humans on Earth will know if the universe will expand forever, if
the expansion is accelerating or decelerating, if life is ubiquitous through-
out the universe. Most important, they will attempt to answer questions we
do not now know enough to ask.
RECENT TRENDS IN THE INTERPRETATION
OF QUANTUM MECHANICS
ROLAND OMNÈS
Max Planck discovered the existence of quanta one century ago and
the basic laws of this new kind of physics were found in the years 1925-
1926. They have since withstood the test of time remarkably well while
giving rise to a multitude of discoveries and extending many times their
field of validity. Quantum mechanics was certainly the most important
breakthrough in science in the past century with its influence on physics
and on chemistry and beyond, including many features of molecular
biology.
Almost immediately, however, it was realized that the new laws of
physics required that the foundations of the philosophy of knowledge be
drastically revised because quantum rules conflicted with various deep
traditional assumptions in philosophy such as causality, locality, the real-
istic representation of events in space and time, and other familiar ideas.
For a long time, the so-called Copenhagen interpretation provided a con-
venient framework for understanding the quantum world of atoms and
particles but it involved at least two questionable or badly understood fea-
tures: a conceptual split between quantum and classical physics, particu-
larly acute in the opposition between quantum probabilism and classical
determinism, and a mysterious reduction effect, the “collapse of the wave
function”, both difficulties suggesting that something important was still
to be clarified.
Much work is still going on concerning these foundations, where experiments and theory inspire and confirm each other. Some results have been obtained in the last two decades or so, probably important enough to warrant your attention, and I will try to describe a few of them.
I believe the best way to introduce the topic will be to show it in a his-
torical perspective by going back to the work by Johann (later John) von
Neumann. In a famous book, Mathematische Grundlagen der Quanten-
mechanik, published in 1932, he identified some basic problems.
It may be interesting to notice that von Neumann was a mathematician,
indeed among the greatest of his century. This was an asset in penetrating
the essentials in a theory, quantum mechanics, which can be characterized
as a typical formal science; that is to say a science in which the basic con-
cepts and the fundamental laws can only be fully and usefully expressed by
using a mathematical language. He was in addition a logician and he had
worked previously on the theoretical foundations of mathematical sets.
That was also a useful background for trying to master the quantum
domain where logical problems and possible paradoxes were certainly not
easier than those arising from sets.
It is also worth mentioning that von Neumann had been a student of
David Hilbert, and Hilbert’s conception of theoretical physics was very
close to the contemporary trends in research. He thought that a mature
physical theory should rest on explicit axioms, including physical principles
and logical rules, from which the theory had to be developed deductively to
obtain predictions that could be checked by experiments.
Von Neumann contributed decisively (along with Dirac) to the formu-
lation of the basic principles, unifying the linear character of quantum
states with the non-commutative properties of physical quantities within
the mathematical framework of Hilbert spaces and defining dynamics
through the Schrödinger equation. He also made an important step
towards “interpretation”, but this is a protean word that must be explained.
It can mean interpreting the abstract theoretical language of physics into a
common-sense language closer to the facts and experiments, just like an
interpreter would translate a language into another; but interpretation can
also mean “understanding” quantum physics, notwithstanding a drastic
epistemic revision if necessary. We shall use the word with both meanings
but von Neumann’s contribution, which we are about to discuss, was defi-
nitely a matter of translation.
He assumed that every significant statement concerning the behavior
of a quantum system can be cast into the form of an “elementary predi-
cate”, a statement according to which “the value of some observable A lies
in a range of real numbers” (By now, this assumption has been checked
THREE ANSWERS
The progress that has recently been accomplished in interpretation is
best expressed by saying that von Neumann’s three problems have now
been solved. Let us review the answers.
The last problem, namely the apparent conflict between the language
of projection operators and standard logic, has also been solved by finding
the right “grammar” for the language, in terms of so-called “consistent histories”.
A NEW INTERPRETATION
Using the answers to the three problems, interpretation can be cast into
a completely deductive sub-theory inside the theory of quantum mechanics
[4]. Its physical axioms have already been mentioned (i.e., the Hilbert space
framework and Schrödinger dynamics) whereas the logical axioms amount
to the use of von Neumann’s language under the constraints of consistent
histories. The well-known rules of measurement theory are among the main results; in this approach they become theorems. A few other aspects are worth mentioning:
– This revision of probabilities may be rather deep, but its implications are not yet fully appreciated.
– Three “privileged” directions of time must enter the theory: one in
logic (for the time ordering of predicates in histories), one for decoherence
(as an irreversible process), and the familiar one from thermodynamics.
The three of them must necessarily coincide. The most interesting aspect of
these results is certainly that the breaking of time reversal symmetry is not
primarily dynamical but logical and a matter of interpretation, at least from
the present standpoint.
– There is only one kind of basic law in physics in this construction: the quantum laws. The validity of classical physics for macro-
scopic bodies (at least in most circumstances) emerges from the quantum
principles. In particular, classical determinism can be proved to hold in a
wide domain of application. Its reconciliation with quantum probabilism turns out to be very simple once one notices that determinism essentially asserts the logical equivalence of two classically meaningful properties occurring at two different times (one property specifying, for instance, the position and velocity of a tennis ball at an initial time, the other specifying them for its reception at a later time). This logical equivalence holds up to a very small, and known, probability of error, so that determinism itself acquires a probabilistic, though very safe, character.
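Schematically, if A denotes the classical property asserted at the initial time and B the corresponding property at the later time, the quantum calculation yields

P(B \mid A) \ge 1 - \varepsilon \quad\text{and}\quad P(A \mid B) \ge 1 - \varepsilon,

with \varepsilon extremely small and in principle computable, so that the classical statement “A if and only if B” holds up to this tiny probability of error.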
PHILOSOPHICAL CONSEQUENCES
It should be stressed first of all that this approach brings nothing new
concerning the reality of quantum properties. Complementarity is still
there and even more so, because it is now a trivial consequence of the his-
tory construction. The most interesting philosophical consequences are
therefore concerned with the understanding of classicality in the ordinary
world of macroscopic objects or, in a nutshell: why is common sense valid?
The technical answer to this question lies in considering histories
where only classically meaningful properties of every kind of macroscopic
objects enter. There is no problem of complementarity with such histories
and therefore no problem with reality: the set of consistent histories
describing our common experience turns out to be unique, sensible (i.e.
satisfying the relevant consistency conditions), and the logical framework
resulting from the quantum axioms in these conditions is what we might
call “common sense”. There is therefore no conflict between quantum theory and common sense; indeed, the first implies the second, except that one
should be careful not to extend excessively the domain where common
sense is supposed to be valid.
More precisely, one may refine common sense by making explicit in it
familiar and traditional philosophical assumptions: causality (or determin-
ism), locality in ordinary space, separability (except for measurements of
an Einstein-Podolsky-Rosen pair of particles), reality (as used in a philo-
sophical discourse, i.e. the consistency of all the – classical – propositions
one may assert about the – macroscopic – world). The fourth has just been
mentioned, and the first three share a common character: they are valid at
a macroscopic scale, except for a very small probability of error (or inva-
lidity). When one tries, however, to extend them towards smaller and small-
er objects, the probabilities of errors keep growing and finally, when one
arrives at the level of atoms and particles, these traditional principles of the
philosophy of knowledge have such a large probability of error that they are
plainly wrong.
There is a lesson in this: when dealing with physis, the principles that have been reached by science after much work stand on a much firmer basis than many traditional principles in the philosophy of knowledge. The concepts and laws of physics have been checked repeatedly and very carefully in a wide domain and, as indicated in this talk, they imply the validity of the older, more philosophical principles in ordinary circumstances. Conversely, the old principles are limited in scope, and the hope of understanding the essentials of quantum mechanics in their light is illusory. Some sort of “premiss reversal” in the philosophy of knowledge is therefore suggested [5].
OPEN QUESTIONS
renewal of the old question: up to what point does physics reach reality
and, outside this reach, should it be said to preserve appearances? After all,
everything we observe can be derived directly from the quantum principles
except for the uniqueness of empirical reality, but this uniqueness can be
shown to be logically consistent with the basic principles, or preserved by
them. More generally, I think we should be more inquisitive about the
essential role of mathematics in basic science, considering that we do not
know conclusively what is the status or the nature of mathematics. One
may question particularly the possible limits of the “Cartesian program”,
according to which one assumes – perhaps too easily – the possibility of a
mathematical description for every aspect of reality.
CHRIST AND SCIENCE

PAUL CARDINAL POUPARD
Jesus Christ is the most fascinating person who has ever existed. It is
worth getting to know Him more than any other person, and no human dis-
cipline could ever find a better subject to study. However, science, or at least
experimental science as we know it today, has not paid much attention to
Him. Scientists throughout history have been very interested in God, to
such a degree that, paradoxically, scientists may be considered to be more
religious than other intellectuals. This was the case not only in the past but is also true today. Copernicus, Galileo and Newton were deeply religious men. But
even Einstein, Max Planck and Kurt Gödel felt the need to speak about
God, and more recently Stephen Hawking, Roger Penrose, Steven
Weinberg and Lee Smolin have said a great deal more about God, and per-
haps even to God, than many philosophers and theologians of our time.1
The existence of God is an ever-present challenge, even for those who sim-
ply want to deny it: is the God-hypothesis necessary or not to explain the
world scientifically? And if one admits that there is one eternal, omniscient
God, how can this fit into the image of the world that science offers? It is
difficult to escape these questions, once one attempts to see the world in
terms which are broader than strictly empirical data.
While scientists have studied and reflected on God, this has not been
the case with Jesus. The question of the historical Jesus who is believed by
1 Cf. R. Timossi, Dio e la scienza moderna. Il dilemma della prima mossa (Mondadori, Milan, 1999).
Christians to be truly God and truly human does not seem to have found a
place in scientific reflection.
There were numerous attempts in the course of the twentieth century
to approach the figure of Jesus ‘scientifically’, above all in exegesis and the
various biblical sciences. These disciplines set themselves the task of study-
ing the person, actions and sayings of Jesus not from the point of view of
traditional interpretation, inasmuch as this was considered too partial, but
with a new, scientific, rational method, which approached the object being
studied, in this case the figure of Jesus, from a neutral point of view. We
owe to these disciplines significant progress in understanding the historical
circumstances in which Jesus and the first Christian communities lived, as
well as the origins of the first Christian writings. However, these approach-
es to the figure of Jesus, though they claimed to be scientific, were often
heavily laden with preconceptions and a priori philosophical positions,
which definitely compromised the objectivity of their results. ‘Scientific’
critical exegesis tended to be critical of everything but itself and lacked a
sound epistemological foundation. The claim that this kind of approach
made to being ‘scientific’ aroused strong objections, not only among
researchers trained in the empirical sciences but also among those trained
in the humanities.2
More recently, other scientific disciplines have attempted to subject the
figure of Jesus to the proofs of science. I am thinking of studies carried out
on objects and relics which were presumed to have had contact with Jesus,
in particular the analyses and tests carried out on the Turin Shroud in 1978
and, more recently, in 1998.3 The results of some of these tests are still con-
troversial and there was some criticism of the fact that they were carried
out at all. Clearly, a scientific investigation carried out on these objects will
never provide absolute proof that they belonged to Jesus, much less prove
that He existed. On the other hand, it is also true that these relics of Jesus,
despite being very important and worthy of veneration by Christians, are
not an object of faith in the strict sense. However, such investigations can
help to give support, on a rational basis, to the authenticity of certain relics
or facts linked to the figure of Jesus. They can also help to identify and
2 Cf. J. Ratzinger et al., Schriftauslegung im Widerstreit (Quaestiones disputatae 117; Herder, Freiburg, 1989).
3 See, for example, E.M. Carreira, ‘La Sábana Santa desde el punto de vista de la Física’ and J.P. Jackson, ‘La Sábana Santa ¿nos muestra la resurrección?’, in Biblia y Fe, XXIV (1998).
avoid those that are false. Studies like these are always ultimately based on
the desire to come to a rationally based knowledge of the case of Jesus,
something which Christianity has always claimed to have as the religio vera.
Faith has nothing to fear from reason,4 so science can never be a threat
when it wants to investigate Jesus. Anyone who has visited the Holy Land
will know that, next to every church or sanctuary, and often under them,
there is an archaeological dig, which is an attempt, in that particular con-
text, to verify rationally what has been handed down as part of tradition.
One can only hope that such research will be extended to many more
objects or facts that are said to be miraculous. In these cases, science can
help faith to rid itself of the leftovers of superstition and find solid founda-
tions for belief.5
Obviously, these investigations touch Jesus Christ only indirectly, even
though, when they speak of the resurrection of Jesus, it is impossible not
to ask oneself about Him. Nevertheless, the link between Christ and sci-
ence and scientists is still largely unexplored. One important lacuna is that
there is no serious study on what Christ means for scientists, a thorough
treatment of the way scientists have contemplated Christ across the cen-
turies, and how they have approached Him. Father Xavier Tilliette did
such a thing for philosophers in his book Le Christ de la philosophie, the
sub-title of which is quite significant: Prolégomènes à une christologie
philosophique.6 The prolegomena for a similar ‘scientific Christology’ have
still to be written.
Allow me to refer en passant to one very noteworthy attempt made in
this area: Father Pierre Teilhard de Chardin’s work entitled Science and
Christ.7 It was only an attempt, and it may well have attracted some criti-
cism, but it was a noble effort on the part of one of the twentieth century’s
great anthropologists to bring his scientific knowledge face to face with
4 John Paul II, ‘Address to those who took part in the Jubilee of Men and Women from the World of Learning, 25 May 2000’.
5 ‘Science can purify religion from error and superstition’. Cf. John Paul II, ‘Letter to Father George V. Coyne, 1 June 1988’. The text is in Pontificium Consilium De Cultura, Jubilee for Men and Women from the World of Learning (Vatican City, 2000), p. 59.
6 X. Tilliette, S.J., Le Christ de la philosophie (Cerf, Paris, 1990). See the same author’s Le Christ des philosophes (Institut Catholique de Paris, Paris, 1974).
7 P. Teilhard de Chardin, ‘Science et Christ’, a conference held in Paris on 27 February 1927. It was published in Science et Christ (Œuvres de Teilhard de Chardin, IX, Paris, 1965), pp. 47-62. See also E. Borne, ‘Teilhard de Chardin’, in P. Poupard (ed.), Grande Dizionario delle Religioni (Piemme, Casale Monferrato, 3rd edn., 2000), pp. 2125ff.
8 ‘La Science, seule, ne peut découvrir le Christ, mais le Christ comble les voeux qui naissent dans notre coeur à l’école de la Science’, in ‘Science et Christ’, p. 62.
9 M. Artigas, The Mind of the Universe (Templeton Foundation Press, Philadelphia-London, 2000), pp. xx ff. Seneca uses the expression in Quaestiones Naturales I, 13.
10 M. Artigas, ‘Science et foi: nouvelles perspectives’, in P. Poupard (ed.), Après Galilée (Desclee, Paris, 1994), p. 201. Cf. J. Ladrière, ‘Scienza-Razionalità-Credenza’, in P. Poupard (ed.), Grande Dizionario delle Religioni (Piemme, Casale Monferrato, 3rd edn., 2000), pp. 1942-1947.
11 P. Davies, The Mind of God: The Scientific Basis for a Rational World (Simon & Schuster, New York & London, 1993), p. 191.
out turning Christ into some sort of soul of the universe and at the same
time avoiding a radical separation where Christ has no place in nature.
12 See, for example, Saint Bonaventure’s Quaestiones disputatae de Scientia Christi, in the recent edition by F. Martínez Fresneda, ITF (Murcia, 1999), and the very interesting introduction by M. García Baró.
sky’, and in the morning, ‘Stormy weather today; the sky is red and overcast’ (Mt 16.2f.). Jesus scolded the people from His home town because,
while they could read the face of the sky, they were unable to read the signs
of his coming. But what He said contains a clear allusion to a certain
accumulated experience of observing the atmosphere and, by implication
therefore, to the intelligibility of nature.
These are only very weak indications, but they do still reveal the man
Jesus as someone who observed nature carefully, someone who was curi-
ous about the phenomena surrounding life in the countryside, and some-
one who could draw from the book of nature teachings for living. There are
just a few hints elsewhere in the Gospel about the scientific activity of those
times. There is a reference to what doctors do. The evangelist Mark is quick
to point out that a poor widow had suffered a great deal at the hands of doc-
tors whose services had cost her all her money, but Luke – the doctor so
dear to St. Paul (Col 4.14) – may be speaking with a certain sympathy and
esprit de corps when he says that nobody had been able to cure her. Even
astrologers, to whom we owe the beginnings of the science of astronomy,
have a special place in the life of Jesus, since they were the first non-Jews
to adore the new-born baby. The wise men from the east were known as
‘Magi’, a term that is quite vague, but conjures up the image of someone
who studies the heavens. In recognising the coming of the Messiah, they
stole a march on Israel’s wise men, the experts on the Scriptures. This
marks a clear victory for scientists over theologians. The one who declared
Himself to be the Truth could surely never refuse those who struggle in
their search for a better knowledge of nature, life and the world.
In Jesus there are two modes of knowledge which differ in nature but
have the same object. That is why He is able to say He is one with the
Father, and yet admit that He does not know when the hour of judgement
will come. As Logos, He is the creative wisdom which was there at the
beginning of creation, as its architect (Prov 8.30), and yet He is unaware of
how the seed sown by the sower grows. In Christ, divine knowledge and
experimental knowledge are not opposed to each other, nor do they cancel
each other out. They are different modes of knowledge which are united
without confusion or change, without division or separation, according to
the formula used by the Council of Chalcedon to describe the relationship
between the two natures of Christ (DS 302).
In this way Christ Jesus is an extreme case of the relationship within the
human person between faith and reason, and between science and faith.
What there is in Christ in a unique and special way, the unity of two natures
in a single hypostasis or person, is replicated analogously in Christians.
Faith is the way they participate in the knowledge of God, which He
Himself has revealed. Faith is, therefore, seeing things in God’s own light.
Faith also opens up a broad field of objects which would otherwise be inac-
cessible to human knowledge; the disputed questions on the Scientia Christi
are an example of this. But faith never replaces experimental knowledge,
which human beings acquire by their own efforts, as they attempt to unrav-
el the secrets of nature. Faith does not annul reason and reason does not
expel faith. They are both like ‘wings on which the human spirit rises to the
contemplation of the Truth’ (Fides et Ratio 1).
A right understanding of the links between science and faith is
absolutely essential if scientists are to avoid foundering in dire straits,
steering clear as much of the Scylla of fideism as of the Charybdis of sci-
entism, and if they are to avoid denying the problem by taking refuge in
syncretistic solutions. Fideism13 thinks it can save faith by denigrating the
capacity of human reason to reach the truth, and it has been a defence
mechanism used by many believers in the face of scientific progress. But
to deny reason its rights in order to save faith always impoverishes faith,
which is then forced into pious sentimentality. Christianity’s original
claim was that it was the religio vera, that it possessed a truth about the
world, history and humanity which would hold up in the face of reason.
I like to remember what Chesterton said about going to church. We take
off our hats, but not our heads.
This attitude crops up in another guise, in a sort of exhumation of the
mediaeval theory of twin truths, whereby faith and reason each have their
own province of knowledge, and it is thus possible to deny in one what
one could affirm in the other. Scientists who were also believers have
often adopted this compromise solution in order to deal with what they
saw as an insurmountable conflict between the biblical account of cre-
ation and evolutionary theories. This means living in two separate worlds
which can never be on a collision course because there is no contact
between them. But this is yet another form of fideism which denies not
13 See my article ‘Fideismo’, in Grande Dizionario delle Religioni (Piemme, Casale Monferrato, 3rd edn., 2000), pp. 753ff, and my Un essai de philosophie chrétienne au XIX siècle. L’abbé Louis Bautain (Paris, 1961).
the capacity of reason, but its right to enter into dialogue with revelation
and faith.
The other danger which threatens the work of scientists is the tempta-
tion of scientific reductionism, or the belief that science is the only accept-
able form of knowledge. Buoyed up by the unstoppable conquests of sci-
ence, the scientist may play down other dimensions of human knowledge,
regarding them as irrelevant. This applies not only to faith, but also to phi-
losophy, literature, and ethics. Science needs to be open to other disci-
plines and to be enriched by data that come from other fields of investiga-
tion.14 Science is not capable of explaining everything. Paul Davies admits
as much in The Mind of God, when he says that his desire to explain every-
thing scientifically always comes up against the old problem of the chain
of explanations. For him, ultimate questions will always remain beyond
the realm of empirical science.15 I am happy to recall the symposium enti-
tled Science in the Context of Human Culture, organised jointly by the
Pontifical Council for Culture and the Pontifical Academy of Sciences in
October 1990. Its great richness lay in its exploration of the need for
greater interdisciplinary co-operation between scientists, philosophers,
and theologians. Let me renew today the appeal I made then for dialogue,
which becomes more and more difficult the more people’s work becomes
specialised. The remarkable experience of dialogue in those few days was
formulated much more recently in the Pontifical Council for Culture’s doc-
ument Towards a Pastoral Approach to Culture (Vatican City 1999):
In the realm of knowledge, faith and science are not to be super-
imposed, and their methodological principles ought not to be con-
fused. Rather, they should overcome the loss of meaning in isolat-
ed fields of knowledge through distinction, to bring unity and to
retrieve the sense of harmony and wholeness which characterizes
truly human culture. In our diversified culture, struggling to inte-
grate the riches of human knowledge, the marvels of scientific dis-
covery and the remarkable benefits of modern technology, the pas-
toral approach to culture requires philosophical reflection as a pre-
requisite so as to give order and structure to this body of knowledge
and, in so doing, assert reason’s capacity for truth and its regulato-
ry function in culture (no. 11).
14 Cf. Paul Poupard, Science in the Context of Human Culture II (Pontifical Academy of Sciences-Pontifical Council for Culture, Vatican City, 1997).
15 Cf. The Mind of God (Simon & Schuster, London, 1992), pp. 14, 15, 232.
Scientia Christi
My dear and learned friends, we have spent time together along the
intricate paths of human knowledge. Having spoken at length of Christ and
science, I feel obliged to offer one final reflection on the knowledge of
Christ: not on what He knew, but on the science which has Him as its
object, the true science of life.
On this subject, I am reminded of some words spoken some forty years
ago by Paul VI. It was in 1963. Paul VI had just been elected Pope and was
welcoming the Members of the Pontifical Academy of Sciences for the first
time. Perhaps some of you were present, along with me, then one of the
Pope’s youngest colleagues. He expressed his joy over a stimulating cer-
tainty: ‘the religion we are happy to profess is actually the supreme science
of life: thus it is the highest and most beneficial guide in all the fields
where life manifests itself’. He concluded by developing a very beautiful
thought: ‘religion may appear to be absent when it not only allows but
requires scientists to obey only the laws of life; but – if we look more close-
ly – religion will be at their side to encourage them in their difficult task of
research, reassuring them that the truth exists, that it is intelligible, that it
is magnificent, that it is divine’.16 Christ may appear to be absent, but He is
not. Ladies and gentlemen, in this Jubilee Year, dedicated to Jesus Christ,
allow me to invite you most sincerely to do all you can to acquire this
supreme science.
16 Paul VI, ‘Discorso alla Sessione Plenaria della Pontificia Accademia delle Scienze, 13 October 1963’, in Pontificia Academia Scientiarum, Discorsi indirizzati dai Sommi Pontefici Pio XI, Pio XII, Giovanni XXIII, Paolo VI, Giovanni Paolo II alla Pontificia Accademia delle Scienze dal 1936 al 1986 (Vatican City, 1986), pp. 109-111.
ON TRANSDISCIPLINARITY
JÜRGEN MITTELSTRASS
1 For the following, cf. J. Mittelstrass, “Interdisziplinarität oder Transdisziplinarität?”, in: L. Hieber (ed.), Utopie Wissenschaft. Ein Symposium an der Universität Hannover über die Chancen des Wissenschaftsbetriebs der Zukunft (21./22. November 1991), Munich and Vienna 1993, pp. 17-31, also in: J. Mittelstrass, Die Häuser des Wissens. Wissenschaftstheoretische Studien, Frankfurt/Main 1998, pp. 29-48.
many disciplinary competencies. The same is true of energy and health. But
this means that the term interdisciplinarity is concerned not merely with a
fashionable ritual, but with forces that ensue from the development of the
problems themselves. And if these problems refuse us the favour of posing
themselves in terms of fields or disciplines, they will demand of us efforts
going as a rule well beyond the latter. In other words, whether one under-
stands interdisciplinarity in the sense of re-establishing a larger discipli-
nary orientation, or as a factual increase of cognitive interest within or
beyond given fields or disciplines, one thing stands out: interdisciplinarity
properly understood does not commute between fields and disciplines, and
it does not hover above them like an absolute spirit. Instead, it removes dis-
ciplinary impasses where these block the development of problems and the
corresponding responses of research. Interdisciplinarity is in fact transdis-
ciplinarity.
While scientific co-operation means in general a readiness to co-operate in research, and thus interdisciplinarity in this sense means a concrete
co-operation for some definite period, transdisciplinarity means that such
co-operation results in a lasting and systematic order that alters the discipli-
nary order itself. Thus transdisciplinarity represents both a form of scientif-
ic research and one of scientific work. Here it is a question of solving prob-
lems external to science, for example the problems just mentioned concern-
ing the environment, energy or health, as well as a principle that is internal
to the sciences, which concerns the order of scientific knowledge and scien-
tific research itself. In both cases, transdisciplinarity is a research and scien-
tific principle, which is most effective where a merely disciplinary, or field-
specific, definition of problematic situations and solutions is impossible.
This characterisation of transdisciplinarity points neither to a new (sci-
entific and/or philosophical) holism, nor to a transcendence of the scientif-
ic system. Conceiving of transdisciplinarity as a new form of holism would
mean that one was concerned here with a scientific principle, that is to say
a scientific orientation, in which problems could be solved in their entirety.
In fact, transdisciplinarity should allow us to solve problems that could not
be solved by isolated efforts; however, this does not entail the hope or intent
of solving such problems once and for all. The instrument itself – and as a
principle of research, transdisciplinarity is certainly to be understood
instrumentally – cannot say how much it is capable of, and those who con-
struct and employ it also cannot say so in advance. On the other hand, the
claim that transdisciplinarity implies a transcendence of the scientific sys-
tem, and is therefore actually a trans-scientific principle, would mean that
institutionally, for instance in the case of new research centres which are
being founded in the USA, in Berkeley, Chicago, Harvard, Princeton and
Stanford,2 where a lot of money is in play. The “Centre for Imaging and
Mesoscale Structures” under construction at Harvard calls for a budget of thirty million dollars for a building of 4,500 m². Here scientists will be investigating questions it would be senseless to ascribe to a single field or discipline. The focus is on structures of a particular order of magnitude, and not
on objects of a given discipline. And there are other institutional forms pos-
sible, which are not necessarily housed in a single building, for instance the
“Centre for Nanoscience (CeNS)” at the University of Munich.
Such centres are no longer organised along the traditional lines of phys-
ical, chemical, biological and other such institutes and faculties, but from
a transdisciplinary point of view, which in this case is following actual sci-
entific development. This is even the case where individual problems, as
opposed to wide-scope programmes, are the focus, as for example in the
case of the “Bio-X”-Centre in Stanford 3 or the “Centre for Genomics and
Proteomics” in Harvard.4 Here, biologists are using mature physical and
chemical methods to determine the structure of biologically important
macro-molecules. Physicists like the Nobel prize-winner Steven Chu, one
of the initiators of the “Bio-X” programme, are working with biological
objects which can be manipulated with the most modern physical tech-
niques.5 Disciplinary competence therefore remains the essential precondi-
tion for transdisciplinarily defined tasks, but it alone does not suffice to
deal successfully with research tasks which grow beyond the classical fields
and disciplines. This will lead to new organisational forms beyond those of
the centres just mentioned, in which the boundaries between fields and dis-
ciplines will grow faint.
Naturally this holds not just for university research, but for all forms of
institutionalised science. In Germany, for instance, these present a very
diverse picture, which ranges from university research, defined by the unity
of research and teaching, to the Max Planck Society’s research, defined by
path-breaking research profiles in new scientific developments, to large-
2 Cf. L. Garwin, “US Universities Create Bridges between Physics and Biology”, Nature 397, January 7, 1999, p. 3.
3 Cf. L. Garwin, ibid.
4 Cf. D. Malakoff, “Genomic, Nanotech Centers Open: $200 Million Push by Harvard”, Science 283, January 29, 1999, pp. 610-611.
5 Cf. L. Garwin, ibid.
‘ILLICIT JUMPS’ – THE LOGIC OF CREATION

MICHAEL HELLER
1 Sometimes one speaks about a meaning in relation to purely formal languages but, in such a case, the meaning of a given expression is to be inferred from the rules of how this expression is used within the language.
vice versa), is from the point of view of strict logic an ‘illicit jump’. In spite
of this, we all speak natural languages and surprisingly often we under-
stand each other (with a degree of accuracy sufficient to act together and
communicate with each other). But this is a pragmatic side of the story
which I have decided to put to one side.
6. In traditional philosophy, going back to medieval scholasticism,
there was a fundamental distinction between the epistemological (or
logical) order (or level) and the ontological order (or level). It roughly
corresponded to the modern distinction between syntaxis and semantics
with a shift of emphasis from the relationship between language and what
it describes (the modern distinction) to the relationship between ‘what is
in the intellect’ and ‘what is in reality’ (the traditional distinction). The
latter distinction appeared, for example, in the criticism by St. Thomas
Aquinas of the famous ‘ontological argument’ for the existence of God
proposed by St. Anselm of Canterbury. ‘God is something the superior of
which cannot be thought of (aliquid quo nihil majus cogitari possit). And
what does exist is greater than what does not exist. Thus God does exist’
– claimed St. Anselm. St. Thomas did not agree: the statement ‘God is
something the superior of which cannot be thought of’ belongs to the
epistemological order, whereas ‘God does exist’ belongs to the ontologi-
cal order, and the ‘proof’ consists of the ‘illicit jump’ between the two
orders. This distinction became one of the corner stones of the Thomist
system and was strictly connected with its epistemological realism.
Discussions like that between St. Anselm and St. Thomas (and its conti-
nuation by Descartes, Leibniz and Kant) paved the way for the modern
logical analysis of language.
7. Strangely enough, modern tools of linguistic analysis can be used to
more effectively understand the functioning of the universe, or, more
strictly, to see more clearly where the gaps in our knowledge of it are loca-
ted. The point is that nature seems often to employ linguistic methods in
solving some of its fundamental problems. I shall briefly touch upon three
domains in which this ‘linguistic strategy’ of nature can be seen quite trans-
parently. All these domains are in fact fundamental as far as our under-
standing of the world is concerned.
8. The first of these domains is the genetic code. In fact ‘code’ is here
a synonym for ‘language’. As is well known, it consists of linear strings of
only four bases playing the role of letters in the ‘alphabet of life’. ‘The
linear sequence of these four letters in the DNA of each species contains
the information for a bee or a sunflower or an elephant or an Albert
Einstein.’ 2 This is clearly the syntactic aspect of the genetic code. The
point is, however, that the ‘syntactic information’ must be implemented
within the biological machinery. Syntaxis must generate semantics, and do even more than this. After all, living beings are not purely linguistic
concepts, but things that are real. For this reason, the old philosophical
vocabulary about the epistemological and ontological orders seems to be
more adequate in this context. The vocabulary but not the rules of tradi-
tional philosophy! The phenomenon of life testifies to the fact that, con-
trary to these rules, the ‘illicit jump’ from the epistemological order to the
ontological order has been made. The genetic code does not only descri-
be certain modes of acting, but also implements the action within the
concrete biological material.
Jacques Monod sees this ‘semantic antinomy’ in the following way. The
biological code would be pointless without the possibility of being able to
decode it or to translate it into action. The structure of the machine which
does that is itself encoded into the DNA. The code cannot be decoded unless
the products of the code itself are involved. This is the modern version of
the old omne vivum ex ovo. We do not know when and how this logical loop
was closed.3
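A minimal sketch may help to make the purely syntactic level concrete. The following toy fragment, which uses only a handful of the sixty-four real codon assignments and ignores entirely the cellular machinery that actually performs the decoding, reads a string of bases three letters at a time and looks each triplet up in a table:

# Toy illustration of the syntactic side of the genetic code: a codon table lookup.
# Only a few of the sixty-four codon assignments are included; the machinery that
# performs this decoding in a cell is itself encoded in the DNA being decoded.
CODON_TABLE = {
    "ATG": "Met",   # also the usual start signal
    "TTT": "Phe",
    "AAA": "Lys",
    "GGC": "Gly",
    "TGG": "Trp",
    "TAA": "STOP",
}

def translate(dna):
    """Read a DNA string three letters at a time and look up each codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" marks codons outside the toy table
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTAAAGGCTGGTAA"))  # ['Met', 'Phe', 'Lys', 'Gly', 'Trp']

The lookup itself is trivial; the point of the passage is precisely that nature had somehow to close the loop between such a table and the material machinery, itself a product of the table, that realises it.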
9. Another domain in which nature uses linguistic tricks to solve its pro-
blems is the functioning of the brain. In this case, the language consists of
electric signals propagating along nerve fibres from neuron to neuron
across the synaptic clefts. In this case the ‘illicit jump’ does not consist of
changing from a purely linguistic level to something external which it
describes, but rather in creating something real which did not exist pre-
viously, namely, consciousness. The problem at stake is much more com-
plex here and our knowledge about it is less adequate. Let us notice that it
is consciousness that has produced human languages and, in this way, the
linguistic property, the starting point of our analysis, is not the beginning
but rather the final product of the whole evolutionary process.
10. The third domain in which a ‘linguistic approach’ seems to be essen-
tial is the universe itself or, to be more precise, the laws of nature which
constitute it or structure it. It is a commonplace to say that the laws of natu-
re are expressed in the language of mathematics. Our textbooks on physics
are full of mathematical formulae which are nothing but a certain formal
2 J.V. Nossal, Reshaping Life: Key Issues in Genetic Engineering (Cambridge University Press, 1985), p. 14.
3 J. Monod, Le hasard et la nécessité (Éd. du Seuil, 1970), p. 182.
language (although rather seldom a purely formal language, i. e., one put
into the form of an axiomatic system). Physicists claim that some of the for-
mulae of this language express the laws of nature. This means that the lan-
guage has its ‘semantic reference’, that it is an interpreted language. Very
roughly speaking, the universe is its ‘model’. I put the word ‘model’ in quo-
tation marks because the universe is not a set of utterances and conse-
quently it is not a model in the technical, semantic sense. In fact, logicians
and philosophers of science construct such semantic models. The strategy
they adopt is the following. They try to express all experimental results,
relevant for a given physical theory, in the form of a catalogue of sentences
reporting these results (these are called empirical sentences), and then com-
pare the two languages: the theoretical language consisting essentially of
mathematical formulae with the catalogue of empirical sentences.
Physicists are usually not impressed by this procedure and prefer to stick
to their own method of approaching the ‘language of nature’.
Here we also encounter an ‘illicit jump’ from the epistemological
order to the ontological order. After all, no mathematical formula, even if it
is semantically interpreted in the most rigorous way, is a law of nature ope-
rating in the universe.
In the case of the genetic code and of the neural code, the language in
question could be regarded as a ‘language of nature’ (the sequences of bases
in DNA and electrical signals in neural fibres are products of natural evo-
lution), and in this context how to change from syntaxis to semantics seems
to be a non-trivial problem. However, as far as the laws of nature are con-
cerned the mathematical language in which they are expressed is the lan-
guage created by us. By using this language we only describe certain regu-
larities occurring in the real universe, and we could claim that the problem
of the mutual interaction between syntaxis and semantics is no more com-
plicated than in our human languages.
I do think, however, that there is a subtle and deep problem here. Every
property of the world can be deduced from a suitable set of laws of nature,
i.e., from a suitable set of suitably interpreted mathematical formulae – all
with the exception of one, the most important, namely, existence.
Nowadays physicists are able to produce models of the quantum origin of
the universe out of nothingness, but in doing so they must assume (most
often they do so tacitly) that the laws of quantum physics exist a priori with
respect to the universe (a priori in the logical sense, not necessarily in the
temporal sense). Without this assumption, physicists would be unable even
to start constructing their models. But somehow the universe does exist. Is
SCIENCES AND SCIENTISTS
AT THE BEGINNING OF THE TWENTY-FIRST CENTURY*
PAUL GERMAIN
INTRODUCTION
Let us recall first three recent international events which show that the
situation of sciences and scientists is significantly changing:
* This is a proposed declaration drawn up by Prof. P. Germain, Academician and Councillor of the Pontifical Academy of Sciences, which the Council of the Academy decided to publish in the Proceedings of the Jubilee Session.
a) The IAP
This is something like a ‘working force’ which will carry out studies for
international organisations such as the World Bank and committees con-
nected with UNO (the United Nations). It is chaired by two countries, and
these countries at the present time are the USA and India. The Secretariat
is in the Netherlands. The Board has fifteen members. One of the co-chairs
of the IAP attends the meetings of the IAC Board.
These three events are very important. They show that the situation of
sciences and scientists in relation to society is undergoing a major change.
The first and traditional task of the sciences and scientists remains, of
course, the development of our knowledge in all the scientific disciplines.
At the international level, the ICSU is the organisation which stimulates
this progress and promotes the necessary cooperation.
The second task is rather new. It requires a ‘New Commitment’. As was
written in the Budapest ‘Declaration on Science and the Use of Scientific Knowledge’: ‘The sciences should be at the service of humanity as a whole and should contribute to providing everyone with a deeper understanding
of nature and society, a better quality of life and a sustainable and healthy
environment for present and future generations’.
For this second task, the Academies of Sciences seem ready to play an
important role and this is why they have recently created new tools with
which to face this new duty together with the IAP and the IAC.
To describe what is implied in this new task the following three lines of
thought will be analysed:
I. Science as a necessary basis of technology
II. Science at the heart of culture
III. Science, a fundamental goal of education
A dialogue between science and its application has always existed. But
it has become more effective in recent decades thanks to large institutions
and companies which have organised networks of scientific results and
technical achievements in order to offer goods, products, possibilities, and
many improvements in living conditions. One such network is called a technology.
All these advances in technology have been very beneficial for people.
Their material existence has been greatly improved. They live longer and
have better health. They can enjoy a very rich cultural life.
Despite the fact that the production and justification of scientific results are
This fantastic history has very often given rise to “ideologisation” – the
belief that scientific results have a “metaphysical implication” and that they
provide the only valid knowledge. This was especially the case during the
nineteenth century and in particular during its last decades when many
– The human, social and economic sciences. These are more recent and
have been produced by mankind through a critical analysis of facts, taking
account, when possible, of the results of the natural sciences. They are
concerned almost exclusively with the past and the present, and their tech-
niques and results are often coloured by the cultural context of their
authors.
– The “humanities”. Human thought has a long history which goes back
several centuries before our present era. From thought emerged man’s
knowledge and our present-day sciences. But all these sciences cannot
exhaust the wealth of this thought, which is the basis and the source of our perception of human dignity and of our personality, and in which most of our foundations and aspirations are to be found.
– The ethics of science imply that every scientist obeys a code of “good
behaviour” of a deontological nature.
– There is no frontier to scientific knowledge. But the scientific com-
munity must not forget that the progress of science and its application
must always respect human dignity and benefit the whole of mankind,
each country of the world and future generations, and work to reduce
inequalities.
– Scientific expertise is often provided by ethics committees, which have to be rigorously controlled and in which scientists can use their own expertise. But in many cases, when very serious problems bear on the dignity of the person or on the future of humanity, scientific experts must not be left to deal with positions or questions which are the proper concern of the decision-making bodies of a democracy.
Meeting such a requirement is not easy. One can appoint a multi-disciplinary committee and add, alongside the scientists, a few people who can represent non-scientists. One can also organise debates, especially between
those various schools of belief or opinion which are very concerned with
the destiny of man, in such a way that they can exchange and compare their
positions.
The scientific community has always been concerned with the educa-
tion of future scientists (researchers, lecturers, engineers, doctors). But in
the present context of the new commitment of science and technology within society, a much broader involvement in the education of the general public is of primary importance, in order to make the actions analysed in the two preceding sections more effective.
Science and technology are, without any doubt, the “motor” of our rap-
idly changing world. The recent developments described in section II.1, and the new problems faced by society, are factors which bring out the truth of this statement. But at a simpler level the same may be said of
the experience of daily life. People are surrounded by the products of tech-
nology which make their lives easy, interesting, and fruitful, although some-
times also tiring. Their world is largely shaped by science. Nonetheless, in
most cases, they have no idea of why this is so. That is something which
remains mysterious. They do not understand science and, what is even
more serious, they think that they will never understand it. From this point
of view, they are scientifically illiterate.
That is not a satisfactory situation. Firstly, because access to knowledge,
especially to that knowledge which shapes our lives, is an important com-
ponent of the human being and human dignity. Secondly, because it pro-
duces a number of people who feel that they are excluded from society.
Such a feeling of exclusion must be avoided. Lastly, because it prevents the
sound working of democracy which, as has already been observed above, is necessary in order for the right decisions to be taken on questions which may affect the future of humanity.
Of course, in the front line, there will be scientists. Not only those who
teach in departments devoted to the preparation of future teachers and lec-
turers, but also scientists, researchers, lecturers and engineers who could
draw up new teaching methods well suited to a specific public, new paths
by which to make people participate in new knowledge. But also scientists
from other disciplines, from the history and philosophy of science for
example, and – why not? – scholars from other branches of knowledge. One
CONCLUSION
During the closed session of the Academy, held in the course of the Plenary Session,
many Academicians expressed deep concern at the distorted way in which
recent scientific results, and in particular those relating to genetically
improved plant varieties, have been presented to the public. It was decided to
establish a committee with the task of producing a document on this subject.
The chairman of the committee was A. Rich and its other members were W.
Arber, T-T. Chang, M.G.K. Menon, C. Pavan, M.F. Perutz, F. Press, P.H. Raven,
and R. Vicuña. The document was examined by the Council at its meeting of
25 February 2001, submitted to the members of the Academy for their com-
ments, and then sent to the committee for the preparation of the final version.
The document, which is included in the Proceedings, expresses the con-
cerns of the scientific community about the sustainability of present agri-
cultural practices, and the certainty that new techniques will be effective. At
the same time, it stresses the need for the utmost care in the assessment
and evaluation of the consequences of each possible modification, and on
this point we cannot but recall the exhortation of John Paul II regarding
biotechnologies made in his speech of 11 November 2000 on the occasion
of the Jubilee of the Agricultural World: ‘they must be previously subjected
to rigorous scientific and ethical control to ensure that they do not give rise
to disasters for the health of man and the future of the earth’.
The document also expresses concern about excesses with regard to
the establishment of ‘intellectual property’ rights in relation to widely
used crops – excesses which could be detrimental to the interests of devel-
oping nations.
STUDY-DOCUMENT ON THE USE OF ‘GENETICALLY MODIFIED FOOD PLANTS’
II. RECOMMENDATIONS
The Challenge
3. Virtually all food plants have been genetically modified in the past;
such a modification is, therefore, a very common procedure.
4. The cellular machinery of all living organisms is similar, and the mix-
ing of genetic material from different sources within one organism has
been an important part of the evolutionary process.
5. In recent years, a new technology has been developed for making
more precise and specific improvements in strains of agricultural plants,
involving small, site-directed alterations in the genome sequence or some-
times the transfer of specific genes from one organism to another.
III. BACKGROUND
Ever since the start of agriculture about 10,000 years ago, farmers
have selected plant variants that arose spontaneously when they offered
increased productivity or other advantages. Over time, new methods for
producing desirable genetic variants were introduced, and have been used
extensively for some two centuries. Cross-breeding of different plant vari-
eties and species, followed by the selection of strains with favorable char-
acteristics, has a long history. That process involves transferring genetic material, DNA, from one organism to another. DNA contains genes, and
genes generally act by expressing proteins; thus the newly modified plant
obtained by genetic crossing usually contains some proteins that are dif-
ferent from those in the original plant. The classical method of crossing
plants to introduce new genes often brings in undesirable genes as well as desirable ones, since the process could not be controlled.
1 The C. elegans Sequencing Consortium, 1998. ‘Genome Sequence of the Nematode C. elegans: A Platform for Investigating Biology’. Science 282: 2012-18.
2 The Arabidopsis Genome Initiative, 2000. ‘Analysis of the Genome Sequence of the Flowering Plant Arabidopsis thaliana’. Nature 408: 796-815.
A large number of genes in all placental mammals are essentially the same,
and about a third of the estimated 30,000 genes in humans are common to
plants, so that many genes are shared among all living organisms.
Remarkably, another reason for the similarities between DNA sequences in different organisms has been discovered: DNA can at times move in
small blocks from one organism to another, a process that is called lateral
transfer. This occurs at a relatively high rate in microorganisms, and it also
occurs in plants and animals, albeit less frequently. Once this has taken
place, the genetic material that has been transferred becomes an integral
part of the genome of the recipient organism. The recent sequencing of the
human genome revealed that over 200 of our estimated 30,000 genes came
from microorganisms,3 demonstrating that such movements are a regular
part of the evolutionary process.
The new technology has changed the way we modify food plants, so
that we can generate improved strains more precisely and efficiently than
was possible earlier. The genes being transferred express proteins that are
natural, not man-made. The changes made alter an insignificantly small
proportion of the total number of genes in the host plant. For example, one
gene may be introduced into a plant that has 30,000 genes; in contrast, clas-
sical cross-breeding methods often generated very large, unidentified
changes in the selected strains.
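In round figures, introducing a single gene into a plant carrying some 30,000 genes alters a fraction of about

\frac{1}{30\,000} \approx 3 \times 10^{-5},

that is, roughly three thousandths of one per cent of the plant’s gene complement.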
Many of the statements made here in abbreviated form have been dealt
with more thoroughly in a number of publications. Among the more sig-
nificant is a report entitled “Transgenic Plants and World Agriculture”,
which was prepared by a committee representing the academies of sciences
of Brazil, China, India, Mexico, the U.K., the U.S., and the Third World Academy of Sciences. In summary, it reached the conclusion that foods produced from genetically modified plants were generally safe, that any new strains needed to be tested, and that the potential ecological problems associated with such new strains needed further investigation. The French Academy of Sciences also issued a very useful report, commenting on many aspects of this issue and dealing especially with the problems of the deployment of GM plants in developing countries.
The accumulating literature in this field has become quite extensive.
Traditional methods have been used to produce plants that manufac-
ture their own pesticides, and thus are protected from pests or diseases.
3 Venter, J. Craig, et al. 2001. ‘The Sequence of the Human Genome’. Science 291: 1304-51.
These goals are highly desirable, but the questions that have arisen often
concern the method of genetic modification itself, not its products. The
appearance of these products has generated a legitimate desire to evaluate
carefully their safety for consumption by human beings and animals, as well
as their potential effects on the environment. As is usual for complicated
questions, there are no simple answers, and many elements need careful
consideration.
Contrary to common perception, there is nothing intrinsic to the genet-
ic modification of plants that causes products derived from them to be
unsafe. The products of gene alteration, just like the products of any mod-
ification, need to be considered in their own right and individually tested to
see if they are safe or not. The public needs to have free access to the meth-
ods and results of such tests, which should be conducted not only by com-
panies that develop the genetically altered plants, but also by governments
and other disinterested parties. Overall, widely accepted testing protocols
need to be developed in such a way that their results can be understood and
can be used as a basis for consumer information.
One of the present concerns is that new genetically modified plants
may include allergens that will make them unhealthy for some people. It
is possible to test these plants to determine whether they have allergens.
Many of our present foodstuffs, such as peanuts or shellfish, have such
allergens, and they represent a public health hazard to that part of the
population with corresponding allergies. It is important that any geneti-
cally modified crop varieties, as well as others produced by traditional
breeding methods, be tested for safety before they are introduced into the
food supply. In this connection, we also note that the new technologies
offer ready methods for removing genes associated with allergens, both in
present crops and newly produced ones.
Another issue concerns the potential impact of genetically modified plants
on the environment. Cultivated plants regularly hybridize with their wild and
weedy relatives, and the exchange of genes between them is an important fac-
tor in plant evolution. When crops are grown near relatives with which they
can produce fertile hybrids, as in the case of maize and its wild progenitor
teosinte in Mexico and Central America, genes from the crops can spread to
the wild populations. When this occurs, the effects of these genes on the performance of the weeds or wild plants need to be evaluated. There is nothing
wrong or unnatural about the movement of genes between plant species.
However, the effects of such movement on the characteristics of each plant
species may vary greatly. There are no general reasons why we should fear
such gene introductions, but in each case, scientific evaluation is needed. The
results should be verified by the appropriate government agency or agencies,
and full disclosure of the results of this process should be made to the public.
Improved Foods
There are many opportunities to use this new technology to improve
not only the quantity of food produced but also its quality. This is illustrat-
ed most clearly in the recent development of what is called “golden rice”,4
a genetically modified rice that incorporates the genes needed to produce a precursor of Vitamin A. Vitamin A deficiency affects 400 million
people,5 and it often leads to blindness and increased disease susceptibility.
Use of this modified rice and strains developed with similar technologies
will ultimately make it possible to help overcome Vitamin A deficiency.
“Golden rice” was developed by European scientists, funded largely by the
Rockefeller Foundation and using some methods developed by a private
company. However, that company has agreed to make the patents used in
the production of this strain freely available to users throughout the world.
When successfully bred into various local rice strains and expressed at high
enough levels, it offers the possibility of helping to alleviate an important
nutritional deficiency. This is just one of several plant modifications that have the potential to produce healthier food.
4. Potrykus, Ingo. 2001. ‘Golden Rice and Beyond’. Plant Physiology 125: 1157-61.
5. Ye, Xudong, et al. 2000. ‘Engineering the Provitamin A (β-Carotene) Biosynthetic Pathway into (Carotenoid-Free) Rice Endosperm’. Science 287: 303-5.
The loss of a quarter of the world’s topsoil over the past fifty years, cou-
pled with the loss of a fifth of the agricultural land that was cultivated in
1950,6 indicates clearly that contemporary agriculture is not sustainable. To
become sustainable, agriculture will need to adopt new methods suitable
for particular situations around the world. These include greatly improved
management of fertilizers and other chemical applications to crops, integrated pest management that includes improved maintenance of populations of beneficial insects and birds to control pests, and the careful management
of the world’s water resources. (Human beings currently use 55% of the
renewable supplies of fresh water, mostly for agriculture.) It will also be
necessary to develop strains of crop plants with improved characteristics to
make them suitable for use in the many diverse biological, environmental,
cultural and economic areas of the world.
Genetically modified plants can be an important component of efforts
to improve yields on farms otherwise marginal because of limiting condi-
tions such as water shortages, poor soil, and plant pests. To realize these
benefits, however, the advantages of this rapidly growing technology must
be explained clearly to the public throughout the world. Also, results of the
appropriate tests and verifications should be presented to the public in a
transparent, easily understood way.
6. Norse, D., et al. 1992. ‘Agriculture, Land Use and Degradation’. Pp. 79-89 in Dooge, J.C.I., et al. (eds.), An Agenda of Science for Environment and Development into the 21st Century. Cambridge University Press, Cambridge.
MICHAEL HELLER
Is there here a similar ‘illicit jump’ to the one in the case of the genetic and neural codes? If so, this ‘illicit jump’ would be no more and no less than the mystery of creation itself.
11. We would fall victim to a facile temptation if we treated the ‘illicit jumps’ considered above as gaps in the structure of the universe which could be (or even should be) filled in with the ‘hypothesis of God’. The true duty of science is never to stop asking questions, and never to abandon the search for purely scientific answers to them. In my view, what seems to be an ‘illicit jump’ from the point of view of our present logic is in fact nature’s fundamental strategy for solving its most important problems (such as the origin of life and consciousness). The limitations of our logic are too well known to be repeated here (Gödel’s theorems, problems with applying standard logic to quantum theory, etc.). My hypothesis is that our ‘Aristotelian logic’ is too simplistic to cope with the problems we are considering at this conference. What we need is not another ‘non-standard’ logical system constructed by changing or rejecting this or that axiom or this or that inference rule. What we need is something radically new: a far-reaching revolution comparable to the change from linear to non-linear physics.
I do not think that we would have any chance of inventing such a logic by experimenting with purely symbolic operations and then having it at our disposal, ready to be applied successfully to the problem of life and consciousness. On the contrary, such a logic will probably emerge from the analysis of concrete empirical investigations, and it will constitute only one component of the solution of the problem.