
PERSPECTIVES ON SUSTAINABLE
TECHNOLOGY

M. RAFIQUL ISLAM
EDITOR

Nova Science Publishers, Inc.


New York
Copyright © 2008 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or
transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical
photocopying, recording or otherwise without the written permission of the Publisher.

For permission to use material from this book please contact us:
Telephone 631-231-7269; Fax 631-231-8175
Web Site: http://www.novapublishers.com

NOTICE TO THE READER


The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or
implied warranty of any kind and assumes no responsibility for any errors or omissions. No
liability is assumed for incidental or consequential damages in connection with or arising out of
information contained in this book. The Publisher shall not be liable for any special,
consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or
reliance upon, this material.

Independent verification should be sought for any data, advice or recommendations contained in
this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage
to persons or property arising from any methods, products, instructions, ideas or otherwise
contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the
subject matter covered herein. It is sold with the clear understanding that the Publisher is not
engaged in rendering legal or any other professional services. If legal or any other expert
assistance is required, the services of a competent person should be sought. FROM A
DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE
AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

Islam, M. Rafiqul.
Perspectives on sustainable technology / M. Rafiqul Islam.
p. cm.
ISBN 978-1-60692-451-8
1. Environmental engineering. 2. Sustainable development. I. Title.
TA170.I74 2008
628--dc22
2007039926

Published by Nova Science Publishers, Inc. New York


CONTENTS

Preface vii
Introduction If Nature Is Perfect, What Is ‘Denaturing’? 1
M. R. Islam
Chapter 1 Truth, Consequences and Intentions:
Engineering Researchers Investigate Natural and
Anti-Natural Starting Points and Their Implications 9
G. M. Zatzman and M. R. Islam
Chapter 2 A Comparative Pathway Analysis of a Sustainable
and an Unsustainable Product 67
M. I. Khan, A. B. Chettri and S. Y. Lakhal
Chapter 3 A Numerical Solution of Reaction-Diffusion
Brusselator System by A.D.M. 97
J. Biazar and Z. Ayati
Chapter 4 Zero-Waste Living with Inherently Sustainable Technologies 105
M. M. Khan, D. Prior and M. R. Islam
Chapter 5 Tea-Wastes as Adsorbents for the Removal
of Lead from Industrial Waste Water 131
M. Y. Mehedi and H. Mann
Chapter 6 Multiple Solutions in Natural Phenomena 157
S. H. Mousavizadegan, S. Mustafiz and M. R. Islam
Index 177
PREFACE

Nature thrives on diversity and flexibility, gaining strength from heterogeneity, whereas
the quest for homogeneity seems to motivate much of modern engineering. Nature is non-
linear and inherently promotes multiplicity of solutions. This new and important book
presents recent research on true sustainability and technology development from around the
globe.
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 1-7 © 2008 Nova Science Publishers, Inc.

INTRODUCTION
IF NATURE IS PERFECT, WHAT IS ‘DENATURING’?

M. R. Islam
Civil and Resource Engineering Dept., Dalhousie University,
Halifax, Nova Scotia, Canada

“There's two possible outcomes: if the result confirms the hypothesis,
then you've made a measurement.
If the result is contrary to the hypothesis, then you've made a discovery.”
– Enrico Fermi

How well do we know nature? In the words of Aristotle (384-322 BC): “The chief forms
of beauty are orderly arrangement (taxis), proportion (symmetria), and definiteness
(horismenon)”. Yet, scientifically, there is no such thing as ‘orderly’, ‘symmetrical’, or
‘definite’ in nature. If Aristotle is right along with all who say “God created us in His image”,
we must have an ugly God. Nearly a millennium ago, long before the Renaissance reached Europe,
Averröes (known as Ibn Rushd outside the Eurocentric world) pointed out that
Aristotelian logic of ‘either with us or against us’ cannot lead to increasing knowledge unless
the first premise is true.
The difficulty, all the way to the present modern age, has been our inability to identify the
first premise.
For Averröes, the first premise was the existence of the (only) creator. This was not a
theological sermon, nor even a philosophical discourse; it was purely scientific. Inspired
by the Qur’an, which cites the root word ilm (science) more frequently than any word except
Allah, he considered the Qur’an the only available, untainted communication
with the creator and linked the first premise to the existence of such a communication.
Thomas Aquinas took the logic of Averröes and introduced it to Europe with a simple yet
fatal modification: he would color the (only) creator as God and define the collection of
Catholic church documentation on what took place in the neighborhood of Jerusalem some
millennium ago as the only communication of God to mankind (hence the title, bible – the
(only) Book).
It is unfortunate because the new doctrine introduced by Thomas Aquinas was
diametrically opposite to the science introduced by Averröes. The intrinsic features of both
God and the Bible were equally dissimilar to the (only) creator and the Qur’an, respectively
(Armstrong, 1994). Unfortunately for Europe and the rest of the world that Europe would
eventually dominate, this act of Thomas Aquinas indeed became the bifurcation point
between two pathways, with origin, consequent logic, and the end being starkly opposite.
With Aristotle’s logic, something either is or is not: if one is ‘true’, the other must be false.
Because Averröes’ ‘the creator’ and Thomas Aquinas’s ‘God’ are both used to denominate
monotheistic faith, science and religion became locked in a conflicting paradox
(Pickover, 2004). As long as no one dares ask, “What relevance does
something that eventuated centuries ago in Palestine have for the religion that was re-invented
during the late Middle Ages to suit the circumstances of European descendants of tribes from
the Caucasus?”, the confusion among religion, ethnicity, and science continues.
Averröes called the (only) creator ‘The Truth’ (in Qur’anic Arabic, the words ‘the
Truth’ and ‘the Creator’ refer to the same entity). His first premise pertained to the book
(Qur’an) that said, “Verily unto Us is the first and the last (of everything)” (89.13). Contrast
this to a “modern” view of a creator. In Carl Sagan’s words (Hawking, 1988), “This is also a
book about God…or perhaps about the absence of God. The word God fills these pages.
Hawking embarks on a quest to answer Einstein’s famous question about whether God had
any choice in creating the universe. Hawking is attempting, as he explicitly states, to
understand the mind of God. And this makes all the more unexpected the conclusion of the
effort, at least so far: a universe with no edge in space, no beginning or end in time, and
nothing for a Creator to do.” What a demented way to describe the creator and His job
description!
Why do we fail to see the divergence of these two pathways? Historically, challenging
the first premise, where the divergence is set, has become such a taboo that there is no
documented case of anyone challenging it and surviving the wrath of the Establishment
(Church alone in the past, Church and Imperialism after the Renaissance). Even challenging
some of the cursory premises has been hazardous, as demonstrated by Galileo. Today, we
continue to avoid challenging the first premise; even in the information age it remains
hazardous, if not fatal, to challenge the first premise or secondary premises. After the
Renaissance, it was the Church and the Government to worry about, now keeping up with the
culture of The Trinity, the Corporation has been added to this array of the most powerful
gatekeepers of spurious assumptions. It has been possible to keep this modus operandi
because new “laws” have been passed to protect ‘freedom of religion’ and, of late, ‘freedom
of speech’. For special-interest groups, this opens a Pandora’s box for creating ‘us vs them’,
‘clash of civilizations’ and every aphenomenal model now in evidence (Zatzman and Islam,
2007).
The first paper avoids theological discussion but does manage to
challenge the first premise. Rather than basing the first premise on the Truth à la Averröes, it
talks about individual acts. Each action has three components: 1) origin (intention); 2)
pathway; 3) consequence (end). Averröes talked about the origin being the truth; this paper talks
about an intention that is real. How can an intention be real or false? This paper talks at length
about what is real and equates reality with natural. The paper outlines fundamental features of
nature and shows there can be only two options: natural (true) or artificial (false). The paper
shows Aristotle’s logic of anything being ‘either A or not-A’ is useful only to discern
between true (real) and false (artificial). The paper shows the entire pathway has to be real:
without that, the paper argues, the aphenomenon will set in and the research results will be
harmful to the inventor as well as those who follow him/her (blindly or deliberately). In order
to ensure the end being real, the paper introduces the recently developed criterion of Khan
(2007) and Khan and Islam (2007). If something is convergent when time is extended to
infinity, the end is assured to be real. In fact, if this criterion is used, one can be spared from
questioning the ‘intention’ of an action. If in any doubt, one should simply investigate where the
activity will end up as time, t, goes to infinity. During the review process of this paper, an MIT
PhD (in Engineering) pointed out several verses of the Qur’an. He quoted, “Are the one who
creates and the one who does not create the same?” (17.16).
What is being said in the paper is not new. To the authors’ credit, they were not aware of
these verses while writing the paper. It is possible Averröes knew about these verses in the
context of origin, but Khan (2007), who introduced the criterion of time going to infinity (to
determine the fate of a technology), did not, and thus couldn’t be inspired by them while
developing this criterion. Averröes couldn’t have commented about this criterion nor did he
report any link between the end criterion, the intention and technology development (Zatzman
and Islam, 2007). The first paper continues the discussion of nature and what is natural and
shows with a number of very recent examples how currently practiced research protocols
violate fundamental traits of nature. The first premise of this paper is: Nature is perfect. In
philosophy, this might well mean the creator is perfect, in theology, it might mean, “God is
perfect”; for Averröes, it might mean ‘the Creator is the only creator and is the one who
defines perfection’; but for this paper, this simply is a matter left to the reader. The most
important conclusion of the paper is: if it is not real, it is not sustainable.
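The end criterion invoked above, convergence as time is extended to infinity, lends itself to a simple numerical sketch. The convergence test and the two sample ‘impact’ functions below are illustrative stand-ins, not the formulation given by Khan (2007):

```python
import math

def converges_as_t_grows(f, t_checks=(1e2, 1e4, 1e6, 1e8), tol=1e-6):
    # Sample a cumulative-impact function at ever larger times and test
    # whether successive differences die out (a convergent, 'real' end)
    # or keep growing (a divergent, 'aphenomenal' end).
    values = [f(t) for t in t_checks]
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    return diffs[-1] <= diffs[0] and diffs[-1] < tol

# Hypothetical impact functions, chosen for illustration only:
natural = lambda t: 1.0 - math.exp(-t)   # bounded: levels off as t grows
artificial = lambda t: 0.01 * t          # unbounded: grows without limit
```

On these stand-ins the bounded function passes the test and the unbounded one fails it, mirroring the claim that only what converges as t goes to infinity can have a real end.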
There are no less than 20 million known chemicals. Most of them are natural. In the
modern age, we have managed to create some 4000 of them, all artificially, both in process
and ingredient. Because the natural process has been violated, these become inherently toxic,
meaning nature has to fight them throughout their pathways, extending over an infinite time
period. The fate of all these 4,000 chemicals shows that we do not need to wait an infinite
period to discover we had no business ‘creating’ them (Globe and Mail, 2006). The second
paper uses a previously developed criterion and demonstrates that unless this criterion is
fulfilled, what we claim to have created will act in exactly the opposite way to what we
claimed it would do. The paper shows with examples that artificial products follow a very
different (in fact: opposite) pathway as a function of time. Even with the same origin, the
pathway would be different, and with different origins, the difference would be even more
stark. If the claim of the first paper needed any physical evidence, the second paper, by Khan
et al., offers it in abundance. This paper shows the reason behind the apparent success of
artificial chemicals. Because the time criterion was not used, all chemicals were thought to be
the same, based only on their compositions. By this reasoning, beeswax and paraffin wax,
vitamin C from an organic source and artificial vitamin C, honey and saccharin, and
virtually all 4,000 synthetic chemicals would appear to be the same as their natural
counterparts. Only recently has it become clear that artificial products do the opposite of the
natural ones. For instance, artificial vitamin C promotes cancer while the natural one blocks it; natural
chromium increases metabolic activities, artificial chromium decreases them; natural
fertilizers increase food value, artificial fertilizers decrease it; and the list continues for all
4,000 artificial chemicals. This is not a matter of proof; it is a reality. The Nobel Prize may
have been awarded to the inventor of DDT, but this did not detoxify its environmental
impacts.
Recently, the United Arab Emirates banned the use of plastic containers for serving hot
beverages. Years of research finally discovered that plastic containers leach out toxic
chemicals at high temperatures. Of course, similarly, DDT was a ‘miracle powder’ for decades
before it was banned. How can we predict behavior with foresight rather than hindsight? How
long will it take to predict that if plastic containers are unacceptable for hot beverages, they
are also unacceptable for cold ones? When will we be able to use the knowledge of the
Arrhenius equation, which shows reaction rate as a continuous function of temperature? The first
paper of this issue claims that the modern age has been hopeless in making use of the modern
analytical mind. Practically all the theories and ‘laws’ are based on some spurious first
premise that conflicts with natural traits. So, should we throw away everything we learned
in the past?
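The Arrhenius argument can be made concrete with a few lines of code. The pre-exponential factor and activation energy below are placeholder values chosen for illustration, not measured leaching parameters for any actual plastic:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(T_kelvin, A=1.0e7, Ea=6.0e4):
    # Arrhenius equation: k = A * exp(-Ea / (R * T)).
    # A (frequency factor) and Ea (activation energy) are placeholders.
    return A * math.exp(-Ea / (R * T_kelvin))

k_cold = arrhenius_rate(278.15)  # roughly 5 C: a chilled beverage
k_hot = arrhenius_rate(363.15)   # roughly 90 C: a hot beverage
```

With these placeholder parameters the hot-beverage rate is a few hundred times the cold-beverage rate, but the cold-beverage rate remains strictly positive: the reaction is slower, not absent, which is exactly the point about foresight versus hindsight.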
Indeed, we can make use of the past experience if we focus on failures and investigate
why things did not work rather than being obsessed with measuring success and pragmatic
results. The non-linear character of any equation that attempts to describe a natural process
has been a point of failure for ‘new science’. We can barely solve any meaningful non-linear problem
without linearizing it first. Linearization changes the very nature of the problem, tailoring the
problem to suit the solution. Conventional wisdom would consider this modus operandi
preposterous, yet pragmatism in new science calls it ‘the modus operandi’. The third paper, by
Biazar and Ayati, attempts to solve a highly non-linear problem without linearizing it first.
The paper does not recover all the essential features of nature, such as multiple solutions, but
it advances knowledge by producing one solution domain that was not known before.
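For readers who want to see such a nonlinearity handled directly, the sketch below integrates the well-mixed (ODE) form of the Brusselator with a classical fourth-order Runge–Kutta step, with no linearization anywhere. The parameter values A = 1, B = 3 and the step size are illustrative choices, not taken from the paper:

```python
def brusselator(state, A=1.0, B=3.0):
    # Well-mixed Brusselator kinetics; x*x*y is the non-linear term
    # that linearization would destroy.
    x, y = state
    dx = A + x * x * y - (B + 1.0) * x
    dy = B * x - x * x * y
    return (dx, dy)

def rk4_step(f, state, h):
    # Classical fourth-order Runge-Kutta step for a 2-component system.
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6.0) * (a + 2.0 * b + 2.0 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0)
for _ in range(20000):           # integrate to t = 20 with h = 0.001
    state = rk4_step(brusselator, state, 0.001)
```

With B > 1 + A² the concentrations settle onto a limit cycle rather than a fixed point, one of the genuinely non-linear behaviours that a linearized treatment cannot reproduce.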
While the ‘Pop Princess’ Tatiana might have made a fortune singing “Things happen because
they should”, an engineering professor in Canada questioned the balanced nature of nature.
He asked, “Nature isn’t really perfect, is it?” This would be considered blasphemy were it not
for the fact that the professor is also an ordained minister of a church. In support of his
argument, he cites the fact that there are deadly bacteria, poisonous hemlock, and potent
venoms. What does this line of argument imply? If it is deadly to human beings, it is not
perfect. If one takes this argument further, it really implies that if something does not advance
the short-term gains of someone, and this someone is in a position to call the shots, then that
‘something’ is imperfect. One doesn’t need Aristotle to see where this argument is leading.
Nature is perfect because, even as it accommodates changes evolved within it and even as
it overcomes insults inflicted on the environment by ‘development’, it remains balanced. If
nature wasn’t perfect, it would implode this very moment. Throughout human history, we
have understood that nature is balanced. Even during the ‘new science’ era, we have accepted
concepts such as mass balance and energy balance with ease, albeit with a myopic vision.
This concept comes from the first premise that matter (and, therefore, energy) cannot be
created or destroyed. This also dictates that any natural (real) phenomenon must also be 100%
efficient. The new science that has claimed ‘nature can be improved (through engineering)’
has not disputed mass and energy balance but it has failed to see mass and energy balance
also means nature is 100% efficient and 100% zero-waste. This failure to see the obvious
outcome of nature’s fundamental premise has led to zero-waste engineering being dismissed as
‘blue sky’ research. The fourth paper, by Khan et al., points out this absurdity of Eurocentric
engineering and proposes a zero-waste process. This paper personifies the pro-nature
technology that satisfies the three fundamental criteria, namely, real origin, real pathway, and
real end. It defines waste (both in energy and mass) as something that is not ‘readily’ usable
by nature. “Readily” here implies characteristic time frame of a process (Zatzman and Islam,
2007; also see Paper 1 of this issue). This definition is scientific, yet it has eluded even the
most pro-environment enthusiasts. For instance, today no criterion exists that would call
converting solar energy into electric energy a waste. Yet, electricity (AC or DC) does not
exist in nature (no, lightning is neither ‘direct’ nor ‘alternating’, since it delivers current flow
in discrete pulses). This paper also explains how a pro-nature process can be confused with an
anti-nature one and vice versa, depending on perception. The paper refines previous work of
Khan and Islam (2007) to show how ‘good’ (real) intention will be beneficial continuously
and ‘ill’ (artificial) intention will do the opposite. If the direction is wrong, the whole concept
of efficiency and speed is perverted. After all, what good is the speed of a movement if the
direction is wrong?
Continuing with the zero waste theme, the fifth paper (by Mehedi et al.) shows how even
organic waste (which is not really waste) can be rendered valuable. Using the example of
Turkish tea (Turkey is one of the world’s biggest consumers and producers of a specific variety
of black tea), the paper shows how tea waste can be used first to remove metallic
contaminants from an aqueous stream, then to grow bacteria that ferment the tea into
organic fertilizers and valuable nutrients. While this technique cannot undo the damage
chemical fertilizers or pesticides may have caused to the tea, it does propose a natural
pathway that would leave our waste products in a state amenable to natural processing.
To Albert Einstein, the leap from three dimensions to four was a matter of
visualizing n dimensions and setting n = 4. Paper 6 introduces the knowledge
dimension and shows that such a dimension is not only possible but necessary. Our knowledge is
conditioned not only by the quantity of information gathered in the process of conducting
research, but also by the depth of that research, i.e., the intensity of our participation in
finding things out. In and of themselves, the facts of nature’s existence and of our existence
within it neither guarantee nor demonstrate our consciousness of either, or the extent of
that consciousness. Our perceptual apparatus enables us to record a large number of discrete
items of data about the surrounding environment. Much of this information we organize
naturally and indeed unconsciously. The rest we organize according to the level to which we
have trained, and/or come to use, our own brains. Hence, neither can it be affirmed that we
arrive at knowledge directly or merely through perception, nor can we affirm being in
possession at any point in time of a reliable proof or guarantee that our knowledge of
anything in nature is complete.
We normally live and work, including conduct research, within these limitations without
giving them much further thought. In this respect, how can we be said to differ significantly
from the residents of Flatland (Abbott, 1884)? Their existence in two dimensions carried on
perfectly comfortably and they developed their own rationalizations to account for
phenomena intervening in their existence that arose from a third dimension of whose very
existence they knew nothing. Einstein demonstrated that nature is four-dimensional, yet we
carry on almost all scientific research and daily life according to an assumption that the world
is three-dimensional and time is something that varies independently of anything taking place
within those three dimensions. Implicit in this arrangement is the idea that, although spatial
dimensions may vary in linear independence of, or in linear dependence upon, one another,
time does not or shall not. To help demonstrate the consequences of this anomaly, consider
the simple problem of determining either the maximum volume of a perfect cube possible
inside a sphere of a given radius r, or the maximum spherical volume possible inside a given
cube of side-length l. For the most general case, requiring the fewest linearizing exogenous
assumptions, we have to consider ∂x, ∂y, and ∂z, including all their possible first-degree
combinations xyz, yxz and zxy. In other words: we have to accept the prospect that changes in
any of the linear dimensions may not be independent of one another. If we are examining this
as a transformation taking place over some period of time t, we would likely add
consideration of a ∂/∂t operation on each of x, y and z. However, would we even think of
examining xyzt, yxzt, zxyt and txyz? Why not? Perhaps treating time as the independent variable
is a rationalization akin to Flatlanders’ treatment of evidence of the third dimension in their
two-dimensional universe. If this is the case, then what is the accomplishment of conscious
knowledge, as distinct from mere recording and cataloguing of perceptions? This gap
between conscious knowledge and catalogues of recorded perception seems very like that
between time as an actual dimension and any of the spatial dimensions taken singly or in
combination, or like that of two-dimensional Flatlanders’ experience of phenomena
intervening from a third dimension outside their ‘natural’ experience. It feels like a dimensional
gap: can we consider the knowledge domain as perhaps a fifth dimension? Its relationship to
the other dimensions is not orthogonal like that of length, width or height to each other;
neither is time’s relationship as a dimension relative to length, width or height. In fact,
however, orthogonality is more a convenient assumption for purposes of a certain kind of
mathematical manipulation than a proper description. In reality, every two-dimensional plane
or surface is simply a collection of points, beyond those on a line, that may also enclose any
other line of that plane or surface; every three-dimensional space is a collection of points on,
as well as in between, any and every plane or surface. All three of these dimensions are,
literally, time-less. The temporal dimension incorporates points in the past and the future
along with all points in the present. The awareness of these points-in-themselves and their
interrelationship(s) forms the fifth dimension we call the “knowledge domain” (Zatzman and
Islam, 2007).
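For reference, the purely spatial, time-independent version of the inscribed-volume problem posed above has simple closed-form answers, which the short sketch below computes:

```python
import math

def max_cube_in_sphere(r):
    # The largest cube inscribed in a sphere has its space diagonal
    # equal to the diameter: side * sqrt(3) = 2 * r.
    side = 2.0 * r / math.sqrt(3.0)
    return side ** 3

def max_sphere_in_cube(l):
    # The largest sphere inscribed in a cube has diameter l.
    return (4.0 / 3.0) * math.pi * (l / 2.0) ** 3
```

These formulas hold only so long as the three spatial dimensions vary independently and time stands outside them, which is precisely the assumption the passage above puts in question.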
Finally, a book review is presented. The book provides a guideline for
developing technologies that are environmentally beneficial and socially responsible, the two
most important features of nature. Among others, the book shows with concrete examples
how pro-nature techniques are not only possible, but an absolute necessity for achieving and
maintaining sustainability.

REFERENCES
Abbott, E., 1884, Flatland: A Romance of Many Dimensions, Macmillan, London.
Armstrong, K., 1994, A History of God, Ballantine Books, Random House, 496 pp.
Globe and Mail, 2006, Toxic Shock, a series on synthetic chemicals.
Hawking, S., 1988, A Brief History of Time, Bantam Books, London, UK, 211 pp.
Khan, M.I., 2007, J. Nature Science and Sustainable Technology, vol. 1, no. 1, pp. 1-34.
Khan, M.I. and M.R. Islam, 2007, True Sustainability in Technological Development and
Natural Resource Management, Nova Science Publishers, NY, 381 pp.
Pickover, C.A., 2004, The Paradox of God and the Science of Omniscience, Palgrave
Macmillan, New York, 288 pp.
Zatzman, G.M. and Islam, M.R., 2007, Economics of Intangibles, Nova Science Publishers,
New York, 400 pp.
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 9-66 © 2008 Nova Science Publishers, Inc.

Chapter 1

TRUTH, CONSEQUENCES AND INTENTIONS:


ENGINEERING RESEARCHERS INVESTIGATE
NATURAL AND ANTI-NATURAL STARTING POINTS
AND THEIR IMPLICATIONS

G. M. Zatzmanᵃ and M. R. Islamᵇ

ᵃ EEC Research Organisation-6373470 Canada Inc.
ᵇ Dep’t of Civil and Resource Engineering, Dalhousie University, Halifax, Canada

ABSTRACT
The contemporary scene is rife with the howling contradiction of an unprecedented
number of scientific tools at Humanity’s disposal for sorting out a vast range of problems
alongside a looming and growing “technological disaster” – a seemingly endless
catalogue of system failures, from airplane crashes to oil-rig sinkings, arising most often
either from some identifiable “human error” in deploying a key technology or from what
are widely reported as “unintended consequences.” Especially in the civil courts of the
United States, an entire multi-billion-dollar sub-industry has emerged in the field of what
is known as “product liability” law to handle compensation claims by individuals for all
manner of conditions, ranging from the contracting of malignant cancers (from exposures
to everything from asbestos to cigarettes) to death or acute pain from faulty pacemakers
and heart-valve replacements. The resultant widespread focus on critical-point failure has
distracted attention away from addressing more fundamental questions about how the
science that is supposed to underlie the technologies implicated in these disasters was
conducted in the first place. This article refocuses on a number of the issues of how
science is conducted in the Information Age, suggesting a way out of the present
conundrum.

“The secret of life is honesty and fair dealing.
If you can fake that, you've got it made.”
– Groucho Marx

INTRODUCTION
For some time, the effects of faked science have been increasingly apparent. As fraud is
not a new story within the human condition, however, little headway has been made in
specifying anything particularly significant about the rise of faked science, other than to
remark that there is also a lot more science about, and therefore such an increase in fakery can
be statistically anticipated.
There has been a lot of discussion and analysis of the effects of technological failures
stemming from science poorly-conceived or badly-done. Much of this discourse centres on a
looming and growing sense of what Nobel Chemistry Laureate Robert F. Curl has called “our
technological disaster” – a seemingly endless catalogue of system failures, from airplane
crashes to oil-rig sinkings, arising most often either from some identifiable “human error” in
deploying a key technology or from what are widely reported as “unintended consequences.”1
The resultant widespread focus on critical-point failure has distracted attention away from
addressing more fundamental questions about how the science that is supposed to underlie the
technologies implicated in these disasters was conducted in the first place.
What had been less prominent until recently was much public expression of concern that
anything be done about this problem. Unfortunately, much of this concern has expressed itself
in the form of campaigns against rising plagiarism in academia, and wringing of hands and
crackdowns in security against the relatively easy and widespread practice of using the World
Wide Web as a screen on which to misrepresent work results or effort to a potentially global
audience. The starting-point of the present article is that the widespread focus on critical-point
failure has distracted attention away from addressing more fundamental questions about how
the science that is supposed to underlie the technologies implicated in these disasters was
conducted in the first place. Disinformation has become the norm rather than the exception in
all fields of scientific endeavour, as the very wide range of items included in the Appendix of
this article illustrates. And underlying this development is a deeper systematic problem,
possessing definite historical origins and direction, which threatens the present and future of
the scientific research enterprise and needs urgently to be addressed.

HISTORICAL OVERVIEW: HOW THE QUESTION OF “INTENTION”
POSED ITSELF DURING THE EARLY MODERN ERA, PRE-WW1
The true wellsprings of modern science are to be found neither in the hushed hallways of
Cal Tech or the Institute of Advanced Studies at Princeton, nor in the lushly-outfitted
laboratories of star researchers at private-sector foundations or public-sector institutions. One
of the most critical of critical points in the arc of the story-line of modern science’s
development emerged about 350 years ago, in Renaissance Italy, at the University of Padua.
Galileo Galilei demonstrated not only the evidence for his proposition that the Earth and other
known planets must revolve around the Sun, but also why the authoritative doctrine approved
for the previous 14 centuries or so and still defended at that time by the Roman Catholic
Church throughout the Western Christian world – that the Sun revolves around the Earth –
must be wrong. That was his great crime, for which the Inquisition was to punish him. Even
as he submitted humiliatingly to the Church’s decision to ban his writings while permitting
him to live, but only in a state of virtual house arrest, he is reported (apocryphally) to have
muttered under his breath: “Eppur si muove!” (“And yet it [i.e., the Earth] moves!”)

1 Especially in the civil courts of the United States, an entire multi-billion-dollar sub-industry has emerged in the
field of what is known as “product liability” law to handle compensation claims by individuals for all manner
of conditions, ranging from the contracting of malignant cancers (from exposures to everything from asbestos
to cigarettes) to death or acute pain from faulty pacemakers and heart-valve replacements. Dr Curl teaches a
course on this theme at Rice University; see http://www.owlnet.rice.edu/~univ113/contact.html.
Many other far more dramatic tales of great scientific accomplishment or failure never
entered, or have long since lapsed from, public consciousness, yet this anecdote lives on
hundreds of years since the events it describes. Indeed: the international mass media marked
the moment at which Pope John Paul II in 1984 “rehabilitated” Galileo, effectively lifting the
moral taint that still operated to enforce the Church’s ban on Catholics reading and
disseminating Galileo’s work (Golden and Wynn, 1984). The underlying material reason why
this particular historic moment cannot be eradicated from people’s consciousness lies in the
event’s significance for all subsequent human development: knowledge could no longer be
retained as the private property of a self-appointed priesthood.
The importance for the development of science lay in Galileo’s approach, which established as general for all science a method already in use by others: to elucidate the facts and features of a phenomenon and to draw warranted conclusions without reference to what any established or officially-approved authority might have to say for or against those conclusions. This clear-cut rejection of arbitrary authority was crucial for the
further explosive development of science in European, and eventually also American and
global, life and letters. People eventually developed a nose for sniffing out signs of the
presence of such arbitrary authority lurking anywhere. During the 17th, 18th and 19th centuries,
especially in the scholarly literature of Westernised / Christianised societies, such lurkers
frequently gave themselves away by introducing talk of “intentions” – meaning: the intentions
of the Deity in human affairs – in opposition to scientific findings that offended or otherwise
triggered them to react. Echoes of this openly anti-scientific manipulation of the notion of
intentions continue in our own day in the discussions and battles over “intelligent design”,
“Creationism” and other challenges to Charles Darwin’s theory of natural selection as the
mechanism of speciation in the animal kingdom.[2]
The economic system that grew and expanded on the basis of the scientific revolution
launched by Galileo treated the factory labourer as the extension of the machinery to which
his creative labouring power was harnessed. The line between the labourer’s humanity and
the thing-ness of the raw materials and machinery became blurred as both were now in the
factory-owner’s power. Increasingly they became merged as the scale of factory production
and its associated systems of distribution and exchange expanded to include the
commodification of many wants that had never before been packaged as “products” for sale
in the market. None of these things, however, yet threatened or affected the scientific research
undertaken to develop and prove the technologies applied in industry.

[2] See infra Appendix C, Item 1.
12 G. M. Zatzman and M. R. Islam

HISTORICAL OVERVIEW, CONT’D: SINCE WW1 – “THERE IS NO GOD BUT MONOPOLY…”
By the time of the First World War, however, this expansion had already followed in the
wake of the increasing concentration of the means of production on the largest possible scale
as well as concentration of the ownership of such giant systems in fewer and fewer hands.
Certain essential characteristic features of this transformed order of social production were
unprecedented. The enterprises were monopolistic and manipulated market competition to
destroy and absorb competitors. Their capital base was increasingly detached from any
previous notions of the individual capitalist’s risk and repositioned as the investment decision
of a collective. These ownership groups included one or more banks or financial institutions
possessing detailed knowledge of existing competitors, competing markets and competing
products as well as other sources of investment capital prepared to forego any say in
management in exchange for a guaranteed rate of return.
The old-style individual factory owner’s obsession with remaining in business riveted his
attention on maintaining whatever was necessary for garnering the average rate of profit in his
industry. The modern monopoly group’s obsession with guaranteeing a rate of return
premised on continuing to grow riveted its attention on going for the maximum profit in every
single undertaking. One of the most crucial weapons of competition at this level was the
development and deployment of technologies not yet possessed or used by others, based on
denying rivals’ access to comparable technological means for as long as possible. Into the
development of the science underlying such technologies, these mandates inserted an
intention that had not played a role before. On top of the well-known examples of the
corrupting of science by deceptive advertising messages was added a far profounder
corrupting of the mission and purpose of the research scientist-engineer, whose instinct to
resist was challenged by the fact that all sources of his present or future employment and
livelihood lay with these giant monopolies, and he must therefore submit to an imposed alien
intention or not expect to eat. Eventually and inevitably, a great deal of the science itself
increasingly incorporated and regurgitated as scientific these monopolising intentions, dressed
up as modern scientific norms.[3]
The intentions underpinning monopoly dictation of scientific and technological
development increasingly threaten Humanity’s future on this planet. Whereas the insistence
upon authority over science by the Papal Index and Inquisition threatened individuals with
torture and worse, the shahadah of modern monopoly – “there is no god but Monopoly and
maximum is Its profit”[4] – threatens entire societies with destruction. And here is also the
point at which an idea actually as old as the earliest emergence of the human species on the
face of the earth, viz., the primacy of the natural environment over anything fabricated by any
of its participants, has emerged in strikingly modern form as the pointer to real solutions:
intervention (by humans or any other agency) that preserves the primacy of the surrounding
environment vis-à-vis any of its constituent elements will follow the natural path and sustain

[3] For example, “proving” the safety and reliability of a new drug by massive clinical trials (which only the wealthiest manufacturers can afford in the first place) and random sampling of the resulting datasets indicates nothing whatsoever about the safety or reliability of the underlying science. See infra Appendix D, Item 3.
[4] The shahadah, or profession of faith that is one of Islam's five pillars, states that “There is no God but God and Muhammad is his Prophet.” By contrast, the non-religious theory and practice of Monopoly seems to rest on just this one pillar alone.

the environment for the long term, whereas intervening according to any other intention in the
short-term will have deleterious anti-Nature, anti-environmental consequences.

INTERVENTION, APHENOMENALITY AND ANTI-NATURE OUTCOMES


Intervention in anything – as a citizen or official in some social, economic or political
situation, or as an engineer transforming a natural process and-or raw materials into some
kind of finished product, whether for sale in the market or as the input to a further stage of
processing – entails a process of formulating and taking decisions. The pathway on which
such a decision is formulated and taken is crucial. Among other reasons for considering these
pathways crucial is the fact that a struggle is waged inside these pathways between that which
is natural and that which is anti-Nature. That which is anti-Nature takes the form of an
unverified assumption, masked as a positive assertion, about what the investigator wishes
existed. In fact, the phenomenon purportedly encapsulated in the (unverified) assumption or
assertion does not exist – either fully or verifiably or even at all – and may more properly be
described as aphenomenal.[5]
The extreme technological complexity of modern systems, coupled to the widespread and
unquestioned acceptance of the fundamental pragmatic principle that “truth is whatever
works”, has rendered Humanity highly vulnerable to aphenomenal modeling of solutions,
long-term or short-term. If the logical or technological imperative is satisfied in the short
term, no one questions its premises. When it fails disastrously, the last thing anyone dares
hint at examining is the underlying assumption(s) that produced the unwanted result. As is
well known, the larger the organisation, the more it may – indeed: must, as career futures
could be at stake – resist modeling how something actually failed. Back in the 19th century,
when the maximum exploitation of people’s labouring power was the main aim of the entire
social and economic order, the expression was popularised that “idle hands are the Devil’s
playground.” In the Information Age of our own time, the Unchecked Assumption has really
taken over that role. Today, the very first check to be carried out when investigating failure
besetting any complex system is the system’s underlying logic, which is where
aphenomenality unleashes its greatest destruction. Any and every situation where the
operational logic is one or another version of the following syllogism is certain to become
beset by aphenomenality:

• All Americans speak French
• Jacques Chirac is an American
• Therefore, Jacques Chirac speaks French

The short-term pragmatic conclusion is true and appears logical, but the pillars of its
logic are completely hollow, vulnerable and false.
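The hollowness of those pillars can be made concrete in a short sketch (the membership sets below are invented purely for illustration, not data from any source): the conclusion evaluates as true even though both premises fail when checked against observation.

```python
# Hypothetical sketch: encode the syllogism and check each premise
# against observed sets instead of accepting it on authority.
# The membership lists are invented purely for illustration.

americans = {"George Bush", "Hillary Clinton"}
french_speakers = {"Jacques Chirac", "Hillary Clinton"}

premise_1 = americans <= french_speakers      # "All Americans speak French"
premise_2 = "Jacques Chirac" in americans     # "Jacques Chirac is an American"
conclusion = "Jacques Chirac" in french_speakers

print(conclusion)            # True: the short-term, pragmatic "result" holds
print(premise_1, premise_2)  # False False: both pillars are hollow
```

A check that stops at the conclusion certifies this logic as sound; only auditing the premises exposes the aphenomenality.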
“Aphenomenality” is a term that has been coined to describe in general the non-existence
of any purported phenomenon or of any collection of properties, characteristics or features

[5] In this particular regard, “expected values” arising from a projection of the probability measure taken on some random variable and then accepted as outcomes on the authority of the mathematical theory without further verification, are a frequent culprit.

ascribed to such a purported but otherwise unverified or unverifiable phenomenon. Research
work undertaken by members of the EEC Research Group into a number of phenomena of
natural science, social science and physical and social engineering has identified an
“aphenomenal model”. This describes an approach to investigating phenomena, and-or
applying the investigation’s result, according to criteria that assume or circumscribe
conclusions in the assumptions and hypotheses that sparked the investigation.
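One small, concrete instance of such a purported-but-unverified quantity is the “expected value” flagged in footnote 5. A minimal sketch (the fair die is just an illustrative stand-in): the expectation of a die is a number that no actual roll can ever produce, so accepting it as an outcome asserts an event that never occurs.

```python
# Sketch: the "expected value" of a fair six-sided die is 3.5,
# yet 3.5 is not a possible outcome of any roll. Treating the
# expectation as an outcome treats a non-occurring event as real.

faces = [1, 2, 3, 4, 5, 6]
expected = sum(faces) / len(faces)

print(expected)           # 3.5
print(expected in faces)  # False: no roll can ever yield it
```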
The process of aphenomenal decision-making is illustrated by the inverted triangle,
proceeding from the top down (Figure 1). The inverted representation stresses the inherent
instability of the model. The source data from which a decision eventually emerges already incorporate their own justifications, which are then massaged through layers of opacity and disinformation.
The disinformation referred to here (Figure 1) is what results when information is
presented or recapitulated in the service of unstated or unacknowledged ulterior intentions.
The methods of this disinformation achieve their effect by presenting evidence or raw data
selectively, without disclosing either the fact of such selection or the criteria guiding the
selection. This process of selection obscures any distinctions between the data coming from
nature or from any all-natural pathway, on the one hand, and data from unverified or untested
observations on the other.
Not surprisingly, therefore, the aphenomenal model of decision-making necessarily gives
rise to an anti-Nature result.
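The selective presentation described above can be sketched in a few lines (the readings are invented for illustration): the same raw record yields a very different “finding” once an undisclosed selection criterion is applied.

```python
# Hypothetical sketch of undisclosed selection: identical raw data,
# opposite impressions. The numbers are invented for illustration.

readings = [-4, -3, -2, -1, 0, 1, 2, 8, 9, 10]  # full record

honest_mean = sum(readings) / len(readings)

# The "disinformation" step: keep only readings above zero, while
# disclosing neither the fact of selection nor the criterion behind it.
selected = [r for r in readings if r > 0]
reported_mean = sum(selected) / len(selected)

print(honest_mean)    # 2.0
print(reported_mean)  # 6.0 -- three times the honest figure
```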
It has been a big advance to detect and make explicit the bifurcation between pro- and
anti-Nature approaches. Of course, some will say: if what you are saying is true, and it is so
simple, why has no one hit on it before? Essentially the answer must be that the truth is
assailed by error and falsehood everywhere. At the same time, we do not advocate that one
must acquire membership in any special priesthood in order to grasp the truth... so, what's the
“secret”? There must be some practical way(s) to avoid an anti-Nature path in the first place,
or to break with such a path when one detects that s/he has gone that way and find a pro-
Nature path instead.

Figure 1. Aphenomenal Model of Decision-Making.



In addition to detecting and making explicit the bifurcation between pro- and anti-Nature
approaches in various circumstances, what is most crucial is also to elaborate some of these
practical ways of guarding against or countering an erroneous choice of pathway. In this
regard, there are three (3) things to check if one is concerned not to be intervening on a path
that is anti-Nature.
First: there is intention. If one lacks a conscious intention, this itself should serve as a
warning sign. It doesn’t matter if everything else seems “alright”: this lack is a decisive
indicator. One of the hazards of the Information Age derives nevertheless from the sheer
surfeit of “data” on any topic, compared to the number and depth of questioning needed in
order to cull, group and analyse what these data mean. Intention means direction. Many
confuse this notion with the notion of “bias”, but although data may be filtered for analysis in
any number of acceptable ways, bias is not one of them. As various works published by
members of the EEC Research Group have stressed, however, “understanding requires
conscious participation of the individual in acts of finding out.”[6] Data in and of themselves
actually disclose nothing… until operated on by an investigator asking questions and then
seeking answers in the data.[7] The sequence of questions posed must serve to establish the
direction of the inquiry. At the same time it should by no means be assumed that such
sequencing is arbitrary. The line of questions an engineer asks about an electric circuit, for
example, changes dramatically once the circuit’s state is altered from open to closed, or vice-
versa.
Second: there is the matter of the character of the knowledge forming the basis of one's
intervention. If the entire source and content of this knowledge base is perception alone, the
character of this knowledge base will be faulty and deficient. The source and content of this
knowledge base must be conscious observation – either by oneself directly, or collected from
sources that are unbiased and otherwise reliable – and the warranted conclusions collected
from, or on the basis of, such observations.
At first glance, this seems like impossibly petty caviling: surely, “data are data”, regardless of their observer, recorder or investigator? Precisely what this approach overlooks is
something that is not a petty matter at all, viz., the reliability of perception in and of itself,
disconnected from any framework, or its source.
Imagine an electric fan rotating clockwise – the clockwise rotation is actual: the truth, not
a matter of perception or blind faith[8] – and we are seeking a knowledge-based model of this phenomenon. Such a model will necessarily also be dynamic, since knowledge is infinite (at
what time can it ever be asserted that all current information is available and the description
of a process is complete?). Every phenomenon, including this apparently trivially simple

[6] This injunction was first stated this way 40 years ago in a powerful pamphlet by Hardial S. Bains entitled Necessity for Change (London, 1967). Within its own work, the EEC Research Group has taken this in a definite direction; see for example Islam (2003) and Zatzman and Islam (2006).
[7] The perpetration of the “Terrorism Information Awareness” (TIA) database project headed by Adm. Poindexter (an individual publicly discredited in the 1980s for abetting the illegal “Iran-Contra” arms deal), and intended to “detect patterns that predict potential future outbursts of Islamic extremism” from applications filed for U.S. visas or citizenship, logs of transoceanic cell-phone calls to or from U.S. locations and other “data sources”, was and is possible only because some people bought the argument that data would provide answers, without worrying either about nailing bias or even first making explicit the questions that are to be asked. For an insightful critique of TIA, see http://www.epic.org/privacy/profiling/tia/
[8] Any model that predicts contrary to the truth is a model based on ignorance. Thus, for example, to predict something based on the earth being flat cannot pass. That would be an example of an aphenomenal model. A knowledge-based model, by contrast, should predict the truth. See Zatzman & Islam (ibid.)

phenomenon of a clockwise-rotating fan, necessarily incorporates some room for further
investigation. As long as the direction of the investigation is correct – something ascertainable
from the intention of the investigator – further facts about the truth will continue to reveal
themselves. Over time, such investigation takes on a different meaning as the observed
phenomenon is constantly changing. The starting point was quite explicit: the fan rotates
clockwise, so any model proclaiming that the fan appears to rotate counter-clockwise would do nothing for the truth. Despite the appearance, such a proclamation would in fact
simply falsify perception. For instance, if observation with the naked eye is assumed to be
complete, and no other information is necessary to describe the motion of the electric fan, an
aphenomenal result would be validated.
Using a strobe light, however, and depending on the applied frequency of the strobe, the
motion of the electric fan can be shown to have reversed. If the information is omitted that the
observation of the electric fan was carried out under a strobe light, then knowledge about the
actual, true motion of the fan would be obscured (to say the least). Simply by changing the
frequency, perception has been rendered the opposite of reality.
Now, however, the problem compounds. How can it be ensured that any prediction of
counter-clockwise fan rotation is discarded? If mention of the frequency of the light under
which the observation was being made is included, it becomes obvious that the frequency of
the strobe is responsible for perceiving the fan as rotating counter-clockwise. Even the
frequency of the light would be insufficient to re-create the correct image, since only sunlight
can guarantee an image closest to the truth. Any other light distorts what the human brain will
process as the image. Frequency is the inverse of time, but as this example serves to make
more than clear: time is the single most important parameter in revealing the truth.
This is the case even for a phenomenon that is deliberately human-engineered in every
aspect – no part of this fan or its operation is reproduced anywhere in the natural
environment. Two things were observed that directly conflicted: clockwise and counter-
clockwise rotation. However, by knowing the time-dependence (frequency), what was true
could readily be differentiated from what was false.
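The strobe example is an instance of what signal processing calls temporal aliasing, and the arithmetic behind the reversed perception can be sketched directly (the rotation and strobe rates below are illustrative, not from the text): between flashes the fan advances a true angle, but the eye can only register that angle folded into (−180°, 180°], so a clockwise fan can appear to turn the other way.

```python
# Sketch of the strobe effect (temporal aliasing); rates are illustrative.

def apparent_step(rev_per_s, strobe_hz):
    """Perceived rotation per flash, in degrees.

    Positive = clockwise (the true sense); negative = counter-clockwise.
    """
    true_step = 360.0 * rev_per_s / strobe_hz  # real advance between flashes
    folded = true_step % 360.0                 # all the eye can distinguish
    return folded - 360.0 if folded > 180.0 else folded

# A fan turning clockwise at 10 rev/s:
print(apparent_step(10, 120))  # 30.0  -> perceived correctly, clockwise
print(apparent_step(10, 12))   # -60.0 -> perceived rotating backwards
```

Only by restoring the omitted time information – the strobe frequency – can the perceived reversal be reconciled with the true clockwise motion, which is precisely the passage’s point about time as the decisive parameter.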
All steady-state models, models based on a central equilibrium, are aphenomenal. Their
most important features are their tangibility and the confinement of their operating range to
time t = ‘right now’. However, although steady-state models, and therefore also all tangible
models, are inherently aphenomenal, intangible models are inherently knowledge-based.
Intangible models are what can be found everywhere and anywhere in their normal habitat in
the natural environment. Such models, by including the phenomenon in its natural state,
necessarily also incorporate information and facts that no observer may yet have sorted out,
yet which “come with the territory” – like the skin of a banana enclosing the edible, i.e., tangible, inside that we probably actually intend when we refer to a “banana”. The key
to sorting everything out is conscious observation of the phenomenon – engineered or natural
– in its characteristic mode of operation.
Third: the temporal element of the observations collected must be made conscious. The
intervention must be arranged in the light of such consciousness if the result is not to take people down an anti-Nature path, or otherwise serve an anti-Nature intention. Time is the
fourth dimension in which everything in the social or natural environment is acted out. As the
ancient Greek philosopher Heraclitus was probably the first European to point out, motion is
the mode of existence of all matter.[9] Such is the content of the temporal existence of matter in
all its forms, whether observed (and measured) as energetic oscillation (i.e., frequency), or as
duration (i.e., time, which is the reciprocal of frequency).
Note that this generalisation applies to matter in sub-atomic forms. The sub-atomic arena
is one in which Newton’s famous Laws of Motion, which contemplated matter in the form of
object-masses, sometimes encountered serious contradictions. For example, Newton’s First
Law of Motion speaks of “objects” remaining at rest unless acted on by an external force. The
atoms of any object are at rest at no time. Therefore, since (as we now know) electrons and
protons are not spherical “object-masses” whirring about inside a given object-mass, the
locating of a source of alleged “external force” supposedly responsible for the undoubted and
extensive motion of particulate matter at the sub-atomic level of any object-mass becomes
seriously problematic. The notion that motion is the mode of existence of matter in all forms
incorporates the temporal dimension – in this definition, time is truly “of the essence”. By
contrast, in the Newtonian universe, time is an independent variable that can be disregarded
when object-masses of interest are not in motion. In the Newtonian worldview, the steady-
state is timeless. The physics of Heraclitus was rendered in modern European philosophy by
G.W.F. Hegel (1821 [1952 Eng. tr.]) as well as in the religion and philosophy of non-
Buddhist civilisations of ancient India. According to any of these, Newtonian science’s
“steady state” is utterly aphenomenal because it eliminates any role for time, an act which in
itself is anti-Nature. As Einstein demonstrated, once time is restored to its proper place as an
entire fourth dimension of the material universe and ceases to be treated as a mere
independent variable, the applicability of Newton’s Laws of Motion undergoes serious
modification.[10]

FUNDAMENTAL MISCONCEPTIONS IN TECHNOLOGY DEVELOPMENT


Any meaningful discourse about what constitutes sustainable technology development must first be informed by a tour d’horizon of the present state of what constitutes
“knowledge”. The conventional thought process throughout all fields of engineering science
is based on short-term vision that gives rise to numerous misconceptions. The elimination of
misconceptions, which is a permanent standing aim of all scientific work, not only fortifies
and clarifies what is true, but also promotes long-term thinking. It follows that, for those
interested in retaining a healthy scientific discourse, long-term thinking should be promoted
instead of short-term vision. Unfortunately, the short-term vision of the conventional thought-
process --- there is this immediate problem posing itself in finite time and space, for which
the following solution is proposed --- retains a tenacious grip, and if the proposed solution
“works”, then… that becomes the truth. Truth becomes… “whatever works”. The very
success of such a pragmatic approach in the short term --- we now have something that
“works”, who cares how or why? --- itself promotes a certain indifference about the urgency
and necessity to eliminate misconceptions or isolate false perceptions. Misconceptions and

[9] No complete original of his one known book, On Nature, survives. Plato and others cited the most famous phrase everywhere attributed to him: “All things are in flux.” See http://www2.forthnet.gr/presocratics/heracln.htm
[10] These and many other aspects of time are discussed at greater length in Chapter 2, “A Delinearised History of Time”, in Zatzman & Islam (ibid.)

falsified perception have always led to downplaying, ignoring or even eliminating altogether
any long-term considerations such that --- depending on how much information has been
hidden --- any scheme looks appealing. Sustainability, on the other hand, can only come
through knowledge, and knowledge can only be clarified and fortified in the process of
eliminating misconceptions and false perceptions from the very processes of thought. By
tracking the first assumptions of all engineering and natural science ‘laws’, the following
misconceptions are identified.
a) Truth is whatever works: This pragmatic approach is the most important misconception to have prevailed over the last 350 years. A fundamental requirement of any scientific definition of truth is that it be “phenomenal” throughout time and space, yet independent of any particular time or space. Another way to state this is that the truth must exist in all four dimensions of nature – not merely in one, two or three spatial dimensions with time t as an independent variable or absent altogether, nor only in time without regard to space (i.e., it must exist with respect to mass or energy or both).
The necessity and significance of this prerequisite is obscured the moment a truth is
accepted that was based on findings tied to some finite time period. The truths of nature-
science, i.e., of science based on that which actually exists in nature, incorporate all three
spatial dimensions and time, the fourth dimension. They are not constructed as superpositions
of things that are true of some abstract construct such as a single linear-spatial dimension, or
of two linear-spatial dimensions, or of three linear-spatial dimensions but with time
appearing, if at all, only as an independent variable.
Can anything be proven “true” in any absolute sense outside space and time? Imagine trying to prove 1+1=2: our recent work shows that proving 1+1=2, an abstraction that does not exist, can only be done once it is given a tangible meaning (Islam and Zatzman, 2006b). Consider the statement that a linear structure exists when the length scale is zero. Such a statement starts a circular logic and inherently falsifies the first assertion (namely, that such a structure exists). So, there can be numerous proofs of many corollaries and theories, as long as they are confined to aphenomenal meanings.
The same applies to ‘verification’, which is nothing but ascertaining the truth. Consider
verifying that someone’s name is “Sam Smith”. We can ask Sam Smith and he can testify that
he is indeed Sam Smith. That does not verify that his name is Sam Smith, because we are
depending on his testimony, so there is an underlying assumption that he is speaking the truth.
The only way we can verify without any doubt is if we witnessed the naming ceremony and the statement was made at the time of the ceremony, not a second later – because even a second later we are relying on people’s memory, as the time has already elapsed.
Every theory has a first assumption attached to it. Figure 2 shows how the impact of the first assumption grows as it becomes more difficult to verify.
b) Chemicals are chemicals. This misconception allowed Paul Hermann Müller, credited with discovering the insecticidal properties of dichloro-diphenyl-trichloroethane (DDT) and awarded a Nobel Prize in medicine and physiology, to glamorize the making of synthetic products. This very old
misconception got a new life, inspired by the work of Linus Pauling (a two-time Nobel prize
winner, in chemistry and peace). Pauling (1968) justified using artificial products no
differently than natural products of the same chemical structure and functions, claiming any
differences as to whether their operational setting, or characteristic environment, was in
nature or a laboratory could be safely disregarded. This had the positive feature of
challenging any latent notions that biological chemicals were mysterious or somehow other
than chemical. But the conclusion that this apparent tangible identity made it safe to disregard
differences in chemicals’ characteristic operating environments or how they were produced in
the first place was unwarranted.

Figure 2. The impact of the first assumption increases as the degree of difficulty goes up in verifying
the first assumption.

The issue of environment was finessed by asserting that since the individual chemicals
could be found in the natural environment, and since the same chemicals synthesised in a lab
or a factory were structurally and functionally one and the same as the version found in raw
form in nature, then such synthesised chemicals could not pose any long term or fundamental
threat to the natural environment. It was all the same as far as nature was concerned, ran the
argument, since, no matter which individual chemical constituent or element from the Periodic Table you chose, it already existed somewhere in nature; how they were synthesised
was glossed over. Furthermore, the fact that almost none of the synthetic combinations of
these otherwise natural-occurring individual elements had ever existed in nature was ignored.
Scientists with DuPont claimed that dioxins existed in nature (those from PVC being the same as those from nature), and that synthetic PVC therefore should not be harmful – even if the “naturally-occurring” version of the dioxin is utterly poisonous.
The mere fact something exists in nature tells us nothing about the mode of its existence.
This is quite crucial, when it is remembered that synthetic products are used up and dumped
as waste in the environment without any consideration being given – at the time their
introduction is being planned – to the consequences of their possible persistence or
accumulation in the environment. All synthetic products “exist” (as long as the focus is on the
most tangible aspect) in nature in a timeframe in which Δt=0. That is the mode of their
existence and that is precisely where the problem lies. With this mode, one can justify the use
of formaldehyde in beauty products, anti-oxidants in health products, dioxins in baby bottles,
bleaches in toothpaste, all the way up to every pharmaceutical product promoted today. Of
course, the same focus, based on the science of tangible, is applied to processes, as well as
products. Accordingly, nuclear fission in an atomic weapon is the same as what is going on
inside the sun (the common saying is that “there are trillions of nuclear bombs going off every
second inside the sun”). In other words: ‘energy is energy’! This misconception can make
nuclear energy appear clean and efficient in contrast to ‘dirty’, ‘toxic’, and even ‘expensive’
fossil fuel. Nuclear physicists seem generally to have shared this misconception, including Nobel laureates Enrico Fermi (in physics) and Ernest Rutherford (in chemistry).
c) If you cannot see, it doesn’t exist. Even the most militant environmental activists will
admit that the use of toxic chemicals is more hazardous if the concentration is high and the
reaction rate is accelerated (through combustion, for example). The entire chemical industry
became engaged in developing catalysts that are inherently toxic and anti-nature (through
purposeful denaturing). The use of catalysts (always very toxic because they are truly
denatured – i.e., concentrated) was justified by saying that catalysts by definition cannot do
any harm because they only help the reaction but do not participate.
The origin of this logic is that “catalyst” replaces “enzyme”; enzymes allegedly also do not participate in the reaction. Consider in this regard the winners of the 2005 Nobel Prize in
Chemistry. Three scientists were awarded the Nobel Prize for demonstrating a kind of nature-
based catalysis that could presumably give rise to ‘green chemistry’. Yves Chauvin explained
how metal compounds can act as catalysts in ‘organic synthesis’. Richard Schrock was the
first to produce an efficient metal-compound catalyst and Robert Grubbs developed an ‘even
better catalyst’ that is ‘stable’ in the atmosphere. To these breakthroughs is ascribed the
possibility of creating processes and products that are ‘more efficient’, ‘simpler’ to use,
‘environmentally friendlier’, ‘smarter’, and less hazardous.
Another example relates to PVC. Defenders of PVC often state that the mere presence of chlorine in PVC does not make PVC harmful: chlorine, they argue, is bad as an element, but PVC is not an element, it is a compound. Here two implicitly spurious assumptions are invoked: 1) chlorine can and does exist as an element; 2) the toxicity of chlorine arises from its being able to exist as an element. This misconception, combined with
‘chemicals are chemicals’, makes up an entire aphenomenal process of ‘purifying’ through
concentration. This ‘purification’ scheme would be used from oil refining to uranium
enrichment. The truth, however, is that if the natural state of some portion of matter, or the
characteristic time of a process, is violated, the process becomes anti-nature. For instance,
chemical fertilizer and synthetic pesticides can increase the yield of a crop, but that crop
would not be the crop that would provide nutrition similar to the one produced through
organic fertilizer and natural pesticides. H2S is essential for human brain activities, yet
concentrated H2S can kill. Water is essential to life, yet ‘purified’ water can be very toxic and leach out minerals rather than nourish living bodies.
Many chemical reactions are valid or operative only in some particular range of temperature and pressure. Temperature and pressure are themselves neither matter nor energy; absent these “conditions of state” in definite thresholds, the reaction itself cannot even take place. Yet, because temperature and pressure are neither matter nor energy undergoing a change, they are deemed not to be part of the reaction. It is just as logical to separate the act of shooting someone from the intention of the individual who aimed the gun
Truth, Consequences and Intentions 21

and pressed the trigger. Temperature and pressure are but calibrated (i.e., tangible) measures of indirect/oblique, i.e., intangible, indicators of energy (heat, in the case of temperature) or mass (per unit area, in the case of pressure). Instead of acknowledging that these quantities are obviously involved in the reaction, in some way that differs from the ways in which the tangible components are involved, we simply exclude them on the grounds of this intangibility.
It is the same story in mathematics. Data outputs from some process are mapped to some
function-like curve that includes discontinuities where data could not be obtained. Calculus
teaches that one cannot differentiate across the discontinuous bits because “the function doesn’t exist there”. Instead of figuring out, for example, how to treat, as part of one and the same phenomenon, both the continuous areas and the discontinuities that have been detected, the derivative of the function obtained from actual observation, taken only over the intervals in which the function “exists”, is treated as definitive. This is clearly a fundamentally
dishonest procedure (Islam et al., 2006).
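The point can be illustrated with a minimal numerical sketch. The data and the helper `central_diff` below are hypothetical, invented purely for illustration: a sampled curve with a missing observation is differentiated by central differences, and any point whose neighbourhood touches the gap simply drops out of the result, which is precisely the exclusion-by-intangibility being criticised.

```python
# Minimal sketch (hypothetical data): a sampled "function" with a gap,
# differentiated by central differences. Points adjacent to the gap
# drop out of the derivative: the discontinuity is excluded rather
# than treated as part of the same phenomenon.
def central_diff(xs, ys):
    """Central-difference derivative; None where a neighbour is missing."""
    out = []
    for i in range(1, len(xs) - 1):
        if ys[i - 1] is None or ys[i + 1] is None:
            out.append(None)  # derivative "does not exist" here
        else:
            out.append((ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1]))
    return out

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, None, 9.0, 16.0]   # observation missing at x = 2
print(central_diff(xs, ys))        # [None, 4.0, None]
```

Every interior point flanking the gap is silently discarded; only the interval where the function “exists” contributes.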

UNDERSTANDING NATURE AND SUSTAINABILITY


In order to reverse Dr Curl’s aptly-labelled “technological disaster”, one must identify an
alternate model to replace the currently-used implosive model. The only sustainable model is
that of Nature. It is important to understand what nature is and how we can emulate nature.
Even though many claims have been made about “emulating” nature, no modern technology
truly emulates the science of nature. It has been quite the opposite: observations of nature
have rarely been translated into pro-nature technology development. Today, some of the most
important technological breakthroughs have been mere manifestations of the linearisation of
nature science: nature linearised by focusing only on its external features (Islam, 2006).
Today, computers process information exactly opposite to how the human brain does (Islam
and Zatzman, ibid.). Turbines produce electrical energy while polluting the environment
beyond repair even as electric eels produce much higher-intensity electricity while cleaning
the environment. Batteries store very little electricity while producing very toxic spent
materials. Synthetic plastic materials look like natural plastic, yet their syntheses follow an
exactly opposite path. Furthermore, synthetic plastics do not have a single positive impact on
the environment, whereas natural plastic materials do not have a single negative impact. In
medical science, what actually happened has proven to be the opposite of every promise made at the onset of commercialisation: witness Prozac®, Vioxx®, Viagra®, etc. Nature, on the other hand, has not allowed a single product with a negative long-term impact. Even the deadliest venom (e.g., cobra, poison-arrow tree frog) has numerous beneficial effects in the long term. This catalogue carries on in all directions: from microwave cooking, fluorescent lighting, nuclear energy, and cellular phones to refrigeration and combustion cycles. In essence, nature continuously improves matter in quality, while modern technologies continue to degrade it into baser qualities.
Nature thrives on diversity and flexibility, gaining strength from heterogeneity, whereas
the quest for homogeneity seems to motivate much of modern engineering. Nature is non-
linear and inherently promotes multiplicity of solutions. Modern applied science, however,
continues to define problems as linearly as possible, promoting “single”-ness of solution,
while particularly avoiding non-linear problems. Nature is inherently sustainable and
promotes zero-waste, both in mass and energy. Engineering solutions today start with a
“safety factor” while promoting an obsession with excess (hence, waste). Nature is truly
transient, never showing any exact repeatability or steady state. Engineering today is obsessed
with standards and replicability, always seeking “steady-state” solutions.
Table 1 shows the major differences between features of natural products and those of
artificial products. Note that the features of artificial products are only valid for a time, t =
‘right now’ (Δt = 0). This implies that they are only claims and are not true. For instance,
artificial products are created on the basis that they are identical (this is the first premise of
mass production and the economics of volume). No components can be identical, let alone
products. Similarly, there is no such state as steady state. There is not a single object that is
truly homogeneous, symmetrical, or isotropic. This applies to every claim of the right hand
side of Table 1. It is only a matter of time before the claims are proven to be false. Figure 3
demonstrates this point. Note that the moment an artificial product comes into existence, it
becomes part of Nature; therefore, it is subject to natural behavior. This is equivalent to
saying, “You cannot create anything; everything is already created”. A case in point can be derived from any of the theories or ‘laws’ advanced by Newton, Kelvin, Planck, Lavoisier, Bernoulli, Gibbs, Helmholtz, Dalton, Boyle, Charles, and a number of others who serve as the pioneers of modern science. Each of their theories and laws rests on a first assumption that does not exist in nature, either in content (tangible) or in process (intangible).
This task of eliminating misconceptions and exposing falsified perceptions involves the
examination of the first assumption of every ‘natural law’ that has been promoted in the
energy sectors. The next step involves the substitution of these assumptions with assumptions
that conform to nature. As an example, consider the ideal gas law. Its first assumption is its
metaphor that represents all gas molecules as rigid spherical balls. Of course, it is only an
analogy but already built into the analogy are assumptions that will produce conclusions that
fit a mechanical system of rigid spherical balls but also violate or misrepresent the physical
laws of operation governing, and other aspects of what is actually possible, with matter in
gaseous phase. Similarly, all fundamental thermodynamic laws assume, first and foremost, a
“steady state”. Because of the first assumptions that are fitted to an aphenomenal entity, linear
models necessarily emerged. Simply adding a non-linear term at a later stage does not
make these models realistic. Indeed, even after a non-linear equation emerges, it is ultimately
solved with a technique that can handle only linear equations (equally applicable to numerical
solution techniques or statistical models). All this follows from the simple fact that merely
negating an aphenomenal model is not in and of itself a sufficient condition for reproducing a
realistic emulation of anything in nature. It is a necessary condition, but it is insufficient.
Consider more closely the ideal gas law model for a moment – and what happened with
various attempts to render nature more realistically without touching any of the underlying
assumptions of this law. This model is based on a rigid spherical balls configuration of the
molecules that are in steady state. The real gas law recognises that a so-called “ideal” gas
does not and cannot exist. So, it adds a z-factor (gas compressibility factor) that makes the
ideal gas law non-linear. However, z is still a function only of pressure and temperature.
The ideal gas law is negated, but… the “improved” version still fails to emulate what was
most critical to capture. The sufficient conditions have not been met. Then there emerge
Virial equations of state: these deny the real gas law, offering a series of coefficients to account for other factors, including intermolecular forces, eccentricity, etc. (At times, it went
up to 29 coefficients). Even with such a wide non-linearity, people always found a single
solution to every problem – amazing! The solution that was preferred was the one nearest the
experimental results – the only difference between this and a lottery being that the latter
demands the exact number. This is equivalent to comparing the effects in order to say that the
cause and the process have both been emulated. The most widely used equation in petroleum
engineering, the Peng-Robinson equation of state, appears to overcome the excessive complexity of the Virial approach: it has only two coefficients. This model is widely popular because it is simple, and its results are again closer to experiment than any other model’s. Once again,
however, this non-linear equation gives unique solutions; occasionally, when two solutions
arise, one is quickly discarded as “spurious”. This is accepted as science, but would anyone
accept the validity of a lottery that draws two winners and then declares one of them a loser?
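The multiplicity of solutions just described can be reproduced in a few lines. The sketch below is illustrative only: the solver `pr_z_roots`, the grid-plus-bisection root finding, and the choice of propane at 300 K and 10 bar are our own assumptions (with standard handbook critical constants). It evaluates the Peng-Robinson compressibility cubic near saturation and finds three real roots; conventional practice keeps the largest (vapour) and smallest (liquid) and discards the middle one as “spurious”.

```python
# Sketch: the Peng-Robinson cubic in Z for propane near its saturation
# pressure yields THREE real roots, not one. Practice keeps the largest
# and smallest and discards the middle as "spurious" -- the very move
# questioned above. Tc, Pc and omega are standard handbook values for
# propane; T and P are illustrative choices.
import math

R = 8.314  # J/(mol K)

def pr_z_roots(T, P, Tc, Pc, omega):
    """Real roots of the Peng-Robinson compressibility cubic (grid + bisection)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    f = lambda Z: Z**3 - (1 - B) * Z**2 + (A - 3*B**2 - 2*B) * Z - (A*B - B**2 - B**3)
    roots = []
    zs = [B + i * (1.5 - B) / 3000 for i in range(3001)]  # Z must exceed B
    for lo, hi in zip(zs, zs[1:]):
        if f(lo) == 0:
            roots.append(lo)
        elif f(lo) * f(hi) < 0:            # sign change: refine by bisection
            for _ in range(60):
                mid = (lo + hi) / 2
                lo, hi = (lo, mid) if f(lo) * f(mid) < 0 else (mid, hi)
            roots.append((lo + hi) / 2)
    return roots

# Propane at 300 K and 10 bar (close to its saturation pressure)
roots = pr_z_roots(T=300.0, P=1.0e6, Tc=369.83, Pc=4.248e6, omega=0.152)
print(roots)   # three real roots: liquid, "spurious" middle, vapour
```

The non-linearity genuinely admits multiple solutions; the uniqueness reported in practice is imposed afterwards by discarding roots.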
The only way these efforts to emulate the behaviour of matter in gaseous phase can be
made realistic is by removing the first assumption of spherical rigid balls that are at a steady
state. This needs to be replaced with realistic dynamic entities that have all the characteristic
features of natural objects (see left hand side of Table 1).

Table 1. Typical features of natural processes as compared to the claims of artificial processes (modified from Khan and Islam, 2006)

Nature (Δt → ∞) (Real) | Artificial (Δt → 0) (Aphenomenal)
Complex | Simple
Chaotic | Steady, periodic, or quasi-periodic
Unpredictable | Predictable
Unique (every component is different) | Non-unique, self-similar
Productive | Reproductive
Non-symmetric | Symmetric
Non-uniform | Uniform
Heterogeneous, diverse | Homogeneous
Internal | External
Anisotropic | Isotropic
Bottom-up | Top-down
Multifunctional | Single-functional
Dynamic | Static
Irreversible | Reversible
Open system | Closed system
True | False
Self-healing | Self-destructive
Nonlinear | Linear
Multi-dimensional | Unidimensional
Infinite degrees of freedom | Finite degrees of freedom
Non-trainable | Trainable
Infinite | Finite
Intangible | Tangible
Open | Closed
Flexible | Inflexible/rigid
24 G. M. Zatzman and M. R. Islam

Proceeding on this path should give rise to a number of fundamental governing equations
that are inherently non-linear. These equations will be solved with non-linear solvers.11 Only
with these solutions may one begin to emulate nature. Table 2 shows how natural processes
are, and how they are opposite to aphenomenal processes. Current models essentially emulate
the aphenomenal model, thereby removing any credibility of the outcome, even if every once
in a while the results may agree with experimental observation.

Table 2. True difference between sustainable and unsustainable processes

Sustainable (Natural) | Unsustainable (Artificial)
Progressive/youth measured by the rate of change | Non-progressive/resists change
Unlimited adaptability and flexibility | Zero adaptability and inflexible
Increasingly self-evident with time | Increasingly difficult to cover up aphenomenal source
100% efficient | Efficiency approaches zero as processing is increased
Can never be proven to be unsustainable | Unsustainability unravels itself with time

Figure 4 shows how this pragmatic approach has led to the promotion of perception
manipulation tactics.

Figure 3. It is only a matter of time before a product or process based on an anti-nature premise is exposed as untrue and collapses.

Sustainable development starts with the source, i.e., a natural intention. The intended
objective is achievable only if the source itself is natural. One cannot achieve an objective

11 Some non-linear solvers have been developed recently by our research group. See Islam et al. (2007), Ketata et al. (2007) and Mousavizadegan et al. (2006).
that is anti-nature. That is the translation of “You cannot fight Nature”. Here, it is important
to be honest, even with oneself. Nature never lies. For sustainability, honesty is not the best
policy, it is the only policy. Having the correct intention is a necessary condition, but it is not
sufficient for achieving sustainability. The process has to be natural, meaning that the features of natural processes as outlined above (in Tables 1 and 2) are not violated. This is the only
management style and technology development mode that can assure sustainability. It is
important to screen current practices vis-à-vis natural processes and determine sustainability.
It is equally important to check new technologies and ensure that a natural process is being followed. In predicting the behavior and functionality of these techniques, one must be equipped
with innovative tools and non-linear mathematical models. By using non-linear mathematics,
even the most simplified prediction can be made closer to reality. By using innovative tools,
the true picture of a physical system will emerge and the guesswork from engineering
practices will be eliminated.

Figure 4. Greening of technological practices must go beyond painting green with toxic chemicals
(photo: R. Islam, Ecuador, 2004).

In the past, a holistic approach to solving technical problems has been missing. The
current model is based on conforming to regulations and reacting to events. It is reactionary in the sense that it is only reactive, never fundamentally pro-active. Conforming to regulations and
rules that may themselves not be based on any sustainable foundation can only increase long-
term instability. Martin Luther King, Jr. famously pointed out, “We should never forget that
everything Adolf Hitler did in Germany was ‘legal’.” Environmental regulations and
technology standards are such that fundamental misconceptions are embedded in them: they
follow no natural laws. A regulation that violates natural law has no chance of establishing a sustainable environment. What was ‘good’ and ‘bad’ law for Martin Luther King, Jr., is
actually true law and false law, respectively. With today’s regulations, crude oil is considered
to be toxic and undesirable in a water stream whereas the most toxic additives are not. For
instance, a popular slogan in the environmental industry has been, “Dilution is the solution to
pollution”. This is based on all three misconceptions in the previous section, yet all
environmental regulations are based on this principle. The tangible aspect, such as the
concentration, is considered – but not the intangible aspect, such as the nature of the
chemical, or its source. Hence, ‘safe’ practices initiated on this basis are bound to be quite
unsafe in the long run.
Environmental impacts are not a matter of minimizing waste or increasing remedial
activities, but of humanising the environment. This requires the elimination of toxic waste
altogether. Even non-toxic waste should be recycled 100%. This involves, first, not adding any anti-nature chemical to begin with and, second, making sure each produced material is recycled, often with value addition. A zero-waste process has 100% global efficiency attached to it. If a
process emulates nature, such high efficiency is inevitable. This process is the true equivalent
of the greening of petroleum technologies. With this mode, no one will attempt to clean water
with toxic glycols, remove CO2 with toxic amines, or use toxic plastic paints to ‘green’. No
one will inject synthetic and expensive chemicals to increase EOR production. Instead, one
would settle for waste materials or naturally available materials that are abundantly available
and pose no threat to the eco-system. As opposed to the “greenwashing” approach common
throughout much of government and industry, which attempts to brainwash people either into
opposing technically feasible alternatives as demonic or into assuming the greening promise
of industry or government is equivalent to their actually going green (see Figure 3), these
proposals are all technically feasible as well as an improvement on our human condition.
In economic analysis, intangible costs and benefits must be included. Starting from
understanding the true nature of energy pricing, a scheme that converts waste into valuable materials can turn the economic onus of the petroleum industry into an economic boon. The
current economic development models lack the integration of intangibles, such as
environmental impact, the role of human factors, and long-term implications (Zatzman and
Islam, 2006). These models are based on very old accounting principles that the Information
Age has rendered obsolete. This points to the need for a thoroughgoing renovation of
economic theory that takes into account the roles of intangible factors such as intention, the
actual passage of historical time, information, etc. so that a new approach to energy pricing is
provided with a basis more solid and lasting than that of some short-term policy objective. This economic development model will make unsustainable practices exorbitantly costly while
making sustainable practices economically appealing.

SIMULATING VS EMULATING NATURE


There are lots of simulators and simulations. They routinely disregard, or try to linearize/
smooth over, the non-linear changes-of-state. They are interested only in what they can take
(away) from Nature. That is why they are useless for long-term planning. Simulators and
simulations start from some structure or function of interest, which they then abstract. These
abstractions negate any consideration of the natural-whole in which the structure or function
of interest resides/operates/thrives. Hence they must inevitably leave a mess behind.
Emulations and emulators are a different story. They start with what is available in
Nature, and how to sustain that, rather than what they would like to take from Nature
regardless of the mess this may leave behind. Great confusion about this distinction is
widespread. For example, people speak and write about how the computer "emulates" the computational process of an educated human. This is an abstraction. The principles of the computer's design mimic an abstraction of the principles of how computation is supposed to work. But, first of all, from the computer side of this scenario, the physical real-world arrangements actually implementing this abstraction of principles operate in reality according to the carefully-engineered limitations of semiconducting materials selected and refined to meet certain output criteria of the power supply and gate logic on circuit boards inside the computer. Secondly, from the computation side, the human brain as a biochemical
complex incorporating matter-that-thinks does not actually compute according to the abstract
principles of this process. It might most generously be called an abstraction being analogised
to another abstraction, but it certainly cannot be properly described as emulation of any kind.
Emulation requires establishing a 1:1 correspondence between operation Xc of the
computer and operation Xb of the human brain. Yet obviously: there exists as yet no such
correspondence. Brain research is quite far from being capable of establishing such a thing.
We have already found many examples where the two processes are exactly the reverse of
one another, from the chip overheating whereas the brain cools, to the division operator in the
brain compared to the addition operator in the computer. As Table 3 suggests, this is repeated
too generally not to be reflecting some deeper underlying principle:

Table 3. Computer and Brain Compared

Element | Computer | Brain
Memory | Doubling every 12-18 months [“Moore’s Law”]; continuously degrading | Continuously renewing; crystallised intelligence grows continuously; even fluid intelligence can grow with more research, i.e., observation of nature and thinking
Size | The bigger the better | No correlation
Thinking | Does not think; computations are 1:1 correspondence-based, between 0 and 1 (linearisation of every calculation) | Fuzzy and variable; no 1:1 correspondence involved or required
Decision | Based on some number (series); any requisite computation is high-speed electronic, but many steps of the process are mechanical/deliberate | Result of the decision is conscious, but not (yet) the process/pathway that precipitates it; hence it seems “spontaneous”
Multitasks | No two tasks at the same time; rigidly serial processing of each task; quick decision | Decision is made after fuzzy logical interpretation of the big picture; underlying this is massive parallelism of neural circuitry
Computation/Math | Quantitative; non-linear problems are linearly simulated | Qualitative (intangible, Δt = ∞); nonlinearity is the norm (linearity would have to be non-linearly simulated)
Limit | Limited by programmer | Unlimited
Energy Supply | 110 volts converted to 3 volts; when worked long, it heats | µ-volt; uses glucose (biochemical energy); brain cools as it works
Hardware | Heavy metals, non-degradable plastics, batteries, etc. | 100% bio-degradable
Regeneration Time | Entirely a function of power supply, whether it is on or off; otherwise, this category N/A (not applicable) | Continuously regenerating according to overall state of the organism
Functionality | Crashes, restarts take time; an insane computer is never found | Never crashes, spontaneous, never stops. Insane humans are those who do not make use of it.

However, in contrast to simulation, emulation does not require selecting any particular
function or set of functions and then abstracting them. It requires a very different reasoning
process that may be defined as “abstracting absence”. This is a process that discovers, or
uncovers, what would have to be present, available or engendered in order to regenerate the
natural result. It is a method that relies on the principle of obliquity to reconstruct, proceeding
step-wise by exhaustion, whatever would be necessary and sufficient. At the conclusion of the
characteristic duration of any process, Nature is in a state of zero net-waste, tolerating neither
surpluses nor shortages. Thus generating and incorporating these necessary and sufficient
conditions should render faithfully at least all the relevant stage(s) of the natural pathway, if
not precisely each and every one of its details. Indeed, it should not be expected that the
details would be duplicated: this is not Nature, after all, but its emulation. Most important:
nothing has been concocted or inserted that is not definitely known, nothing that is
incompletely understood has been excluded just because detailed knowledge of its precise
role(s) may still be incomplete, and nothing has been substituted simply because it creates the
same “effect”.
This merits attention because many simulations rely on feedback loops and threshold
triggers that, while clever and fully exploiting the technology providing the vehicle for the
simulation, may bear no relation to whatever is taking place in nature. The
philosophical outlook governing the modeling of phenomena by means of simulations is the
theory of pragmatism – that the truth is “whatever works”. A simulation that produces the
same effect at its output as the process being investigated is deemed to be “true” if it is
capable of recapitulating the truth at the outcome. For anything entirely artificial and not
given to us first in, by or through Nature, this can undoubtedly be accomplished. However, it
is also trivial, because why not simply duplicate the artificial original? Consider, on the other
hand, whether it is wise to sort out how executive decision-making works among a grouping
of polar bears by dressing one of them in a suit. This provocative and absurd idea brings out
the essence of the real problem: what is the pathway?
One of the most advanced branches of the theory of simulations is the extensive work
that has developed in modeling using what are known as “cellular automata”. These attempt
to cure the serious deficiency of simulation theory with respect to establishing or taking into
account necessary and sufficient conditions by starting from the simplest possible system,
usually with only two or three elements, interacting according to “rules” that have been
programmed for the given simulation.12 The premise common to all these modeling efforts is that what exists in simple, countably finite terms at the microscopic level can be programmed to produce a credible pathway accounting for the infinite complexity observed at
the macroscopic level. The fact is, however, that we are repeatedly discovering that the
microscopic scale is just as infinitely complex as the macroscopic. The only difference is the
technological means available to render it tangible, visible or measurable in some form.
Simulations developed from the principles informing the modeling of cellular automata
implicitly deny any possible dialectical relationships in which quantity is transformed into a
new quality or vice-versa. In cellular automata or any other simulation, quantity can only
grow into a greater quantity or decline to some lesser quantity.
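For concreteness, the kind of minimal rule-driven model discussed here can be written in a few lines. The sketch below follows Wolfram’s elementary-automaton scheme; the `step` helper, the fixed boundaries, and the choice of rule 30 are our own illustrative assumptions. Two states, a three-cell neighbourhood, a lookup rule, iterated row by row: however intricate the printed pattern becomes, the update only ever rearranges quantities of 0s and 1s.

```python
# Sketch: an elementary cellular automaton (Wolfram's rule 30), the kind of
# minimal "rule"-driven model described above: a two-state, three-neighbour
# lookup rule iterated synchronously on a row of cells.
def step(cells, rule=30):
    """One synchronous update of a binary row (fixed 0-boundaries)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        neighbourhood = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        out.append((rule >> neighbourhood) & 1)  # rule bit for this pattern
    return out

row = [0] * 15 + [1] + [0] * 15   # single live cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The output grows visibly complex, yet every step remains a purely quantitative shuffle of cell counts, which is exactly the limitation the text identifies.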
Closely related to this aspect of the theory of simulation is the notion of “functional
groups”. One case in point is the discovery that so-called “junk DNA” is likely highly
functional after all, for a wide range of purposes not previously thought about because… they

12 Among relatively accessible public sources, since the 1980s, a massive number of extremely curious features of such models have been documented regularly in the recreational-mathematics section of Scientific American magazine. As is also well known, Stephen Wolfram, creator of the popular Mathematica software package, is an especially dedicated and serious investigator of modeling phenomena using cellular automata. See Wolfram (2002) and Gardner (1983).
were not detectable before the techniques for mapping complete genomes were perfected
(Kettlewell, 2004; Bejerano et al., 2004).13
Another related problem with the entirely transparent yet entirely mechanical modeling
generated by most simulations is built into the internal logic of the simulation’s
programming. The Aristotelian law of the excluded middle is a fundamental feature of
conventional mathematical logic: it is powerful for excluding a wide range of phenomena
from consideration, by starting from the premise that everything at a certain stage must either be “A” or else “not-A”. The consequences for the credibility of a model of excluding potentially valid assumptions, however, are just as fatal as the consequences of including unwarranted ones: either can contribute to generating unwarranted conclusions.
The fatal flaw inherent in the Aristotelian law of the excluded middle is that what
something may become, or what it has been in some past state, is excluded from the definition
of what “A” is “right now”. If what is sustainable is what is desired, the pathway must be all-
natural. However, an all-natural pathway means that transitional states of energy or mass
before or after time t = “right now” cannot be excluded or discarded: true emulation of nature
must produce something that could only have been produced by a definite past evolution and
that retains the same future potential.
In conclusion: we may indeed proclaim that we emulate nature. It is frequently stated as a truism that engineering is based on natural science, yet the invention of toxic chemicals and anti-nature processes is called chemical engineering. People seriously thought sugar is concentrated sweetness… just like honey, so the process must be the same: “we are emulating nature”! But then, because honey cannot be mass-produced (as sugar is), we proceed to “improve” nature. This takes on a life of its own, one discussed elsewhere as the “Honey → Sugar → Saccharine® → Aspartame®” syndrome (Islam, 2003; Zatzman, 2006). The
chemical and pharmaceutical industries engage in denaturing first, then reconstructing to
maximise productivity and tangible features of the product, intensifying the process that
converts real to artificial (Islam, 2006).

THE “EXTENSIONS OF MAN”


According to the communications theorist H. Marshall McLuhan (1964), media in
general – in which he includes all technologies – are “the extensions of Man”. From a certain
distance and angle, the entrance to the Auschwitz concentration camp in Poland, as depicted
in a world-famous iconic photograph, looks like… a railway station. In every case, the
question is: what is going on inside? What is the pathway? Are all technologies truly “the
13 According to Chris Ponting, from the UK Medical Research Council's Functional Genetics Unit, “Amazingly, there were calls from some sections to only map the bits of genome that coded for protein - mapping the rest was thought to be a waste of time. It is very lucky that entire genomes were mapped, as [the discovery of identical lengthy DNA sequences, in areas dismissed as ‘junk’, across three distinct mammalian species] is showing.” According to researcher Gill Bejerano, “it is often observed that the presence of high proportions of truly nonfunctional ‘junk’ DNA would seem to defy evolutionary logic. Replication of such a large amount of useless information each time a cell divides would waste energy. Organisms with less nonfunctional DNA would thus enjoy a selective advantage, and over an evolutionary time scale, nonfunctional DNA would tend to be eliminated. If one assumes that most junk DNA is indeed nonfunctional, then there are several hypotheses for why it has not been eliminated by evolution, [but] these are all hypotheses for which the time
extensions of Man”, or are some of them shackles forged on the basis of a serious and fundamental misapprehension of Nature, its role, and Humanity’s place?
The data of Table 4 suggest some answers to this question. The table compares what was promised with what was actually accomplished by a number of technologies developed and promoted as helping nature, fixing nature or otherwise improving upon some existing process somewhere in nature. None of these technologies are truly “extensions of Man”: there is no content from the way people work or live expressed directly in the technology.

Table 4. “Extensions of Man”?

Product | Promise | Truth
Microwave oven | Instant cooking (“bursting with nutrition”) | 97% of the nutrients destroyed; produces dioxin from baby bottles
Fluorescent light (white light) | Simulates sunlight and can eliminate “cabin fever” | Used for torturing people; causes severe depression
Prozac (the wonder drug) | 80% effective in reducing depression | Increases suicidal behavior
Anti-oxidants | Reduce aging symptoms | Give lung cancer
Vioxx | Best drug for arthritis pain, no side effects | Increases the chance of cancer
Coke | Refreshing, revitalizing | Dehydrates; used as a pesticide in India
Transfat | Should replace saturated fats, incl. high-fiber diets | Primary source of obesity and asthma
Simulated wood, plastic gloss | Improve the appearance of wood | Contain formaldehyde, which causes Alzheimer’s
Cell phone | Empowers, keeps connected | Gives brain cancer; decreases sperm count among men
Chemical hair colors | Keep young, give appeal | Give skin cancer
Chemical fertilizer | Increases crop yield, makes soil fertile | Harmful crop; soil damaged
Chocolate and ‘refined’ sweets | Increases human body volume, increasing appeal | Increases obesity epidemic and related diseases
Pesticides, MTBE | Improve performance | Damage the ecosystem
Desalination | Purifies water | Necessary minerals removed
Wood paint/varnish | Improves durability | Numerous toxic chemicals released
Freon, aerosol, etc. | Replaced ammonia, which was deemed “corrosive” | Global harms immeasurable; should be discarded

Two examples of technology that did provide such an extension would be the leather saddle, adapted from an Arab invention, and the stirrup, which was copied from the Christian Crusaders’ Arab enemies and brought back to Europe. The stirrup essentially converted the rider’s legs into a thoroughly effective transmission for increasing or reducing the speed of the horse.
Now let us gallop ahead: the horse was an essential feature of Arab life for millennia,
enabling point-to-point communications for tribal Arabs and community-to-community

[Footnote 13, continued] scales involved in evolution may make it difficult for humans to investigate rigorously.” (http://en.wikipedia.org/wiki/Junk_DNA#_note-Bejerano, last accessed 17 December 2006).
Truth, Consequences and Intentions 31

contacts for settled groups in the Arab world. Raising and keeping horses was associated with
a certain maintenance threshold, but meeting that threshold did not consume the remaining
resources of these societies. The same principles applied to the camel, which was even more
practicable in the desert regions of the Arab world. Not only are none of these features true
for the automobile, but its claim on society’s resources continues to grow at a rate that
outstrips almost all other technologies.
That the horse and the camel were so well adapted to Arab society was no mark of backwardness. On the contrary, without the saddle or the stirrup, horse and camel travel could not have acquired society-wide use, and without these means of communication and transport, the socialization possible in these societies would have been far more limited.
The stranglehold exercised by the demands of motorised transport, its maintenance and
extension on energy production planning and other resource extraction activities throughout
every allegedly modern, allegedly advanced Western society poses one of the most serious
obstacles to instituting and spreading nature-based technologies. Does this technology
represent one of the extensions of Man, or one of the extensions of a yoke that blocks
Humanity from approaching its real potential? If a breakdown is made that catalogues each of
the individually bad and anti-nature aspects and consequences of technical developments
associated with the maintenance or extension of motorised transport, any hope for an
alternative course seems impossible. On the other hand, if this problem is instead tackled
from the most general level – what would be a pro-nature path on which to renovate social
means of transport that are environmentally appealing as well as economically attractive? –
then the possibilities become endless.

THE NEED FOR THE SCIENCE OF INTANGIBLES AS THE BASIS FOR ENGINEERING

There are a number of obstacles inherent in the project to establish a science of intangibles based on Nature. Among these obstacles are those that entail our un-learning much of what we thought we knew before we can begin to learn and appreciate the forms and content of nature-science. Chief among this collection of problems is how we have already been trained by society, culture and the education system to conceive and accept the metaphors and correspondences of engineered space and time represented, essentially, in two
dimensions (2-D). It is indeed a considerable accomplishment that, utilizing perspective and
the projective plane implicit in its geometry, what is actually a third spatial dimension can be
represented to us convincingly within a two-dimensional plane. However, the price at which this is achieved is remarked upon far less: the fourth dimension, time itself, is made to
disappear. In fact, whether the context is the fine arts or engineered space and time, we have learned a certain visual "grammar", so to speak, for all spatial visualization and representation. We know no other "language" but that in which either:

(1) time is frozen – as in a snapshot – or
(2) time is represented not as the fourth dimension but rather as something that varies
independently of any phenomenon occurring within it.

The modern history of communications media and information transfer really begins with those famous Canaletto landscapes of 18th-century Italy, incorporating perspective and thereby overthrowing, in that same moment, the centuries-long authority of the Roman Catholic Church over the message we are to receive from works of art. With the
emergence of the new approach in art of the Renaissance, the principles underlying
representational art works of the early and high Middle Ages were reversed. Any previously-
authorized message already vetted carefully as to the acceptability of its content and the
morality of its purpose would hereafter become extraneous and secondary to the information
gathered by the visual cortex of the individual observer.
The new approach made the visual arts accessible at all levels of society for the first time.
Perspective in Renaissance painting, and the findings of anatomy regarding the movement
and distribution of weight in the human frame manifested in Renaissance sculpture,
overthrew the centuries-long monopoly of Church authority with the bluntest directness. This
was bracingly liberating and bound to provoke ever-deeper questioning of Church authority in
other fields. By enabling Humanity to reclaim from Nature something that Authority had
denied, these transformations within mass communications media (turning art into a mass
medium was itself the key to the transformation) unleashed a social and intellectual
revolution. However, even as the new "grammar" of perspective-based representation of three-dimensional space – a space that now appeared to be living rather than to represent a purely imaginary phantasm or idea – overwhelmed the previously accepted canons of the visual arts, and overthrew with them the long-asserted timelessness of the Church's approved truths, the new visual canon served up another illusion of reality: the timeless, snapshot-like image.
Over the next four centuries, expressed as a struggle to capture the moving image, and
later the live image, further development of mass communications media and associated
systems and technologies of information transfer wrestled with just about every imaginable
and practical aspect of how to engineer the appropriate representation of time and space.
Interwoven throughout this development are parts of the history of the development of analog and then digital electronic media, of the transition from the individual or limited-edition static image to the mass-marketed photographic static image, and of the illusion of the moving picture – an illusion
created by overwhelming the visual cortex with 24 still frames per second and then of this
same moving picture with a superimposed sound track (the talking motion picture). Also
interwoven are the stories of the transition from the unmodulated telegraphic signal, whose information is contained in its sequencing, to the modulated signal overlaid with an audio carrier (telephone
and radio), the modulated signal overlaid with visual and audio carrier signals (television), the
encoding of information in digitized sequences (computers), and the digital encoding of
information on a transmitted carrier signal (cell phones, the Internet). All these technological
aspects have been exhaustively discussed and examined by many people. Less cogently commented upon, but at least mentioned, are the political-economic transitions that also
developed within this historical tapestry: from privately-conducted individual, or craft-
oriented, production prior to the Industrial Revolution intended for finite, relatively small
markets of certain individuals here or there, to privately-owned but socially produced output
for mass markets in the 19th and early 20th centuries, to the readily-socialized mass
production of our own time conducted under increasingly narrowly monopolized ownership.
What nevertheless remains unmentioned and uncommented upon anywhere in these historical recapitulations is whatever happened to the tangible-intangible nexus involved at each stage of any of these developments. We cannot hope seriously to make headway

towards, much less accomplish, serious nature-science of phenomena, an authentic science of the tangibles-intangibles nexus, without filling in that part of the tapestry as well. That which
is natural can be neither defended nor sustained without first delimiting and then restricting
the sphere of operation of everything that is anti-Nature.
This absence of discussion of whatever happened to the tangible-intangible nexus
involved at each stage of any of these developments is no merely accidental or random fact in
the world. It flows directly from a Eurocentric bias that pervades, well beyond Europe and
North America, the gathering and summation of scientific knowledge everywhere. Certainly,
it is by no means a property inherent either in technology as such, or in the norms and
demands of the scientific method per se, or even within historical development, that time is
considered so intangible as to merit being either ignored as a fourth dimension, or conflated
with tangible space as something varying independently of any process underway within any
or all dimensions of three-dimensional space.

Figure 5. Logically, a phenomenal basis is required as the first condition for sustainable technology development. This foundation can be the Truth as the origin of any inspiration, or it can be ‘true intention’, which is the essence of intangibles.

Averröes worked with the logic of Aristotle, and identified the flaw in Aristotle’s logic. In Economics of Intangibles (Zatzman and Islam, 2007), we identified this flaw through the argument of externality (an aphenomenal concept) as well as through the lack of multiple solutions (which contradicts natural traits). For instance, the same action can be phenomenal or aphenomenal depending on the intention of the individual (cases in point being suicide versus self-sacrifice, or publicizing charity to promote competition in good deeds versus to promote self-interest). Averröes argued that critical thinking is essential for increasing knowledge. For this, there is no need for a mentor or intermediary; the only condition he imposed was that the first

assumption (first premise) be phenomenal. If this is the case, one is assured of increasing knowledge with time. For Averröes, the first premise was the Qur’an: he argued that all logic should begin with the Qur’an, which would ensure that one travels continuously along the path of knowledge. This was an attractive model, and Thomas Aquinas was so enamored of it that he adopted it entire, with only one revision: he replaced the Qur’an with the Bible. This replacement was an aphenomenal adjustment to Averröes’ model, as knowledge of the time and knowledge of today indicate that literally all features of the Qur’an are opposite to those of the Bible.
The graph shows the continuous overall downward trend of the Aquinas model. The emergence of New Science, as well as the transition to the modern age and eventually to the Information Age, are all part of this graph. The divisions into hard science, social science, philosophy, theology, etc. are all part of the HSSA syndrome (Zatzman, 2006) in the knowledge sense. The occasional upward trends in this graph indicate natural intervention. In social science, for instance, the Bolshevik revolution is a definite upward move. In the socio-economic context, Marx’s recognition of intangibles is the same. In science, Darwin’s theory of natural selection is the one that moves upward in the knowledge dimension. The discovery of time as the fourth dimension, with time itself being relative, was a great scientific move toward knowledge (Einstein’s theory of relativity). In modern engineering, one is hard pressed to find a single example that can be characterized as pro-nature. The following discussion can explain why that is the case.

(1) Mathematics is the driver of engineering. Accounting, reformulated somewhat abstractly as the “rules of arithmetic”, is the root of European mathematics. Accounting in Europe was introduced for collecting tax. In the Averröes model, accounting is also the root of mathematics; however, there accounting was for disbursing zakat (often wrongly translated as ‘Islamic tax’, zakat literally means ‘purify’, relating to long-term ‘increase’, and is given to the poor and destitute – traits exactly opposite to those of tax).
(2) Science is the foundation of engineering. The original science was to increase knowledge, to be closer to nature and all natural traits. New Science is the opposite.
(3) Medicine was to cure. New medicine is to delay the symptom.
(4) Education was to increase knowledge. The same prophet Muhammad who said, “It is obligatory for every Muslim male and female to increase their knowledge” also said, “Knowledge cannot be with a full stomach”. New Education is a corporation that promises a bloated stomach. It also restricts access to non-elites.
(5) Engineering was to emulate nature. Modern engineering and technology is to simulate nature. Here, our examples of the bird, the brain, etc. become important.

The modern-day view holds that knowledge and solutions developed from and within
nature might be either good, or neutral [zero net impact] in their effects, or bad – all
depending on how developed and correct our initial information and assumptions are. The
view of science in the period of Islam's rise was rather different. It was that, since nature is an
integrated whole in which humanity also has its roles, any knowledge and solutions
developed according to how nature actually works will be ipso facto positive for humanity.
Nature possesses an inbuilt positive intention of which people have to become conscious in
order to develop knowledge and solutions that enhance nature. On the other hand, any

knowledge or solutions developed by taking away from nature or going away from nature
would be unsustainable. This unsustainability would mark such knowledge and solutions as
inherently anti-nature.
People are interested in sustainability today as never before. Why is this the case?
Because ignoring the original intention has got the world into a serious mess. We say: there is
a path that can take us away from repeating and-or deepening this mess. To get there,
however, requires first taking an inventory of our existing first assumptions and then "starting
all over again" on the path of nature itself.
Today we would, and could, do this with a great deal more data at our collective disposal, and – especially – more ways of collating everybody’s data from all kinds of sources, than people had 1500 years ago. As we have glimpsed in the area of “ancient Indian mathematics”, much of this was preserved, but by methods that precluded or did not include general or widespread publication. Thus, there could well have been almost as much total reliable knowledge 1500 years ago as today, but creative people’s access to that mass of reliable knowledge would have been far narrower. Only recently did we discover that Islamic scholars were doing mathematics some 1000 years ago of the same order as what we think we discovered in the 1970s (Lu and Steinhardt, 2007) – with the difference being that our mathematics can only track symmetry, something that does not exist in nature. Recently,
a three-dimensional X-ray tomographic scan of a relic known as the ‘Antikythera Mechanism’ has
demonstrated that it was actually a universal navigational computing device – with the
difference being that our current-day versions rely on GPS, tracked and maintained by
satellite (Freeth et al., 2006). We would also be shocked to find out that what Ibn Sina (‘Avicenna’) said regarding nature being the source of all cure still holds true (Grugg and
Newman, 2001) – with the proviso that not a single quality given by nature in the originating
source material of, for example, some of the most advanced pharmaceuticals used to “treat”
cancer remains intact after being subjected to mass production and accordingly stripped of its
powers actually to cure and not merely “treat”, i.e., delay, the onset or progress of symptoms.
Consider the example of chess, which is frequently cited in lists of “the best 10 Muslim inventions”. What is the thinking underlying the creation of such a game? It is, and was, that the INNER LOGIC of a conflict – how it developed and was resolved – can be extracted and recaptured by reproducing the conflict NOT as a narrative of this or that victory, but rather as the outcome of two guiding intelligences pitting themselves against each other. Before Arab learning and culture reached Renaissance Europe, chess made no headway in Christian European civilisation. Why? Could it be because it provided no simple clearcut recipes: if your opponent does X, do Y? The Roman civilisation in which European Christianity emerged – very unlike Greek civilisation before or Arab-Muslim civilisation after – was notoriously incurious about human thought and action in relation to nature.
The modern version of this chess-like idea of gaming, as a form of simulation, is what is
called "operations research" (OR). This was developed during WW2 using the first electronic
computing devices. But how is a "best strategy" selected in a simulation that has been
"gamed" by means of "OR"? This selection is based on solving massive systems of
simultaneous equations in some huge number of variables.
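The kind of calculation involved can be shown in miniature. The following toy sketch – entirely ours, with invented payoff numbers – illustrates how an OR-style “best strategy” falls out of solving simultaneous equations, here the indifference equations of a 2×2 zero-sum game:

```python
# Toy OR-style strategy selection (illustrative only; the payoffs are invented).
# For a 2x2 zero-sum game, the "best" mixed strategy is the one that makes the
# opponent indifferent between its two options -- found by solving a small
# system of simultaneous linear equations, the same species of calculation
# that OR simulations perform at massive scale.

def best_mixed_strategy(a11, a12, a21, a22):
    """Return (p, value): play row 1 with probability p, row 2 with 1 - p.

    Solves the indifference condition
        p*a11 + (1-p)*a21 = p*a12 + (1-p)*a22
    for p, then computes the game's expected value.
    """
    denom = (a11 - a21) - (a12 - a22)
    if denom == 0:
        raise ValueError("degenerate game: no unique mixed strategy")
    p = (a22 - a21) / denom
    value = p * a11 + (1 - p) * a21  # expected payoff once the opponent is indifferent
    return p, value

# Invented payoffs: rows are our strategies, columns the opponent's.
p, v = best_mixed_strategy(3.0, -1.0, -2.0, 2.0)
print(p, v)  # prints: 0.5 0.5
```

The sketch also makes the critique tangible: the “best” answer is only as meaningful as the payoff numbers assumed at the outset.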
This is supposed to be an "aid" to the human planner-strategist. In practice it becomes a
source of ex-post-facto justifications and excuses after some strategy has failed. Strategic
calculation by a human leader-planner-strategist, freed of dependence on OR-type
simulations, is far more likely to follow the pathways of a chess-game setting, in which many

decisions are taken on the basis of incomplete knowledge of surrounding conditions. An OR-
type simulation considers this condition to be one of "uncertainty", and various probabilistic
algorithms are incorporated to take it into account. However, the human planner-strategist
leading a real battle is acting not out of uncertainty, but rather out of a certainty that his
knowledge is incomplete and can only be increased by taking action. The OR approach is
premised on the notion that less complete knowledge with an uncertainty measure is the same
as more complete knowledge garnered from new acts of further-finding-out. Does less-
complete knowledge, however, actually become more complete when its uncertainty is
measured and assigned a number? In what way is this any different than the false promises of
numerology? The very idea is utterly aphenomenal. As we now know, the invasion and
occupation of Iraq was carried out after U.S. forces had tested a number of OR-backed war-
game scenarios. At least one of these demonstrated, repeatedly, that there was no way the
invader could not lose... but the high-ups were informed there was no way the invader could
lose. This pattern of disinformation only came to light after documents were ferreted out by a
Freedom of Information request. These showed the actual result of that particular scenario,
including a protest from one of the military officers involved when his side, playing the Iraqi
resistance, was ordered to "lose" the next run of the simulation, after in fact it had "won"!

CONCLUSION14
The matter of the approach to science of any technology – is it pro-nature or anti-nature?
– is decisive. It is also the indispensable part of the engineering development process that
requires absolutely no engineering expertise whatsoever. What matters here are truth,
consequences and intentions. Is the engineering researcher absolved of all responsibility for
whatever choices are made simply because, “in the long run,” as Lord Keynes wrote, “we are
all dead”? Accomplishing the best decision in the circumstances involves the greatest struggle
of all for any human – between what one’s expertise and training, all focused on the short
term, has taught and what one’s ongoing participation in the human family, not to mention the
participation of one’s children and one’s children’s children, demands.

ACKNOWLEDGEMENT
Funding for this study was possible through Emertec Research and Development
Corporation and EEC Research Organization.

APPENDICES
The following materials, reproduced with full credit below, provide factual examples and
formal opinions that serve to illustrate themes elaborated in the preceding article.

14 A follow-up article will address the questions: what is the relationship between the pathway of a technical choice and a chosen technology? And: if all technologies can solve some problem(s) in the short term, is there any long-term solution that is ultimately technological?

The first four Appendices deal with disinformation. A key point to keep in mind is that
disinformation is rife not only in political discourse and media reportage, but in all fields of
social and natural sciences as well. What is most dangerous is precisely the spread of
disinformation within the sciences, where knowledge is supposed to be rigorously tested and as much as possible of any detected falsehood removed before it circulates as part of the
general knowledge of Humanity. This is precisely what has rendered so crucial and urgent the
conscious and deliberate step of delineating a pro-Nature pathway before intervening in
anything.
Any and every pathway that is not consciously so delineated will be anti-Nature in its
effects. The fifth Appendix is a document from the Six Nations Confederacy (also known as
the Haudenosaunee), one of the last surviving groups of indigenous people on the North
American continent – what they call “Turtle Island” – with its original systems of law and
thought-material about Humanity’s relations with Nature still intact. The Haudenosaunee – also known as the Iroquois – incorporate the Mohawk and five other nations whose territories cross the Canadian and U.S. border. This is about what is “natural”, and what is not. In the context of the present article,
its content is utterly riveting and it is therefore presented here “as is”, on its own terms –
without comment or other elaboration.

A. ANTI-TERRORIST AND RELATED


IDEOPOLITICAL DISINFORMATION
Item 1

The following article discusses Abu Musab al-Zarqawi as someone who lived and then
died as a major force in international terrorism circles. However, there is no mention that
there is an extensive case documented from various sources challenging precisely the
assertion that Zarqawi was even alive in the last couple of years, and establishing also that his
roles and links among the other known terrorist circles were marginal. “False-flag” and
“black” operations using compromised individuals have become de rigueur among the
intelligence agencies of big powers, enabling them to intervene undetected in the affairs of
smaller, weaker states. The purpose of such interventions is to destroy individuals or groups
opposed to commercial or financial predations by these powers in those countries. Zarqawi
fits that kind of profile far more credibly than the profile of terrorist kingpin. In Zarqawi’s
case, the only issue seems to be exactly how many intelligence agencies had him on a payroll.
There are in today’s world doubtless quite a few armed opponents of the interests of the
United States. However, what the article refers to as the “global jihad”, as though it were a
self-evident fact that there is an international conspiracy operating with this aim, is asserted
here without foundation. Agencies more credible than The New York Times have also failed to
establish this premise, including the American Enterprise Institute and the Brookings Institution. None of the
sources cited in the article as corroboration are themselves independent of U.S. government
influence or control. It is therefore entirely credible, and more than likely, that the tactic
attributed to “jihadists” and allegedly “pioneered” by Zarqawi was in fact developed and has long been used by the intelligence services of the leading big powers as an
Information Age “mail-drop” for mobilising their agents without having to risk personal

contact. Indeed, there seems to be little or nothing to distinguish such mail-drops, well
documented in Cold War spy fictions by John Le Carré (among others), from the notions
discussed in the article of posting things as briefly as possible on the Internet, just long
enough for fellow followers around the world to download, and then removing the material
and all its traces from the Web. This procedure echoes the opening of the 1960s television
series Mission Impossible, in which the audience hears a taped message left by “the Secretary” for Agent “Jim” warning that “this tape will self-destruct in five seconds.” (This spy-
as-American-hero epic was reprised as a series of movie vehicles beginning in 1996, and now
up to Mission Impossible 3 in 2006, all starring Tom Cruise.)
The aim of such a report is to place a large obstacle of totally paralysing doubt in the path
of anyone seeking what the truth may actually be. Disguised as information, such a report is,
functionally speaking, the purest disinformation. Its evident effect, consistent with its true
aim, is to render rational articulation of a problem or its aspects impossible. In this case, the
aim is to block people from getting to the bottom of modern-day terrorism by blurring any
distinctions between state-organised terrorism and anti-imperialist political activities. Far
from assisting people to sort out any aspect of the problem of modern-day terrorism, this
“reporting” saddles them with an incoherence made even more burdensome by the
impenetrability of the material and omission of any guideline for distinguishing what is or
might be true from what is or might be false.

Zarqawi Built Global Jihadist Network on Internet


By SCOTT SHANE
Published: Tuesday 9 June 2006
The New York Times

WASHINGTON, Monday 8 June — Over the last two years, Abu Musab al-Zarqawi
established the Web as a powerful tool of the global jihad, mobilizing computer-savvy allies
who inspired extremists in Iraq and beyond with lurid video clips of the bombings and
beheadings his group carried out.

Mr. Zarqawi was killed in an air strike on an isolated house about 30 miles north of
Baghdad. Iraqi officials announced the death the next morning, and Al Qaeda confirmed that
he had died.

On Thursday (4 June – Ed.) the electronic network that he helped to build was abuzz with
commentary about his death, with supporters posting eulogies, praising what they called his
martyrdom and vowing to continue his fight.

Item 2

Under the guise of providing a quantitative and therefore “scientific” approach to


analyzing — and thus being in a position to combat — terrorism and its effects, the following
article describes situations in which intentions of a perpetrator, which are obviously crucial to
establishing whether a phenomenon is terroristic, are denied any consideration. Instead,
allegedly “objective facts” of terrorist incidents are asserted, and underlying tendencies of the

quantitative features of this dataset are plumbed for patterns and frequencies. Firstly,
however, information about the frequency of a certain event is irrelevant if there is no way of
establishing the size of the overall event-space. In this case, no total number, let alone any
notion of such a total number, of the political and-or military actions in a given event-space is
provided. Secondly, the definition of what constitutes an event in this context is highly
dependent on third-party reporting rather than the compiler’s first-hand observations. In this
case, there is no objective or consistent definition of which actions are definably “terrorist”.
The key notion discussed in this article is the recognition and isolation of so-called triggering
events from other events. However, no objective foundation whatsoever is provided as to how
such events can be distinguished. Hence, anyone reading or studying this material for possible
insight is left with no guideline of the conditions in which one and the same event would
indeed be a trigger or in fact not be a trigger.
Far from being about science, this is about cooking up a justification for wholesale U.S.
government interference with the fundraising and funds distribution activities of numerous
Islamic and other charities whose money is going into countries of interest to U.S.
intelligence.

Quantitative Analysis Offers Tools to Predict Likely Terrorist Moves


By Sharon Begley, Science Journal
The Wall Street Journal
February 17, 2006; Page B1

There are regular and even predictable sunspot cycles and lunar cycles and economic
cycles, but terrorism cycles?
Between 1968 and 2004 (the last year with complete data), the world’s news media
reported 12,793 incidents of “transnational” terrorism, violence inspired by political, religious
or social motives and involving targets or perpetrators from different countries. The attacks
left a trail of horror and tragedy. But they also generated a wealth of quantitative data, putting
the study of terrorism on a firm enough footing that researchers believe they can generate
testable hypotheses -- the hallmark of science -- and predictions.
Time was, analyzing terrorism meant focusing on its political roots and sociological or
psychological motivations. But more and more scholars believe that quantitative analysis can
add something crucial.
“This kind of analysis has the potential to generate testable predictions, such as how
terrorists will respond to countermeasures instituted after 9/11,” says Bruce Russett, professor
of international relations at Yale University.
One promising technique, called spectral analysis, is typically applied to cyclical events
such as sunspots. A new application of it is now shedding light on terrorism, which, data
show, waxes and wanes in regular, wavelike cycles.
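By way of a purely illustrative sketch -- every number below is invented, and a synthetic series with a built-in six-year cycle stands in for the real incident counts -- spectral analysis of this kind amounts to transforming a count series into the frequency domain and reading off the dominant period:

```python
import numpy as np

# Synthetic yearly incident counts, 1968-2004, with an invented
# six-year cycle -- a stand-in for real data, illustration only.
years = np.arange(1968, 2005)
counts = 350 + 80 * np.sin(2 * np.pi * (years - 1968) / 6.0)

# Spectral analysis: remove the mean, take the real FFT, and find
# the frequency bin with the most power (skipping the zero bin).
detrended = counts - counts.mean()
spectrum = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(len(years), d=1.0)   # cycles per year
dominant = freqs[np.argmax(spectrum[1:]) + 1]

print(f"dominant period: about {1 / dominant:.1f} years")
```

On real data the peak would be broader and noisier, but the procedure -- the same one applied to sunspot numbers -- is exactly this.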

Item 3

The hair-raising spectre opened up by this approach is that of, once again, externally
manipulating an individual human’s sensorium. Humans’ sense-perception apparatus, like
that of all other vertebrates, is intimately connected to the proper functioning of the particular
40 G. M. Zatzman and M. R. Islam

individual’s brain. This is just as degenerate as the schemes involving experiments on human
subjects with LSD and other psychoactive drugs conducted by medical institutes financed by
the U.S. Central Intelligence Agency (CIA). Until its exposure in a celebrated lawsuit brought
successfully against the CIA by the experiments’ victims and their families, one such program
was run by Dr Ewen Cameron at McGill University Medical School in Montreal, Canada during the
1950s and 1960s. The justification for the CIA’s “brainwashing” experiments was the U.S.
government’s Cold War mission of reversing the dangerous effects of “communist ideology”.
The data gathered by the CIA’s experiments, meanwhile, has been used in dozens of countries
around the world to refine techniques of torture applied to opponents of U.S.-friendly
regimes, such that no physical scarring is left. The aim of this tongue research is allegedly to
facilitate clearing of “terrorist” undersea mines by naval divers. However, this research is
obviously just as useful – indeed probably more useful – for planting mines as it is for
detecting them.

Scientists Probe the Use of the Tongue


By MELISSA NELSON, The Associated Press
Mon 24 Apr 2006 19:32 ET

In their quest to create the super warrior of the future, some military researchers aren't
focusing on organs like muscles or hearts. They're looking at tongues.
By routing signals from helmet-mounted cameras, sonar and other equipment through the
tongue to the brain, they hope to give elite soldiers superhuman senses similar to those of
owls, snakes and fish.
Researchers at the Florida Institute for Human and Machine Cognition envision their
work giving Army Rangers 360-degree unobstructed vision at night and allowing Navy
SEALs to sense sonar in their heads while maintaining normal vision underwater - turning
sci-fi into reality.
The device, known as "Brain Port," was pioneered more than 30 years ago by Dr. Paul
Bach-y-Rita, a University of Wisconsin neuroscientist. Bach-y-Rita began routing images
from a camera through electrodes taped to people's backs and later discovered the tongue was
a superior transmitter.
A narrow strip of red plastic connects the Brain Port to the tongue where 144
microelectrodes transmit information through nerve fibers to the brain. Instead of holding and
looking at compasses and bulky hand-held sonar devices, the divers can process the
information through their tongues, said Dr. Anil Raj, the project's lead scientist.
In testing, blind people found doorways, noticed people walking in front of them and
caught balls. A version of the device, expected to be commercially marketed soon, has
restored balance to those whose vestibular systems in the inner ear were destroyed by
antibiotics.
Michael Zinszer, a veteran Navy diver and director of Florida State University's
Underwater Crime Scene Investigation School, took part in testing using the tongue to
transmit an electronic compass and an electronic depth sensor while in a swimming pool. He
likened the feeling on his tongue to Pop Rocks candies. "You are feeling the outline of this
image," he said. "I was in the pool, they were directing me to a very small object and I was
able to locate everything very easily."
Truth, Consequences and Intentions 41

Underwater crime scene investigators might use the device to identify search patterns,
signal each other and "see through our tongues, as odd as that sounds," Zinszer said.
Raj said the objective for the military is to keep Navy divers' hands and eyes free. "It will
free up their eyes to do what those guys really want to, which is to look for those mines and
see shapes that are coming out of the murk."
Sonar is the next step. A lot depends on technological developments to make sonar
smaller: hand-held sonar is now about the size of a lunch box.

"If they could get it small enough, it could be mounted on a helmet, then they could pan
around on their heads and they could feel the sonar on their tongues with good registration to
what they are seeing visually," Raj said.

The research at the Florida institute, the first to research military uses of sensory
augmentation, is funded by the Defense Department. The exact amount of the expenditure is
unavailable.
Raj and his research assistants spend hours at the University of West Florida's athletic
complex testing the equipment at an indoor pool. Raj does the diving himself.
They plan to officially demonstrate the system to Navy and Marine Corps divers in May.
If the military screeners like what they see, it could be put on a "rapid response" track to
get into the hands of military users within the next three to six months.
Work on the infrared-tongue vision for Army Rangers isn't as far along. But Raj said the
potential usefulness of the night vision technology is tremendous. It would allow soldiers to
work in the dark without cumbersome night-vision goggles and to "see out the back of their
heads," he said.

Item 4

The following extract brings to the surface the essential features of a standard line of
argument which has created tremendous disinformation about the notion of “democracy” and
the professed aims of the United States in what it considers “furthering” democracy.
What the author calls “culture” is, in fact, racism. However, here there is no longer even a
pretence of declaring certain peoples “unfitted for self-rule” (the British excuse for
dominating Asia in the 19th century), or “racially inferior” (the Nazis’ excuse for putting
Europe and the Soviet Union to the sword in the 20th). Instead, according to the norms of the
new American imperium – sharply exposed by the satiric label “Pox Americana” coined by
various writers – some cultures are receptive to “democracy” à la Washington and others
are… well, just plain benighted.

Democracy Isn’t ‘Western’


By AMARTYA SEN
A piece under this title was published in the 24 March 2006 editions of The Wall Street
Journal

The determinism of culture is increasingly used in contemporary global discussions to
generate pessimism about the feasibility of a democratic state, or of a flourishing economy, or
of a tolerant society, wherever these conditions do not already obtain.
Indeed, cultural stereotyping can have great effectiveness in fixing our way of thinking.
When there is an accidental correlation between cultural prejudice and social observation (no
matter how casual), a theory is born, and it may refuse to die even after the chance correlation
has vanished without trace. For example, labored jokes against the Irish, which have had such
currency in England, had the superficial appearance of fitting well with the depressing
predicament of the Irish economy when it was doing quite badly. But when the Irish economy
started growing astonishingly rapidly, for many years faster than any other European
economy, the cultural stereotyping and its allegedly profound economic and social relevance
were not junked as sheer rubbish.
Many have observed that in the ‘60s South Korea and Ghana had similar income per
head, whereas within 30 years the former grew to be 15 times richer than the latter. This
comparative history is immensely important to study and causally analyze, but the temptation
to put much of the blame on Ghanaian or African culture (as is done by as astute an observer
as Samuel Huntington) calls for some resistance.
The temptation of founding economic pessimism on cultural resistance is matched by the
evident enchantment, even more common today, of basing political pessimism, particularly
about democracy, on alleged cultural impossibilities.
When it is asked whether Western countries can “impose” democracy on the non-
Western world, even the language reflects a confusion centering on the idea of “imposition,”
since it implies a proprietary belief that democracy “belongs” to the West, taking it to be a
quintessentially “Western” idea which has originated and flourished exclusively in the West.
This is a thoroughly misleading way of understanding the history and the contemporary
prospects of democracy.
The belief in the allegedly “Western” nature of democracy is often linked to the early
practice of voting and elections in Greece, especially in Athens. Democracy involves more
than balloting, but even in the history of voting there would be a classificatory arbitrariness in
defining civilizations in largely racial terms. In this way of looking at civilizational
categories, no great difficulty is seen in considering the descendants of, say, Goths and
Visigoths as proper inheritors of the Greek tradition (“they are all Europeans,” we are told).
But there is reluctance in taking note of the Greek intellectual links with other civilizations to
the east or south of Greece, despite the greater interest that the Greeks themselves showed in
talking to Iranians, or Indians, or Egyptians (rather than in chatting up the Ostrogoths).
Similarly, the history of Muslims includes a variety of traditions, not all of which are just
religious or “Islamic” in any obvious sense. … When, at the turn of the 16th century, the
heretic Giordano Bruno was burned at the stake in Campo dei Fiori in Rome, the Great
Mughal emperor Akbar (who was born a Muslim and died a Muslim) had just finished, in
Agra, his large project of legally codifying minority rights, including religious freedom for
all, along with championing regular discussions between followers of Islam, Hinduism,
Jainism, Judaism, Zoroastrianism and other beliefs (including atheism).

B. DISINFORMATION AND/OR INCOHERENCE ON KEY ECONOMIC QUESTIONS

Item 1

This article illustrates how that part of the society’s economic surplus taking the form of
interest-bearing capital is harnessed to the most absurd forms of disinformation in order to
further encumber the populace in massive and growing consumer debt even as the economy
sinks. The United States has an enormous sector of its economy involved in consumer finance
– the financing of ongoing high levels of consumption, including consumer debt, regardless
of the general conditions in which the surrounding economy is kept – and the single richest
portion of that market is facing a crisis. There is a deepening slowdown in housing
construction and sales throughout the country, and housing is the largest investment most
Americans make in their lifetime. To keep the merry-go-round circling, new “financial
products” are being floated such as the 40- and even 50-year mortgage. Does this mean
getting people actually to buy a house to live in for the next 40 or 50 years? Is the Pope
Jewish?!?! No, of course not. Since new housing isn’t on the way (at least: not in the
quantities it formerly was), the idea is rather to stimulate the ever faster turnover of the
existing housing stock. But to accomplish this requires redesigning the hypothecation of
capital to be borrowed to put down on a property, such that the interest rate is locked in for as
long as possible:

“Mortgage experts caution that the 50-year mortgage is best-suited for those who plan to
stay in their home for about five years, while the loan’s interest rate remains fixed.

“‘If you’re going to be there more than five years, you’re gambling,’ says Marc Savitt of
the consumer protection committee for the National Association of Mortgage Brokers. ‘You
don’t know what interest rates are going to be. I wouldn’t do it.’ ”
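The caution quoted here rests on ordinary fixed-rate amortization arithmetic. The sketch below is illustrative only -- the 6.5% rate is our assumption, not a figure from any lender, and the principal is the California median price the article cites -- but it shows both the modest payment relief and the very slow equity build-up of the longer terms:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def balance_after(principal, annual_rate, years, months_paid):
    """Balance remaining after a given number of monthly payments."""
    r = annual_rate / 12
    m = monthly_payment(principal, annual_rate, years)
    return principal * (1 + r) ** months_paid - m * ((1 + r) ** months_paid - 1) / r

principal = 548_430   # California median price cited in the article
rate = 0.065          # assumed rate, for illustration only
for years in (30, 40, 50):
    pay = monthly_payment(principal, rate, years)
    equity = principal - balance_after(principal, rate, years, 60)
    print(f"{years}-yr: ${pay:,.0f}/month, equity after 5 years ${equity:,.0f}")
```

Stretching the term from 30 to 50 years trims the monthly payment only modestly while leaving the borrower with a fraction of the equity after five years -- which is the mortgage brokers' point.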

Need to keep house payments low? Try a 50-year mortgage


By Noelle Knox and Mindy Fetterman,
USA TODAY
Wednesday 10 May, 0654 ET

Those struggling to afford a home may be wondering how long their mortgage payments
can be stretched out.

A handful of lenders have begun offering 50-year adjustable-rate loans to buyers who
need to keep payments low in the face of record home prices and rising rates.

Most big banks already offer 40-year mortgages, which account for about 5% of all home
loans, according to LoanPerformance, a real estate data firm. …

Statewide, which introduced its 50-year loan in March, has received about 220
applications...

For cash-squeezed buyers, the longer-term loans are another option. In California, only
14% of people could afford a median-priced home in December, when the median was
$548,430, if they had to put down 20%.

The 50-year mortgage also signals that the cooling real estate market is heating up
competition among lenders.

A borrower with a 50-year mortgage builds equity very slowly. And because rates on the
loans are adjustable, borrowers’ monthly payments could rise.

Still, the 50-year isn’t considered as risky as an interest-only loan or a mortgage that lets
borrowers pay even less than the interest.

With those loans, a borrower might not build any equity and could end up owing more
than a home is worth - called negative amortization.

That’s why Anthony Sanchez applied for the 50-year loan to refinance his California
home. “I looked at a lot of different options,” says Sanchez, 30. “I didn’t want to be tempted
with negative amortization.”

Mortgage experts caution that the 50-year mortgage is best-suited for those who plan to
stay in their home for about five years, while the loan’s interest rate remains fixed.

“If you’re going to be there more than five years, you’re gambling,” says Marc Savitt of
the consumer protection committee for the National Association of Mortgage Brokers. “You
don’t know what interest rates are going to be. I wouldn’t do it.”
(All emphases added – Ed.)

Item 2

The following report discloses how, when the people are a factor in the equation of social
progress, possibilities present themselves that could not previously be imagined, or that were
dismissed as unprofitable, when only the interests of big capital and the State were
considered. A new coherence that was not previously possible now becomes possible. Above
all, as this report brings out, everything depends on the determination of the alliance backing
Hugo Chávez in power not to permit oil to re-enslave them, and to make oil riches their
servant and the struggle to tame these oil riches their basic social school.

In Venezuela, Oil Sows Emancipation


by Luciano Wexell Severo

The data just released by the Banco Central de Venezuela (BCV) confirm that the
Venezuelan economy grew at a cumulative 10.2 percent between the fourth quarter of 2004
and the fourth quarter of 2005. This is the ninth consecutive increase since the last quarter of
2003. Overall, in 2005, the gross domestic product (GDP) grew at 9.3 percent.

Just like in the previous eight quarters, the strong increase was fundamentally driven by
activities not related to oil: civil construction (28.3 percent), domestic trade (19.9 percent),
transportation (10.6 percent), and manufacturing (8.5 percent). The oil sector had an increase
of only 2.7 percent. According to a report by the Instituto Nacional de Estadística (INE), the
unemployment rate in December 2005 was 8.9 percent, two percentage points below the rate
in the same period of 2004. In absolute terms, this means 266,000 additional jobs. Last year,
the inflation rate reached 14.4 percent, but that was below the 19.2 percent rate in 2004. The
nominal interest rate went down to 14.8 percent.
These results bear out the expectations of the ministry of finance and belie the
persistently pessimistic forecasts made by economic pundits of the opposition, who insist on
the idea of a “statistical rebound.” The term was invented in June 2004 as an explanation for
the growth in the economy: the increase in the GDP was supposed to be an arithmetic illusion
due to the depth of the fall in the previous periods. But, by now, this would be a fantastic and
unheard-of case of a ball dropping on the floor only to bounce back up and up, without signs
of slowing down, in defiance of the law of gravity. In a commentary on the economic
expansion, the minister of development and planning, Jorge Giordani, said ironically: “The
productive rebound continues . . . social practice and some modest figures have falsified the
ominous desires of the political opposition. Their political judgments, disguised as economic
technicalities, have been exposed by the new reality.”
It has been shown that one of the legacies of the neoliberal period is the disdain for
history: a shortsighted strategy that goes only as far as the horizon of the financial system,
virtual, outside of time, detached from reality, fictitious. That legacy could explain why the
orthodox analysts view the Chávez government as responsible for the poor results in the
economy between 1999 and 2003, a period they are trying to label as the “lost five years.”
Contrary to their view, let us remember: to a large extent, Hugo Chávez won the presidential
elections of December 1998 because Venezuela was facing its most catastrophic economic,
political, social, institutional, and moral crisis, after 40 years of power sharing between the
traditional parties Acción Democrática (the social democrats) and COPEI (the Christian
democrats). The country and the people agonized as a result of the rampant corruption,
profligacy, and perversity of the Fourth Republic (1958-98).
Venezuela, which hardly benefited from the oil shocks of 1973 and 1979, had been sinking at
an ever faster speed since the early 1980s. According to Domingo Maza Zavala, currently a director
of the BCV, between 1976 and 1995 alone, the country was awash with nearly 270 billion
dollars in oil revenues, equivalent to twenty times the Marshall Plan. Yet, the national foreign
debt owed by Venezuela doubled between 1978 and 1982. This illustrates well the dynamics
of wastefulness and economic savagery of the so-called “Saudi Venezuela.”
In the early 1990s, with the “Great Turn” and the “Oil Opening” inaugurated by Carlos
Andrés Pérez -- continued by Rafael Caldera’s and Teodoro Petkoff’s “Agenda Venezuela” --
the country was handed, tied and gagged, to the International Monetary Fund. That was the
beginning of an accelerated process of national destruction: the role of the public sector in the
economy was reduced in size, physical capital was divested, the economy was de-
industrialized, strategic sectors were privatized, and historical labor conquests were taken
away. Among others, the following companies were privatized and even de-nationalized: the
Compañía Nacional de Teléfonos (Cantv), the Siderúrgica del Orinoco (Sidor), the
Venezolana Internacional de Aviación S.A. (Viasa). They extended a long list of financial
institutions, sugar mills, naval shipyards, and companies in the construction sector. In 1998,
there were specific plans to submit PDVSA to the international cartels.
Everything was done, supposedly, in the name of lowering the budget deficit,
encouraging the inflow of foreign capital, modernizing the national industry, attaining greater
efficiency, productivity, and competitiveness, reducing inflation, and lowering unemployment
-- semantic trickery to sugarcoat the Washington Consensus and make it palatable. Less than
ten years later, international entities like the UN Economic Commission for Latin America
(ECLAC), the World Bank, the IMF and even the Vatican recognized the spectacular failure
of these policies. However, long before their belated conclusions were made public, the
Venezuelan people had already risen up and pursued an alternative. Episodes of this uprising
were the Caracazo in 1989 and the two civic-military rebellions in 1992 -- the first one led by
the then unknown commander Chávez. These insurrections, unlike what happened in the
other countries of our region, constrained to some extent the implementation of the neoliberal
agenda.

Four Stages in the Economy during the Chávez Government

Under the government of Hugo Chávez, the Venezuelan economy has gone through four
different and clearly defined stages. As the continuous analysis that we have been conducting
in the last four years demonstrates, in each of these moments -- sometimes determined by the
actions of the government itself, sometimes by the reactions of the opposition to the changes
in progress -- the country has experienced sharp turns in the direction of its fiscal, monetary,
and foreign-exchange policies.
In 1999, Venezuela’s GDP fell 6 percent as a result of the extremely deteriorated
condition of the economy and of the campaign of emotional blackmail against the recently-
elected president by the mass media which are historically connected to foreign interests. Let
us remember that Maritza Izaguirre, minister of finance under Caldera, remained in office
during the first nine months of the new government. The contraction of the economy was a
natural reflection of this period of adaptation, combined with inertias that dated back to the
third quarter of 1998, as well as oil’s extremely low international price -- close to 9 dollars
per barrel at the time.
The recovery of crude prices -- a direct fruit of actions undertaken by the Chávez
government -- and the expansive fiscal and monetary policies put in place marked the
beginning of a new stage. During the years 2000 and 2001, the GDP increased by 3.7 and 3.4
percent, respectively. In these eight quarters, the non-oil GDP grew 4 percent on average,
whereas the oil GDP only rose 1.2 percent. There were verifiable drops in unemployment, the
consumer price index and the interest rate, which led to an increase in credit, consumption
and GDP per capita.
Between the inauguration of Hugo Chávez in February 1999 and midyear 2002, the
oligarchy -- connected to foreign interests in the oil sector -- adopted a cautious stance. The
outbreak of discontent occurred by the end of 2001, when the government submitted a
legislative package seeking to induce deep structural changes in the main sectors of the
economy: the state company Petróleos de Venezuela S.A. (PDVSA); and laws encompassing
liquid and gas hydrocarbons, land ownership, the financial system, income taxation, and the
cooperatives.

Then the third phase began: a battle that lasted a year and a half, approximately. Between
the end of 2001 and February 2003, everything happened in Venezuela: the bosses’ lockout in
December 2001; the coup d’etat promoted by the CIA in April 2002; conspiracies and the “oil
sabotage” between the last quarter of 2002 and February 2003.
The foreseeable result: the Venezuelan economy fell 8.9 percent and 7.7 percent in 2002
and 2003, respectively. This was a collapse akin to a war economy. The surprising result:
Hugo Chávez emerged stronger from the crisis. After the coup d’etat failed, the coup
participants in the armed forces were exposed and demoralized. The sadistic attack against
PDVSA demonstrated the anti-national and desperate nature of the privileged class,
threatened as it felt by the nationalist actions of the government. The conspiracy managed
from Washington caused the collapse of oil production from 3 million barrels per day to
25,000, paralyzing production and triggering the bankruptcy of hundreds of companies. In the
first and second quarters of 2003, the GDP fell 15 percent and 25 percent, respectively.
Altogether, for seven consecutive quarters, the economy, the income per capita, and the
international reserves fell -- all accompanied by a rise of the unemployment rate to 20.7
percent, of the annual inflation rate to 27.1 percent, and of the interest rate to 22 percent.
But the third quarter of 2003 ushered in the fourth and current phase of the
Venezuelan economy in the administration of Hugo Chávez: the recovery. To understand the
magnitude of this recovery, consider the size of the disasters in 2002 and 2003. Today, for
example, the gross formation of fixed capital -- the additional accumulation of capital assets --
reaches 24.2 percent of total GDP. In the middle of the 2003 conspiracies, it fell to 14.0
percent. Venezuela is just starting to walk again.

Why Is the Economy Growing?

This reinvigoration of the Venezuelan economy is a direct -- although not exclusive --
result of the increase in oil prices to an average of 57.4 dollars per barrel (Brent blend,
December 2005). The hydrocarbons are -- and will continue to be for years to come -- a pillar
of the economy. But, then, what else is news in Venezuela?
The novel thing is that the country is now definitely sowing oil in the productive
sectors of the economy, as Arturo Uslar Pietri urged seventy years ago. A portion of the
oil revenues is used as a funding source to structure and strengthen the domestic market
(“endogenous development”) and jumpstart a sovereign process of industrialization and
definitive economic independence. Oil provides an instrument for overcoming the rentier,
unproductive, import-dependent economy already established by the 1920s, when the operations
around the “devil’s excrement” began at Lake Maracaibo.
In particular, the sowing of oil is being made possible through seven mechanisms: (1) the
amendment to the hydrocarbons law and the increase in royalties received by the government
from the transnational oil companies; (2) the adoption of controls over the exchange rate in
early 2003, which doubled the international reserves (from 15 to 30 billion dollars) and made
possible the implementation of further measures; (3) the new law governing the central bank
and the creation of the Fondo de Desarrollo Nacional (FONDEN) with already a balance of
almost 9 billion dollars; (4) the new approaches by the tax collection authority, the SENIAT,
that increased tax revenues by 60 percent this year -- mostly from large domestic and
transnational companies, historically the deadbeats and evaders; (5) a broad plan of public
investments in a platform of basic industries, with their consequent multiplier and accelerator
effects on private investment in industries that transform basic inputs into products of higher
value added; (6) the contribution in 2005 of nearly 5 billion dollars to the Misiones Sociales,
as an emergency mechanism to honor the immense accumulated social debt, lower
unemployment, and fight inflation; (7) the work conducted by the Ministry of Agriculture and
Land (MAT) to rescue and reactivate the production of a million and a half hectares of
unproductive large estates, strengthening the 2006 Sowing Plan and incorporating thousands
of farmers and workers into the productive process.
These seven mechanisms account for the fact that, since 2004 and in spite of the strong
growth in oil prices, the non-oil GDP grew significantly faster than the oil GDP,
demonstrating the positive impact of oil exports on activities not directly related to crude
extraction. While in the second quarter of 1999 the share of non-oil GDP was 70.5 percent of
total GDP, today it stands at 76.0 percent. And, partly as a result, in this period, the share of
the oil GDP in total GDP shrank from 20.1 percent to 14.9 percent.
Even more significant is the acceleration in the manufacturing industry between early
2003 and the present. Manufacturing was the sector that grew the fastest in the period,
recently surpassing oil GDP -- for the first time since 1997, the starting year of this statistical
series at the BCV. This dynamism can be verified especially by the consistent increases in
electricity consumption, automotive vehicle sales, cement, durable products for civil
construction, iron, and aluminum. Within the manufacturing industry, the branches that grew
fastest are: automotive vehicles, trailers, and semi-trailers (13.5 percent); food, drinks, and
tobacco (10.6 percent), rubbers and plastic products (10.3 percent), and machinery and
equipment (7.0 percent). The share of manufacturing in total GDP, which shrank to 14.7
percent during the “oil sabotage,” is now reaching 16.7 percent with a momentum to grow
briskly. These results will be improved upon when the full impact of the
“Agreement/Framework for the Recovery of the Industry and Transformation of the
Production Model” as well as the “Decree for the Provision of Inputs and Raw Materials to
the National Manufacturing Sector” are felt. These policy instruments seek to limit the
exports of raw materials and to guarantee basic inputs -- like aluminum, iron, steel, and wood
-- to the Venezuelan producers. Since early 2003, the share of final consumption goods in
total imports has gone down from 37.6 percent to 24.2 percent, accompanied by an increase in
the acquisition of goods devoted to gross capital formation from 12.3 percent to 25.7 percent
of the total. That is to say, Venezuela has invested its foreign exchange in purchasing
machinery, parts, and equipment that make it possible for the process of sovereign
industrialization to proceed.
The dollars at the FONDEN have been reserved to finance strategic development plans in
such sectors as basic industries, oil, gas, physical infrastructure, transportation, and housing.
Within these outlines, new companies are being created and new projects are unfolding like
the new Venezuelan iron-and-steel plan for the production of special steels, a factory of
“seamless” oil tubes, three new oil refineries, ten sawmills, plants to produce cement, plants
to improve iron ore, factories to produce aluminum sheets, plants to produce pulp and paper,
and many others. In addition to that, the Inter-American Development Bank recently
approved a credit of 750 million dollars for the construction of the hydroelectric power plant
at Tocoma. Altogether, just by itself, the national power system will receive investments that
approach 3 billion dollars in 2006.

All these plans have been directed by the Venezuelan government, which will control at
least 51 percent of these initiatives, although many will involve strategic associations with
other countries or private -- national or foreign -- investors. The goal is to strengthen
international relations, especially with other nations in Latin America, with China, Spain,
India, Iran, and Italy, in the spirit of building an Alternativa Bolivariana para la América
(ALBA) and contributing to creating a multi-polar world. Instances of this are the recent
initiative to build the oil refinery Abreu e Lima, in the Brazilian state of Pernambuco, agreed
between PDVSA and Petrobrás; the agreements with Argentina to build oil tankers at the
Santiago River shipyards; and the bi-national tractor factory Venirán Tractor, with the
government of Iran, which has already turned out its first 400 units.
The government initiatives to join forces with national entrepreneurs are many, looking
to reactivate the industrial and agricultural apparatus. The goal is not solely the economic
recovery, but the creation of bases to abandon the rentier model, sustained by raw oil wealth,
and to establish a new productive, endogenous model, with internal dynamics and life, able to
guarantee economic growth and national development. In March of 2005, the ministry of
basic industries and mining (MIBAM) was created, commissioned to build the bases for a
sovereign process of industrialization.

New Threats and Attempts at Destabilization

A recent note published in the Venezuelan newspaper El Universal, under the headline
“The Economic Recession Looms,” is quite enlightening. The note says: “the Venezuelan
economy is showing the first signs of an imminent recession, due to the stagnation in the non-
oil sector.” This is the exact opposite of what the statistics show -- as I have tried to explain.
The anti-national sectors that brought us the bosses’ lockout, the coup d’etat, the oil
sabotage, and the continuous conspiracies against the country are back at it. This is a reaction
against the inroads made in the profound process of economic and social transformation the
country is undergoing, the progress in income redistribution and social inclusion, which had
very positive effects in the economic activities and the country’s political life during 2005.
The initiatives of the government to extend a hand to the nationalist entrepreneurs show the
will to seek unity in working to activate industry and agriculture, creating jobs and fostering
the endogenous development. In addition to this, Venezuela is expanding its international
relations with important countries and has made effective its membership in MERCOSUR.
All the predictions are that, in 2006, the Venezuelan economy will grow at nearly 6
percent, with parallel advances in an array of economic and social indicators. This is the best
moment the Chávez government has had since it was inaugurated and, with the president’s
proposition to advance towards the “Socialism of the XXI Century,” the stage is set for a
process of even more intense structural changes in the outlook. The next presidential elections
will take place in December, this year, and the prospects of another victory for the Bolivarian
forces have disturbed the White House and its domestic allies. It is possible that, in its
increasing isolation, the Bush administration will again resort to violence to disrupt
democratic and popular governments. On the other hand, as it has happened before, the
actions orchestrated by the U.S. government will meet the resistance of the Venezuelan
people, and each aggression will only increase their consciousness and reinforce their
participative and protagonistic democracy.
50 G. M. Zatzman and M. R. Islam

Luciano Wexell Severo is a Brazilian economist. This article was originally published in
Spanish by Rebelión on 12 March 2006. The English translation was provided by Julio
Huato.

C. DISINFORMATION ABOUT SCIENTIFIC METHOD AND PURPOSE


Item 1

On no question is disinformation more flagrant than that of Charles Darwin’s theory of
evolution by natural selection. Darwin wrote about The Descent of Man (1871) and about The
Origin of Species (1859) – but nowhere did he ever venture into the territory of speculating
about the origins of Man as such.
Nevertheless, as the following article illustrates, one can prove just about anything if one
is sufficiently slippery and careless in deploying self-serving analogies between organic
nature and man-made artifice to bolster asserted beliefs – either as scientific fact, or as an
asserted refutation of falsehoods purportedly decked out as scientific truths. Under the guise
of “opening their minds to new possibilities”, some people will scruple at nothing to
dogmatise the thinking and creative abilities of a new generation of students by serving up a
disinformed “straw-man” misrepresentation of Darwin’s theory.
The very first example, in the paragraph immediately below, elides the necessary and
sufficient conditions of a human-engineered mechanical design into something supposedly
comparable to the “irreducible complexity” of a natural organism. The fact that NOTHING in
the mousetrap evolved into existence, that no previous species of the device developed new
characteristics that became the modern mousetrap, means that all its complexity is entirely
knowable and determinable in the present. Its complexity lacks precisely the irreducibility
that is to be compared with natural organisms.
On the other hand, with organic life forms, sufficient evidence from the fossil record and
the living record of plant and animal life on earth exists to buttress the assertion that many
life-forms came into existence most likely following some species differentiation developed
and evolving out of some earlier form. There is an aspect of the complexity that is truly
irreducible – irreducible in the sense that we are unable to extract any further information
pertaining to our own existence as a distinct species. All we know is that, at some definite
point, the previous species is succeeded by another distinct species. The successor may share
a huge SIMILARITY with the forebear, yet is sufficiently differentiated that successor and
forebear are mutually unrecognisable to the other. The fact is, for example, that the DNA of
the chimpanzee is more than 99 percent identical to that of the human genome. Yet, neither
would ever mistake the other as a member of its species. Indeed, even as conscious as we
humans are, our own forebears seem familiar and recognisable to us going back a few aeons,
even as we adduce evidence of humanoids walking upright on the face of the earth more than
2.5 million years ago.
Hence the mousetrap illustration of “intelligent design” turns out to be a total non
sequitur. After the evolutionary step is completed in a natural organism, it can then be said
that this or that aspect of it now works differently than before. It is meaningless to speak of
how it could not work if this or that piece were missing. If certain pieces were missing, it
would be a different species, perhaps even a forebear if not a relative. It should not come as a
shock, but it does, that the individual pulling this stunt is not the bait-and-switch artist in a
Barnum and Bailey Circus sideshow, but an associate professor of molecular biology at Iowa
State University.
The methods of the particular brand of proponents of intelligent design described in this
article merit some elaboration. Behind their apparent objections to Darwin’s interpretations
and conclusions lurks a bedrock rejection of scientific method and its fundamental principle
of remaining true to the fidelity of one’s own observations and of premising all real, grounded
understanding of phenomena on one’s conscious participation in acts of finding out.
The classic argument is that Darwin’s theory ascribes to chance the elaboration of higher
order development and functioning in natural organisms and that such an accomplishment is
too harmonious to have evolved by chance - hence the need to invoke “intelligent design” as
the explanation. Darwin’s theory, meanwhile, says no such thing. Natural selection is above
all natural: all the possible outcomes are already present in Nature. What conditions which
pathway the organism eventually selects derives from the state of the surrounding
environment at that particular time. This itself is also conditioned by previously existing
factors, although these other factors may have nothing whatever to do with the organism in
question. According to this article, many contemporary proponents of intelligent design
render cosmetic obeisance to the undeniable facts of the geological record, but it is more a
case of “reculer pour mieux sauter” than of submitting to the actual meaning and deeper
implications of such facts:

“Intelligent design does not demand a literal reading of the Bible. Unlike traditional
creationists, most adherents agree with the prevailing scientific view that the earth is billions
of years old. And they allow that the designer is not necessarily the Christian God.”

Meanwhile, however:

“According to an informal survey by James Colbert, an associate professor who teaches
introductory biology at Iowa State, one-third of [Iowa State University] freshmen planning to
major in biology agree with the statement that ‘God created human beings pretty much in their
present form at one time within the last 10,000 years.’ Although it’s widely assumed that
college-bound students learn about evolution in high school, Mr. Colbert says that isn’t
always the case.
“ ‘I’ve had frequent conversations with freshmen who told me that their high-school
biology teachers skipped the evolution chapter,’ he says. ‘I would say that high-school
teachers in many cases feel intimidated about teaching evolution. They’re concerned they’re
going to be criticized by parents, students and school boards.’ ”

It may thus be inferred from these statements that:

1. by exploiting the ignorance of the student body about the age of the earth and the
various species groups including humans, it becomes easy to plant and advance
arguments that assume, falsely, that humans have only been around a relatively short
time, that they did not evolve from the apes and that they underwent no significant
evolution since their appearance; and
2. if it can be established that humans could not have evolved, then the possibility that
any other plant or animal life evolved is placed in doubt.

All this would be hilarious were it not so tragic. It is clear that there are forces and
individuals in the United States who are quite prepared to follow science and even consider
themselves scientific-minded, albeit on a 50-per-cent discounted basis. They like the part of
the Law of Conservation of Matter and Energy that points out how matter and energy can be
neither created nor destroyed. However, they object to evolution, i.e., they want to write off
the other part of this fundamental Law, which points out that matter and energy are converted
continually one into the other. Doubtless many of these same individuals would be the first to
insist that the truth must be 100% unadulteratedly true. When it comes to outlook, however,
they are prepared to admit utter nonsense on the same basis as the truth.

Darwinian Struggle
At Some Colleges, Classes Questioning Evolution Take Hold;
‘Intelligent Design’ Doctrine Leaves Room for Creator;
In Iowa, Science on Defense

A Professor Turns Heckler


By DANIEL GOLDEN
Staff Reporter of THE WALL STREET JOURNAL
November 14, 2005; Page A1

AMES, Iowa -- With a magician’s flourish, Thomas Ingebritsen pulled six mousetraps
from a shopping bag and handed them out to students in his “God and Science” seminar. At
his instruction, they removed one component -- either the spring, hammer or holding bar --
from each mousetrap. They then tested the traps, which all failed to snap.

“Is the mousetrap irreducibly complex?” the Iowa State University molecular biologist
asked the class.

“Yes, definitely,” said Jason Mueller, a junior biochemistry major wearing a cross around
his neck.

That’s the answer Mr. Ingebritsen was looking for. He was using the mousetrap to
support the antievolution doctrine known as intelligent design. Like a mousetrap, the
associate professor suggested, living cells are “irreducibly complex” -- they can’t fulfill their
functions without all of their parts. Hence, they could not have evolved bit by bit through
natural selection but must have been devised by a creator.
Intelligent-design courses have cropped up at the state universities of Minnesota, Georgia
and New Mexico, as well as Iowa State, and at private institutions such as Wake Forest and
Carnegie Mellon.
Intelligent design does not demand a literal reading of the Bible. Unlike traditional
creationists, most adherents agree with the prevailing scientific view that the earth is billions
of years old. And they allow that the designer is not necessarily the Christian God.

Still, professors with evangelical beliefs, including some eminent scientists, have initiated
most of the courses and lectures, often with start-up funding from the John Templeton
Foundation. Established by famous stockpicker Sir John Templeton, the foundation promotes
exploring the boundary of theology and science. It fostered the movement’s growth with
grants of $10,000 and up for guest speakers, library materials, research and conferences.
According to an informal survey by James Colbert, an associate professor who teaches
introductory biology at Iowa State, one-third of ISU freshmen planning to major in biology
agree with the statement that “God created human beings pretty much in their present form at
one time within the last 10,000 years.” Although it’s widely assumed that college-bound
students learn about evolution in high school, Mr. Colbert says that isn’t always the case.

“I’ve had frequent conversations with freshmen who told me that their high-school
biology teachers skipped the evolution chapter,” he says. “I would say that high-school
teachers in many cases feel intimidated about teaching evolution. They’re concerned they’re
going to be criticized by parents, students and school boards.”

Templeton-funded proponents of intelligent design include Christopher Macosko, a
professor of chemical engineering at University of Minnesota. Mr. Macosko, a member of the
National Academy of Engineering, became a born-again Christian as an assistant professor
after a falling-out with a business partner. For eight years, he’s taught a freshman seminar:
“Life: By Chance or By Design?” According to Mr. Macosko, “All the students who finish
my course say, ‘Gee, I didn’t realize how shaky evolution is.’”
Another recipient of Templeton funding, Harold Delaney, a professor of psychology at
the University of New Mexico, taught an honors seminar in 2003 and 2004 on “Origins:
Science, Faith and Philosophy.” Co-taught by Michael Kent, a scientist at Sandia National
Laboratories, the course included readings on both sides as well as a guest lecture by David
Keller, another intelligent-design advocate on the New Mexico faculty.

Item 2

These paradoxes about the constancy of physical constants are all removable once the
timescale is appropriately adjusted. This hints at a very important byproduct of seeking the
all-natural pathway, viz., one must also be prepared to adjust one’s temporal conceptions. The
most interesting remarks in this report are those in the last five paragraphs, in which, when
things don’t “add up”, the temptation re-emerges to add dimensions that are not physically
accessible in order to account for natural transformations over vast periods of time. These
attempts suggest that the researchers themselves in this realm operate entirely unconscious of
the implications of looking at time itself within nature as the fourth physically irreversible
dimension rather than some independently-varying measure.

A Universal Constant on the Move

Is the proton losing weight, or has the fabric of the Universe changed?
By Mark Peplow
Nature.com

20 April 2006

How heavy is a proton compared to an electron? The answer seems to have changed over
the past 12 billion years.
It seems that nothing stays the same: not even the ‘constants’ of physics. An experiment
suggests that the mass ratio of two fundamental subatomic particles has decreased over the
past 12 billion years, for no apparent reason.
The startling finding comes from a team of scientists who have calculated exactly how
much heavier a proton is than an electron. For most purposes, it is about 1,836 times heavier.
But dig down a few decimal places and the team claims that this value has changed over time.
The researchers admit that they are only about 99.7% sure of their result, which
physicists reckon is a little better than ‘evidence for’ but not nearly an ‘observation of’ the
effect. If confirmed, however, the discovery could rewrite our understanding of the forces that
make our Universe tick.

Fickle Forces
This is not the first time physicists have suspected physical constants of inconstancy.
In 1937, the physicist Paul Dirac famously suggested that the strength of gravity could
change over time. And arguments about the fine-structure constant, α, have raged for years (see
‘The inconstant constant?’). The fine-structure constant measures the strength of the
electromagnetic force that keeps electrons in place inside atoms and molecules.
Some physicists have argued that the equations describing our Universe allow for
variance in the relative masses of a proton and electron. In fact, they have said, this value
could theoretically vary more than α does, and so might be easier to pin down.

Times they are a-changing


To look for such variation, Wim Ubachs, a physicist from the Free University in
Amsterdam, the Netherlands, and his colleagues studied how a cool gas of hydrogen
molecules in their lab absorbed ultraviolet laser light. The exact frequencies of light that are
absorbed by each hydrogen molecule (H2), which is made of two protons and two electrons,
depend on the relative masses of these constituent particles.
Then they compared this result with observations of two clouds of hydrogen molecules
about 12 billion light years away, which are illuminated from behind by distant quasars.
Although the light changes frequency on its long journey through space, researchers at the
European Southern Observatory in Chile were able to unpick what the original frequencies
absorbed by the hydrogen were.
Ubachs’ comparison suggests that over this vast timescale, which is most of the lifetime
of the Universe, the proton-to-electron mass ratio has decreased by 0.002%. The scientists
report their research in Physical Review Letters [1].
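To put the reported figure in perspective, the following back-of-the-envelope sketch (not from the article; it assumes the approximate modern value of the proton-to-electron mass ratio, about 1836.153) shows what a 0.002% decrease over 12 billion years amounts to:

```python
# Illustrative arithmetic only: the size of a 0.002% shift in the
# proton-to-electron mass ratio, and the average yearly rate it would
# imply over 12 billion years.

RATIO_TODAY = 1836.15267       # approximate modern value of m_p / m_e
FRACTIONAL_CHANGE = 0.002 / 100  # the 0.002% decrease reported
YEARS = 12e9                   # look-back time to the hydrogen clouds, in years

# Absolute change in the dimensionless ratio itself.
absolute_change = RATIO_TODAY * FRACTIONAL_CHANGE

# Average fractional change per year, assuming a steady drift.
rate_per_year = FRACTIONAL_CHANGE / YEARS

print(f"absolute change in the ratio: {absolute_change:.4f}")
print(f"average fractional change per year: {rate_per_year:.2e}")
```

The drift, if real, is minute: a change of roughly 0.04 in a ratio of about 1,836, or under two parts in 10^15 per year on average, which is why such precise laser spectroscopy is needed to detect it.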

Constant Craving
Ubachs says that his team’s laser measurements are hundreds of times more accurate than
previous laboratory data. This improves their detection of the mass ratio effect by a factor of
two to three.

“They’ve done the best job of anyone so far,” agrees John Webb, a physicist at the
University of New South Wales in Sydney, Australia, who has studied changes in both the
proton-electron mass ratio and the fine-structure constant [2, 3].
So what could be causing the ratio to change? Both Ubachs and Webb say that it is
unlikely that protons are losing weight. Instead, some theories suggest that extra dimensions
occupied by the particle might be changing shape.
Or perhaps it’s a consequence of the speed of light slowing down, or general relativity
behaving in odd ways. “We just don’t know what the explanation is,” Webb admits.

Firm it up
If Ubachs’ finding is confirmed, it would be an “experimental foundation stone” for
physics, says Webb.
Ubachs says that the observations could be improved or confirmed by looking at
hydrogen in the lab over a time period of, say, five years, but with a billion times
greater precision. This would remove the difficulty of working out the precise wavelength of
very dim light after it has passed through billions of light years of space.
But it could also remove the effect altogether. “It may be that it was only in an early
epoch of the Universe that the value changed,” suggests Ubachs.

REFERENCES
[1] Reinhold, E. et al. Phys. Rev. Lett. 96, 151101 (2006).
[2] Tzanavaris, P., Webb, J. K., Murphy, M. T., Flambaum, V. V. and Curran, S. J. Phys. Rev.
Lett. 95, 041301 (2005).
[3] Webb, J. K. et al. Phys. Rev. Lett. 87, 091301 (2001).

D. BIOMEDICAL DISINFORMATION
Item 1

Two obvious questions not asked in the following otherwise informative summary are: 1.
what triggers the body to produce or stop producing its own vitamin supplies in the first
place; and 2. why should we assume that the body will treat a synthesised substitute for
whatever vitamin-complex the body is not producing itself the same as it would treat that
same vitamin-complex produced by the body itself? The logic of “chemicals are chemicals”
dismisses out of hand the very idea of considering such questions seriously. This remains the
case, even though it is well understood, for example, that the only Vitamin D the human body
can use efficiently is that which it can obtain from sunlight-triggered synthesis in the skin.
Parodying a line made famous by the “winning-est” coach in U.S. professional football, when
it comes to natural processes, pathway isn’t everything: it’s the only thing. The facts that no
two things anywhere in nature are identical and that their pathway in nature is a critical
primary component, not a minor secondary matter, are both very bad, unwanted news for
proponents and exponents of the very business model into which chemical engineering has
been slotted, based on mass production and sale of a uniform and homogenised product.

Prevention
The Case Against Vitamins
Recent studies show that many vitamins not only don’t help. They may actually cause
harm.
By TARA PARKER-POPE
The Wall Street Journal, 20 March 2006, Page R1

Every day, millions of Americans gobble down fistfuls of vitamins in a bid to ward off ill
health. They swallow megadoses of vitamin C in hopes of boosting their immune systems, B
vitamins to protect their hearts, and vitamin E, beta carotene and other antioxidants to fight
cancer.
It’s estimated that 70% of American households buy vitamins. Annual spending on
vitamins reached $7 billion last year, according to industry figures.
But a troubling body of research is beginning to suggest that vitamin supplements may be
doing more harm than good. Over the past several years, studies that were expected to prove
dramatic benefits from vitamin use have instead shown the opposite.
Beta carotene was seen as a cancer fighter, but it appeared to promote lung cancer in a
study of former smokers. Too much vitamin A, sometimes taken to boost the immune system,
can increase a woman’s risk for hip fracture. A study of whether vitamin E improved heart
health showed higher rates of congestive heart failure among vitamin users.
And there are growing concerns that antioxidants, long viewed as cancer fighters, may
actually promote some cancer and interfere with treatments.
Last summer, the prestigious Medical Letter, a nonprofit group that studies the evidence
and develops consensus statements to advise doctors about important medical issues, issued a
critical report on a number of different vitamins, stressing the apparent risks that have
emerged from recent studies. The Food and Nutrition Board of the National Academy of
Sciences -- the top U.S. authority for nutritional recommendations -- has concluded that
taking antioxidant supplements serves no purpose.

“People hear that if they take vitamins they’ll feel better,” says Edgar R. Miller, clinical
investigator for the National Institute on Aging and author of an analysis that showed a higher
risk of death among vitamin E users in several studies. “But when you put [vitamins] to the
test in clinical trials, the results are hugely disappointing and in some cases show harm.
People think they are going to live longer, but the evidence doesn’t support that. Sometimes
it’s actually the opposite.”

Item 2

The following article, while highly informative about the true state of scientific
understanding at the U.S. Food and Drug Administration, persists nevertheless in spreading
the disinformation that associates sunlight with ultraviolet light. The point about sunlight is
that it includes UV as well as IR and all frequencies in between. The shortcoming of all
artificial substitutes for sunlight is that they do not contain the entire visible spectrum. The
damage that correlates with reported excessive exposure of humans to some forms of artificial
light is probably not unconnected to the wide gaps in the frequency spectrum of the artificial
forms. Selling increased exposure and use of various sources of artificial light began with
planting the story that excessive exposure to sunlight – as opposed to reasonable, interrupted
and certainly not 16-hours-at-a-stretch exposure – “causes” cancers of the skin.

Mushrooms Are Unlikely Source of Vitamin D


By ANDREW BRIDGES, The Associated Press
Tuesday 18 Apr 2006, 02:17 ET

WASHINGTON - Mushrooms may soon emerge from the dark as an unlikely but
significant source of vitamin D, the sunshine vitamin that helps keep bones strong and fights
disease.

New research, while preliminary, suggests that brief exposure to ultraviolet light can zap
even the blandest and whitest farmed mushrooms with a giant serving of the vitamin. The
Food and Drug Administration proposed the study, which is being funded by industry.
Exposing growing or just-picked mushrooms to UV light would be cheap and easy to do
if it could mean turning the agricultural product into a unique plant source of vitamin D,
scientists and growers said. That would be a boon especially for people who don’t eat fish or
milk, which is today the major fortified source of the important vitamin.
The ongoing work so far has found that a single serving of white button mushrooms —
the most commonly sold mushroom — will contain 869 percent of the daily value of vitamin D
once exposed to just five minutes of UV light after being harvested. If confirmed, that would
be more than what’s in two tablespoons of cod liver oil, one of the richest — and most
detested — natural sources of the vitamin, according to the National Institutes of Health.
Sunshine is a significant source of vitamin D, since natural UV rays trigger vitamin D
synthesis in the skin. Mushrooms also synthesize vitamin D, albeit in a different form,
through UV exposure. Growers typically raise the mushrooms indoors in the dark, switching
on fluorescent lights only at harvest time. That means they now contain negligible amounts of
vitamin D.
Research, including new findings also being presented at the conference, consistently has
shown that many adult Americans do not spend enough time outside to receive enough UV
exposure needed to produce ample vitamin D. The problem is especially acute in winter.
That worries health officials and not only because of rickets, the soft-bone disease linked
to vitamin D deficiency that was a scourge in decades past. Vitamin D is increasingly thought
to play a role in reducing the risk of osteoporosis, cardiovascular disease and tooth loss, as
well as in reducing mortality associated with colon, breast, prostate and other cancers.
Beelman said his research has shown that exposing growing mushrooms to three hours of
artificial UV light increases their vitamin D content significantly. That could be easier than
exposing fresh-picked mushrooms to light, Beelman said. The only drawback is that the white
button mushrooms — like people — tend to darken with increased UV exposure, he added.

Item 3

Aspirin was well-established as a pain reliever, as well as a substance that could help
heart disease sufferers keep down ongoing risks from clogged arteries. Then along came the
Cox-2 inhibitor class of painkillers – utterly unnecessary since aspirin was readily available
and much cheaper. Once some of these drugs became implicated in heart attacks – heart
attacks that were predicted in the clinical research, but never mentioned to the US Food and
Drug Administration by the companies developing them – the bloom was off the rose for
Cox-2-inhibiting painkillers. Furthermore, multi-billion-dollar class-action “product liability”
lawsuits followed in the wake of some of the heart attack deaths, while an enormous
investment in utterly unnecessary research and development remains to be recouped… What
to do?
Sure enough, as the following article points out, “over the past four years, medical
publications have become full of talk about ‘aspirin resistance’ -- suggesting that millions
who take an aspirin a day to prevent heart attacks are wasting their effort. If that is true,
widespread testing might be needed to detect the condition and doctors might have to turn to
aspirin substitutes costing $4 a day.

“But reports and commentary on the subject often fail to point out that many of those
raising alarms about aspirin resistance have financial ties with drug and test makers who stand
to profit from the idea’s acceptance.”

On this score, one of the leading such “drugs”, the anticlotting pill Plavix, has become
especially notorious. “Before Plavix we rarely heard a mention of aspirin
resistance,” the director of cardiology training at Cedars-Sinai Medical Center in Los Angeles
– who doesn’t test patients for aspirin resistance and accepts no industry funding or
consulting work – told The Wall Street Journal reporter compiling this story. “One has to
wonder if the commercial implications of this phenomenon trump scientific reality,” he
added.
What we have here is biomedical research and development as a 100-per-cent, 22-carat,
gold-plated defrauding of the public. Its operative disinformation goes something like this: “if
you don’t get tested for ‘aspirin resistance’, it may turn out that your failure to eat or drink the
poison we’re selling could be bad for you.”

Critical Dose

Aspirin Dispute Is Fueled by Funds Of Industry Rivals


A Cheap Remedy for Clotting Used by Millions of Patients Is Undermined by Research
as Bayer's Friends Fight Back
By DAVID ARMSTRONG
The Wall Street Journal, 24 April 2006, Page A1

Over the past four years, medical publications have become full of talk about “aspirin
resistance” -- suggesting that millions who take an aspirin a day to prevent heart attacks are
wasting their effort. If that is true, widespread testing might be needed to detect the condition
and doctors might have to turn to aspirin substitutes costing $4 a day.
But reports and commentary on the subject often fail to point out that many of those
raising alarms about aspirin resistance have financial ties with drug and test makers who stand
to profit from the idea’s acceptance.
Last July, Harvard Medical School associate professor Daniel Simon warned that aspirin
resistance may afflict as many as 30% of the 25 million Americans taking aspirin for their
hearts. He wrote in Physician’s Weekly, a trade publication, that these people are at higher
risk for heart attacks and strokes and may need other anti-clotting drugs.
The article didn’t mention that Dr. Simon receives research funding from Accumetrics
Inc., a privately held San Diego company that makes a test to measure aspirin resistance, and
from pharmaceuticals maker Schering-Plough Corp., which sells a drug being tested as a
potential benefit for patients deemed aspirin-resistant. He is also a consultant and paid
speaker for Schering-Plough. Physician’s Weekly Managing Editor Keith D’Oria says he
knew of the ties, but didn’t disclose them. He said the publication never discloses possible
conflicts and instead uses the information for other purposes, such as contacting drug
companies listed by doctors to see if they might place an ad near the doctor’s commentary.
The issue of aspirin resistance is a powerful example of how key academic researchers
with a financial interest can influence the care Americans receive. Fears of aspirin resistance
have boosted sales of the anticlotting pill Plavix, the world’s second best-selling drug after
cholesterol fighter Lipitor. Even some doctors who are trying to debunk aspirin resistance
have financial ties -- to aspirin maker Bayer AG. …

Item 4

In the context of biomedical science, the disinformation identified elsewhere in such
notions as “chemicals are chemicals” takes the forms “biochemicals are biochemicals” and
“neural pathways are neural pathways”. Such is the thinking underlying any and every
attempt to replicate the sequencing of cognitive acts by remote control outside that of the host
brain.

Scientists Probe the Use of the Tongue


By MELISSA NELSON, The Associated Press
Mon 24 Apr 2006 19:32 ET
See full text at Item 3 of Section A above

E. THE NATURAL PATH AS THE BASIS OF THE LAW AND OUTLOOK OF INDIGENOUS PEOPLE
“Warriors – Why we stand up”
Mohawk Nation News
5 November 2006.

Introduction

Sa-o-ie-ra; warriors in other Indigenous nations; why we went off the path; our effect on
non-indigenous society; criticisms; who was Karonhiaktajeh; history of the warriors; the goal
of peace; ganistensera; role of clans and women relatives.
Tekarontakeh, an elder Renskarakete, outlined some of the historical background on how
the “Warrior Society” was re-established. He said, “It was always here because the people
were always here. For a time it just wasn’t visible”.

Sa-o-ie-ra
“Sa-o-ie-ra” means it’s a natural instinct instilled in us by creation. Every creature on
earth has this natural instinct to protect itself, its species, its territories and everything that
sustains its life. We do what we must do to survive. A warrior doesn’t have to ponder whether
he should protect himself, his people, his clan, his land, water and everything that helps to
sustain his people. These are our sources of life. Our full understanding of the natural world is
how we will survive. When we are attacked, our instincts tell us that we have to defend
ourselves as fully and completely as possible. That’s how Creation made us.

All Indigenous Nations Have Warriors


The colonists are alarmed that our People are picking up this warrior image no matter
what Indigenous nation we come from. Why? They thought they had destroyed us through
colonization, genocide, fear and all the strategies that have been applied elsewhere in the
world. Our will comes from nature. Even the non-natives depend on that too. To destroy us
they destroy themselves.
A lot of time and effort is spent trying to dissolve this positive image of the warrior. At
Six Nations at the beginning of the reclamation of “Kanenhstaton”, in the confusion of people
rising up to defend the land, efforts were made by a few to not fly the “warrior flag”. We
argued that it signified that we want to fight for our survival as a people and to carry out our
instructions. In the end, not only did one or two flags come out, but the whole community put
the flag in front of their homes and on their cars. The colonists are trying to discourage our
people from identifying with us. More and more are coming to understand our position. We
have no choice but to live according to the men’s duties as outlined in our constitution, the
Kaiannereh’ko:wa/Great Law of Peace.

What Took Us off the Path? It Was the Genocide We Went Through
The colonists came here to conquer. They brought their strange religions and hierarchical
institutions. These were contrary to how we see life. We saw these as “man-made beliefs”.
Rather than relying on ourselves, they wanted to pacify us. They wanted our decisions to be
made by someone somewhere up on the hierarchical ladder. They attacked our spirit in many
ways so we lost confidence and could no longer defend ourselves or what was ours. They did
not want us to defend our lives that creation had prescribed for us. Our knowledge is based on
our observation of natural ways. This confusion destroyed many of us.
These invaders immediately tried to take over Turtle Island. One of the first strategies was
to use us for commerce and as military allies. In the meantime they worked to kill us off.
After a while they had successfully killed off 99% of us throughout the Western Hemisphere: 115
million of our people died off. We are the descendants of those few whose ancestors survived
the biggest holocaust in all mankind. They had to make some crucial decisions for us to be
here today.

Non-Indigenous Society Must Learn Too


What brought us back? It was the minds of our ancestors who thought about how we
were to survive in this new state of affairs. Our ancestors willed themselves so that we would
be here today. We have to educate ourselves about this past history and how we got through
it. It was the Kaianereh’ko:wa – which is modeled on the ways of the natural world. We just
have to look at what it does, think about and discuss it. Then we will find our answers.
The time has come for the non-Indigenous people to learn why we are standing up as we
do. We can’t educate only ourselves. We must educate the settlers too. Many non-native
people are interested in who the real Onkwehonwe are. It means “the real people forever”
because nature is here forever. At one time these non-natives had ancestors who were
indigenous somewhere else once too. They need to understand the world view we are coming
from. Maybe they have an instinctual desire to find a home. Many of them are sensitive
enough to understand that when you are in someone else’s land, you have to tread carefully.
What happened to us is that the guests came here and tried to kill off their hosts. This is bad
manners and a violation of nature. A parasite will die if it kills what it is living off of. You
could disrupt something and not even know you’re doing it.

Criticisms of the Warriors


We hear a lot of provocative statements about the warriors by those who don’t know very
much about “Rotiskenrakete”, the carriers of the soil of Turtle Island. Colonial society
translates this to mean “warrior”. They are not a legend. Warriors are real. Warriors are part
of the Kaianereh’ko:wa/Great Law of Peace.
There is not only a lack of understanding but a deliberate misrepresentation of the
warriors. White society is designed to control its people. To do this they have to instill fear in
everybody. The “warriors” have been stereotyped to evoke fear in the non-native people.
Because of this lack of understanding, at the Six Nations reclamation of our land and
other sites of resistance, our men have had to cover their faces. The warriors of old did not
have to deal with media manipulations, high tech police surveillance techniques, files, wanted
posters over police wires, or phony charges to criminalize them into submission to the
colonial system. This is the price we have to pay for carrying out our duties to our people.
The Rotisken’ra:kete did not fail in their duties. As a matter of fact, they have been
phenomenally successful. Had they failed, we would not be here today. Currently the
colonists are spending millions of dollars to discredit our Rotiskenrakete. But we are still here
and our flag is still flying. If the Ongwehonwe are finally destroyed, that is when the
Rotisken’ra:kete are destroyed.
The warrior flag is flown in many parts of the world by people who resist tyranny and
totalitarianism. The designer of the flag, Karonhiaktajeh, originally called it a “unity flag”. It
was the settlers who called it “a warrior flag”. On April 20th 2006 when the Ontario
Provincial Police tried to invade Six Nations and beat up our women and children, they
awoke the lions. Those flags went up right across Canada. Not one group, band council or
anyone condemned it. They knew that what we did was right. The propaganda to destroy our
spirit was lost. There have been continuous weak attempts here and there to defame us.

Karonhiaktajeh
Karonhiaktajeh was a mentor to the Rotisken’ra:kete. He was vilified during his lifetime.
Now he is revered and has become a legend because he adhered to the philosophy of the
Great Law. His book, “The Warrior’s Handbook”, has been read by many Rotisken’ra:kete
everywhere.
The warriors had become dormant for some time. But they always existed underground.
When the women resisted the imposition of the Indian Act in Kahnawake in 1889, the
warriors were right there behind the women when they made objections to the government.
They were always there supporting the people who stood up for the Great Law.
It all started back in Kahnawake in the 1950’s with the formation of the Singing Society.
We know that the place of the Rotisken’ra:kete is embedded in the Great Law. In our past the
Rotisken’ra:kete were always respected.
The young Rotisken’ra:kete went to the traditional council in Kahnawake. We needed
guidance from the older people. We found very few around to help us. Many had forgotten
the traditional role of the men in Indigenous society. Many had put our history aside.

Historically
Historically warriors have been outstanding soldiers. This is only one aspect. There are
examples like the battle of Chateauguay where 250 Mohawks stood against 7000 Americans
and repelled them. There are many other instances such as when 80 stood against 2500 at
Queenston Heights while the British ran away north to St. Catharines, Ontario. At
Michilimakinac only a few canoe loads of warriors fought against thousands. Many colonists
did not want to fight the Rotisken’ra:kete. Some adversaries would just give up and leave.
The only real adversaries were other Indigenous nations. Both nations respected the specific
rules of war as outlined in the Kaianereh’ko:wa/Great Law of Peace. The Law states that the
“war is not over until it is won”. Many make the mistake of saying, “We will fight until we
die”. But it is really, “We will fight until we win”.
Many of our tactics were imitated by the settler society. Guerrilla warfare was adopted by
the Europeans. For example, the elite covering themselves in black so as not to be seen in the
forest is one of our tactics.

Peace Is the Warrior Goal


We always look towards peace throughout any of our battles. There is always a plan for
peace. This does not mean making another nation subservient. It means that the nation we
fight must understand that we will fight them until they agree to live peacefully with us under
the Great Law.
We never set out to destroy the other nation. They must agree to live in peace and
equality. If not, then we absorb them and they are no more. No nation wanted that.
If they agreed to the peace, they would retain their language, culture, government, land
and ways.
In our minds, we always wanted to make peace. If the adversary did not want to make
peace with us, then the black wampum string would be dropped. This meant that we would
then fight until we won. The war is not over until we win. In 1784 the United States sued the
Confederacy for peace. Because they had done so much harm to the Confederacy, we had
continued to fight the newly formed United States. Being a peaceful-minded people, we
agreed to enter into a peaceful agreement with them, which we have lived up to and they
continue to violate.
The Rensken’ra:kete is not a soldier for the king. He is a soldier of all the People. His
name comes from okenra tsi rokete – the soil that he carries in his pouch to remind him of
who he is – a son, a man, a brother, a husband, a father, a grandfather. It reminds him what
his duties are as a man. He must show utmost respect to the female. The two combined - the
male and female - is the continuation of life. He carries the soil in his pouch which he hangs
around his neck. When he is away he touches this to remind himself of the land and the
people he comes from.

Ganistensera
All the warrior’s power begins with his mother – ganistensera. He is connected to her
through the onerista, the navel. This is how he is connected to all the women relatives on his
mother’s side. The istenha is the one who carried him and gave him life. That is who his
“creator” is.

Importance of the Clans


The clans oversee the development of all the children. They watch the children closely
throughout their lives. Each one has brought some qualities and gifts with them to help the
people. We don’t know what these are. So we must watch them and help them reveal their
special gifts to us.
We looked for qualities in each child. For example, Ayonwatha’s family is the record and
wampum keeper. The young men learn all the rituals they need to have a strong memory. The
Tekarihoken family are the Turtle. They are taught to be neutral and to look at all sides. The
Wolf listens to both sides. The War Chief comes out of the Wolf family. Once he takes that
position he must carry himself as though he has no clan because he is there for all the clans.
He is given the title of Ayonwes who interacts with the outside nations. The clan mother of
the Tekarihoken carries the title of the Tehatirihoken. They learn how to separate things and
identify the elements for what they are. They examine everything totally. If a Royaner/chief
should die, they can replace him immediately. The chiefs are trained and raised for certain
titles. Every clan has particular attributes. It is everyone’s responsibility to develop them.

Our Clan Mothers


Te-ha-ti-ka-on-ion are the people who observe these young people and find the ones with
special qualities. Both boys and girls are trained for basic responsibilities and for particular
duties. They must all learn the basic aspects of Indigenous life. Some are found to be better
than others in carrying out certain responsibilities. The women are especially adept at finding
the required qualities in a young person to be trained for certain duties. Some might have
knowledge but not the temperament. By observing them, their special qualities emerge and
come to the surface. This becomes clear to the community as to what talents this young
person brought with him or her to help their people.

Our Women Relatives


In their early years, the young males spend all their time with their women relatives. This
enables them to know the duties and responsibilities of the women. They develop a profound

respect for the women and all people. They also spend a lot of time with their grandparents,
listening to them, looking for plants, animals and fruits. The whole process of developing
their observation skills starts from birth.

REFERENCES
Armstrong, David. 2006. “Critical Dose: Aspirin Dispute Is Fueled By Funds Of Industry
Rivals, A Cheap Remedy For Clotting Used By Millions Of Patients Is Undermined By
Research As Bayer's Friends Fight Back”, The Wall Street Journal (New York: Dow
Jones - 24 Apr), Page A1.
Bains, Hardial S. 1967. Necessity for Change (Toronto: National Publications Centre. 30th
Anniversary Edition - 1997).
Begley, Sharon. 2006. “SCIENCE JOURNAL: Quantitative Analysis Offers Tools To Predict
Likely Terrorist Moves”, The Wall Street Journal (New York: Dow Jones – 17 Feb),
Page B1.
Bejerano, G. et al. 2004. “Ultraconserved Elements in the Human Genome”, Science 304
[May]:1321-1325.
Bridges, Andrew. 2006. “Mushrooms Are Unlikely Source Of Vitamin D”, The Associated
Press newswire (Tue 18 Apr 02:17 ET).
Freeth, T. et al., 2006, “Decoding the ancient Greek astronomical calculator known as the
Antikythera Mechanism”, Nature 444, (30 Nov) pp 587-591; and also John Noble
Wilford, 2006, “An Ancient Computer Surprises Scientists”, The New York Times [29
Nov].
Gibbs, W.W. 2003. “The unseen genome: gems among the junk”, Scientific American 289(5):
46-53.
Gardner, M. 1983. “The Game of Life, Parts I-III”, Chh. 20-22 in Wheels, Life, and other
Mathematical Amusements (New York: W. H. Freeman).
Golden, Daniel. 2006. “Darwinian Struggle: At Some Colleges, Classes Questioning
Evolution Take Hold; ‘Intelligent Design’ Doctrine Leaves Room For Creator; In Iowa,
Science On Defense; A Professor Turns Heckler”, The Wall Street Journal (New York:
Dow Jones – 14 Nov), Page A1.
Golden, Frederic and Wynn, Wilton. 1984. “Rehabilitating Galileo's image: Pope John Paul
moves to correct a 350-year-old wrong”, Time (New York: Time-Warner, 12 Mar -
International edition) p.72, last accessed 17 December 2006 at
http://www.cas.muohio.edu/~marcumsd/p111/lectures/grehab.htm.
Cragg, G.M. and Newman, D.J., 2001, “Medicinals for the Millennia: The Historic Record”,
Annals of the New York Academy of Sciences, vol. 953, 3-25; Also, J. Steenhuysen, 2007,
‘Mother Nature Still a Rich Source of New Drugs’, Reuters [20 Mar].
Lu, P.J. and Steinhardt, P.J., 2007, “Decagonal and Quasicrystalline Tilings in Medieval
Islamic Architecture,” Science 315 [27 Feb], p. 1106.
Kettlewell, Julianna. 2004. “‘Junk’ throws up precious secret”, BBC News Online (London,
12 May), last accessed 17 December 2006 at http://news.bbc.co.uk/2/hi/science/nature
/3703935.stm
Truth, Consequences and Intentions 65

Knox, Noelle and Fetterman, Mindy. 2006. “Need To Keep House Payments Low? Try A 50-
Year Mortgage”, USA Today (Washington DC: Gannett – 10 May 06:54 ET).
Hegel, G.W.F. 1952 [1821]. Philosophy of Right (Oxford: Clarendon Press, trans. T. M. Knox).
Islam, M.R. 2003. Revolution in Education (Halifax, Canada: EEC Research Group ISBN 0-
9733656-1-7).
--------------. 2006. “Computing for the Information Age”, Keynote speech, 36th International
Conference on Computers and Industrial Engineering, Taiwan, June, 2006.
--------------, Mousavizadegan, H., Mustafiz, S., and Belhaj, H. 2007. A Handbook of
Knowledge-Based Reservoir Simulation, Gulf Publishing Co., Houston, TX, to be
published in March, 2007.
-------------- and Zatzman, G.M., 2006b, “Emulating Nature in the Information Age”, 57th
Annual Session of Indian Chemical Engineering Congress [Chemcon-2006], Dec. 27-30,
Ankelshwar, India.
--------------, Shapiro, R., and Zatzman, G.M., 2006, “Energy Crunch: What more lies ahead?”
The Dialogue: Global Dialogue on Natural Resources, Center for International and
Strategic Studies, Washington DC, April 3-4, 2006.
Ketata, C., Satish, M.G. and Islam, M.R., 2007, “Dynamic Numbers for Chaotic Nature”,
ICCES 07, Miami, Florida, Jan.
Khan, M.I. and Islam, M.R., 2007, A Handbook of Sustainable Petroleum Management and
Operations, 550 pp. (approx.) (Houston TX: Gulf Publishing Co., in press).
--------------, Zatzman, G. M., and Islam, M. R., 2005. “A Novel Sustainability Criterion as
Applied in Developing Technologies and Management Tools”, in Second International
Conference on Sustainable Planning and Development. Bologna, Italy.
McLuhan, H. Marshall. 1964. Understanding Media: The Extensions of Man (New York:
McGraw-Hill). There is a 2004 edition from MIT Press in Cambridge MA, with a new
introduction by Harper’s magazine editor Lewis Lapham.
Mousavizadegan, H., Mustafiz, S., and Islam, M.R., “The Knowledge Dimension: Towards
Understanding the Mathematics of Intangibles”, J. Nat. Sci. Sus. Tech., in press.
Nelson, Melissa. 2006. “Scientists Probe The Use Of The Tongue”, The Associated Press
newswire (Mon 24 Apr 19:32 ET).
Parker-Pope, Tara. 2006. “Prevention: The Case Against Vitamins - Recent Studies Show
That Many Vitamins Not Only Don’t Help. They May Actually Cause Harm”, The Wall
Street Journal (New York: Dow Jones - 20 Mar), Page R1.
Pauling, Linus. 1968. “Orthomolecular psychiatry: varying the concentrations of substances
normally present in the human body may control mental disease”, pp. 265-271 and
“Vitamin therapy: treatment for the mentally ill”, Science 160.
Peplow, Mark. 2006. “A Universal Constant On The Move: Is The Proton Losing Weight, Or
Has The Fabric Of The Universe Changed?”, Nature.com (20 Apr).
Sen, Amartya. 2006. “Democracy Isn’t ‘Western’”, The Wall Street Journal (New York: Dow
Jones – 24 Mar).
Shane, Scott. 2006. “Zarqawi Built Global Jihadist Network On Internet”, The New York
Times (9 Jun).
“Warriors – Why We Stand Up”, Mohawk Nation News (5 November 2006).
Website 1: http://www2.forthnet.gr/presocratics/heracln.htm
Wexell Severo, Luciano. 2006. “In Venezuela, Oil Sows Emancipation”, Rebelión (Madrid –
12 Mar, tr. Julio Huato).

Wolfram, S. 2002. A New Kind of Science (Champaign, IL: Wolfram Media).


Zatzman, G.M. and Islam, M.R. 2006. Economics of Intangibles (New York: Nova Science
Publishers), 393 pp.
Zatzman, G.M., 2006, “The Honey → Sugar → Saccharin® → Aspartame®, or HSS®A®
Syndrome: A note”, J. Nature Science and Sustainable Technology, vol. 1, no. 3.
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 67-96 © 2008 Nova Science Publishers, Inc.

Chapter 2

A COMPARATIVE PATHWAY ANALYSIS OF A


SUSTAINABLE AND AN UNSUSTAINABLE PRODUCT

M. I. Khan*, A. B. Chettri and S. Y. Lakhal a

Faculty of Engineering, Dalhousie University, Halifax, NS, B3J 2X4, Canada
a Faculty of Business Administration, University of Moncton, Moncton, NB, E1A 3E9, Canada

ABSTRACT
Generally, the idea of ‘sustainability’ implies a moral responsibility on technological
development to be accountable for its effects on the natural environment and on future
generations. However, most widely accepted technological developments are not sustainable.
The main contentious issue is that most of them are misrepresented as ‘sustainable’ because
of improper sustainability assessment criteria. With a recently developed sustainability
criterion, it can be demonstrated that most of the technologies that belong to the ‘chemical
approach’ are not sustainable. In this paper, a detailed pathway study is performed, covering
origin, degradation, oxidation and decomposition, in order to demonstrate how a natural
product is sustainable while its synthetic counterpart is unsustainable. Two homologous
products, polyurethane fiber and wool fiber, were selected for the sustainability assessment.
They appear to be the same, including in terms of durability; however, one is of natural origin
and the other is made from hydrocarbon products. The pathways used to make these products
(chemical for polyurethane, biochemical for wool) were studied, and the results show how
they diverge. Their degradation behavior, both oxidative and photodegradation, was also
studied; the findings support the sustainability of wool and the unsustainability of the other.
Finally, a direct laboratory degradation experiment, the application of microwaves to both
products, was undertaken. The experimental results further confirmed the sustainability of
non-synthetic wool fiber and the unsustainability of polyurethane.

INTRODUCTION
With a recently developed sustainability criterion (Khan et al., 2005), it can be
demonstrated that most of the technologies that belong to the ‘chemical approach’ are not
sustainable. Khan (2006) developed a method to evaluate sustainability considering its
economic, environmental and social impacts.
Natural fibers exhibit many advantageous properties: they are low-density materials that
yield lightweight composites with high specific properties (O’Donnell, 2004). These natural
fibers are cost-effective, easy to process, and a renewable resource, which in turn reduces
dependency on foreign and domestic petroleum.
Modern technological advancement brings many different products for the daily uses of
human life. Most of them are not environmentally friendly and cause numerous problems.
However, these products have been so widely accepted that no one asks whether they are
sustainable or not. Nowadays, among the most popular household items are plastics, which
are completely unsustainable, environmentally unacceptable, and incontrovertibly harmful to
the ecosystem.
In the last two decades, especially after the UN Conference on Environment and
Development, ‘sustainability’ and ‘sustainable development’ have become very commonly
and loosely used terms. However, sustainability is hardly achieved in present technological
and other resource development (Khan, 2006). Many guidelines or frameworks have been
developed to achieve sustainability (GRI, 2002; UNCSD, 2001; IChemE, 2002), but they are
based mainly on socio-economic and narrowly environmental objectives. Khan (2006)
proposed a new protocol for technological as well as other developments.

Table 1. Basic Differences between Polyurethane and Wool Fiber

Type
Polyurethane: Artificial fiber; an alien product to nature.
Wool: Natural fiber, which grows on most organisms.

Composition
Polyurethane: Urethane monomer; a completely homogeneous compound with a single repeating pattern.
Wool: Made of alpha-keratin, a most valuable protein. Wool is a heterogeneous compound that varies from species to species; even within a single species the protein itself is different and complex.

Diversity
Polyurethane: There is no diversity in urethane.
Wool: Highly diverse; its synthesis is a complex process of which very little is known so far. Its different segments are like different monomers.

Functionality
Polyurethane: Single-functional, just as plastic.
Wool: Multifunctional, such as protection of the organism and supply of nutrients.

Adaptability
Polyurethane: Non-adjustable and non-adaptable; it cannot change itself as natural products do.
Wool: It can adapt to changes in conditions such as temperature, humidity and light intensity. It protects itself and the organism on which it grows.

Time factor
Polyurethane: Non-progressive; it does not change with time.
Wool: Progressive; it changes with time, for example by degrading over time.

Perfectness
Polyurethane: It creates all kinds of problems, from carcinogenic products to unknown products.
Wool: It is perfect and does not create problems; instead, it solves them.

In this research, the Khan (2006) protocol is applied to find out which technology is
sustainable and which one is unsustainable. Two products, polyurethane fiber and wool fiber,
were selected for study. They both appear to be the same, and in terms of durability both are
unbeatable. However, one is natural and the other synthetic. Further examination indicates
that the similarity between wool and polyurethane fibers stops at t = ‘right now’. Table 1
shows detailed differences between these two seemingly similar fibers.
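The qualitative contrast summarized in Table 1 can be tallied in a few lines of Python. This is a toy illustration only: the actual Khan (2006) protocol is more involved, and the criterion names below are paraphrases of the table rows, not part of that protocol.

```python
# Score each fiber against the qualitative criteria summarized in Table 1.
# True means the fiber satisfies the natural/sustainable side of the criterion.
CRITERIA = ["natural origin", "compositional diversity", "multifunctionality",
            "adaptability", "changes with time", "problem-free"]

wool = dict.fromkeys(CRITERIA, True)            # per Table 1, wool column
polyurethane = dict.fromkeys(CRITERIA, False)   # per Table 1, polyurethane column

def tally(scores):
    """Return (criteria satisfied, criteria total)."""
    return sum(scores.values()), len(scores)

print("wool:", tally(wool))                  # wool: (6, 6)
print("polyurethane:", tally(polyurethane))  # polyurethane: (0, 6)
```

The point of the sketch is simply that the two fibers sit at opposite ends of every criterion in the table, which is why the divergence only becomes visible once time is considered.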

METHODOLOGY
A scientific analysis of wool and polyurethane fiber was carried out based on their
structure, manufacturing and processing. Detailed analyses of their pathways were carried out
along with their comparative characteristics. Laboratory microwave degradation was also
investigated, using a Samsung Mini-Chef MW101OC microwave oven running on a standard
120 V, 60 Hz supply. SEM microphotography was also conducted to examine the structural
differences between the two fibers.
In addition to the manufacturing of these products, the recycling and waste management
of polyurethane were also examined. A recently proposed sustainability model (Khan, 2006)
was applied to evaluate the sustainability of polyurethane and wool.

RESULTS AND DISCUSSIONS


Chemical Composition of Polyurethane Fiber

Polyurethane fiber is a polymeric fiber. In a polyurethane molecule, urethane linkages
are in the backbone. Figure 1 shows a simple polyurethane chain. A more complex form can
be any polymer containing the urethane linkage in its backbone chain. Crystals can form in
any object in which the molecules are arranged in a regular order and pattern.

Figure 1. Polyurethane polymeric fiber.



A polymeric fiber is a polymer whose chains are stretched out straight (or close to
straight) and lined up next to each other, all along the same axis (Figure 1). The urethane
linkage that forms the polyurethane chain is presented in Figure 2.
Polymers arranged in fibers can be spun into threads and used as textiles (Figures 3 and
4). Clothes, carpet and rope are made out of polymeric fibers. Some other polymers which
can be drawn into fibers are polyethylene, polypropylene, nylon, polyester, Kevlar, Nomex
and polyacrylonitrile. Figure 4 shows a scanning electron microscope (SEM)
microphotograph of polyurethane. Each fiber is linear and shows no scales or segmentation,
which differs from natural fiber (Khan and Islam, 2005c).

Figure 2. Urethane linkage in polyurethane chain.

Figure 3. Polyethylene or nylon fiber.

Biochemical Composition of Wool

Wool is an extremely complex, natural and biodegradable protein fiber which is both
renewable and recyclable. It has an inbuilt environmental advantage because it is a natural
fiber grown without the use of any herbicides or fertilizers. Wool fibers grow in small
bundles called ‘staples’, which contain thousands of fibers. Wool fiber is so resilient and
elastic that it can be bent and twisted over 30,000 times without danger of breaking or being
damaged (Canesis, 2005). Every wool fiber has a natural elasticity that allows it to be
stretched by as much as one third and then spring back into place.
As a biological product, wool is composed mainly of 45.2% carbon, 27.9% oxygen,
6.6% hydrogen, 15.1% nitrogen and 5.2% sulphur. About 91 percent of the wool is made up
of alpha keratins, which are fibrous proteins. Amino acids are the building blocks of the alpha
keratins. The keratin found in wool is called "hard" keratin. This type of keratin does not
dissolve in water and is quite resilient. Keratin is an important, insoluble protein made from
eighteen amino acids. The amino acids present in wool include cysteine, aspartic acid, serine,
alanine, glutamic acid, proline, threonine, isoleucine, glycine, tyrosine, leucine,
phenylalanine, valine, histidine, arginine and methionine. The most abundant of these amino
acids is cystine, which gives hair much of its strength.
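As a quick arithmetic check, the elemental mass fractions quoted above can be converted to approximate molar ratios. This is a back-of-envelope sketch: standard atomic weights are assumed, and real wool composition varies from sample to sample.

```python
# Convert wool's elemental mass percentages (from the text) to molar ratios,
# normalized so that sulphur = 1.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
MASS_PERCENT = {"C": 45.2, "O": 27.9, "H": 6.6, "N": 15.1, "S": 5.2}

moles = {el: pct / ATOMIC_WEIGHT[el] for el, pct in MASS_PERCENT.items()}
ratios = {el: m / moles["S"] for el, m in moles.items()}  # normalize to S = 1

print(round(sum(MASS_PERCENT.values()), 1))  # 100.0: the quoted fractions are exhaustive
for el, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(el, round(r, 1))  # hydrogen leads on a molar basis (about 40 atoms per S atom)
```

The ordering by moles (H > C > O > N > S) differs from the ordering by mass, which is worth keeping in mind when comparing wool's composition with that of a synthetic polymer.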

Figure 4. SEM microphotograph of polyurethane fiber.

The amino acids are joined to each other by chemical bonds called peptide bonds or end
bonds. The long chain of amino acids is called a polypeptide chain and is linked by peptide
bonds (Figure 5). The polypeptide chains are intertwined around each other in a helix shape.
The wool is made up of alpha keratins, fibrous proteins consisting of parallel chains of
peptides. The various amino acids in the keratin are bound to each other via special
'peptide' bonds to form a peptide chain. The linear sequence of these amino acids is called the
primary structure. However, these bound amino acids also have a three-dimensional
arrangement. The arrangement of neighboring amino acids is the secondary structure. The
secondary structure of 'alpha' keratin is that of an alpha helix and is due to the amino acid
composition of the primary structure. This coiled structure of the amino acid chain is
depicted in Figure 5.

Figure 5. Structural bond for amino acid chain.

The molecular structure of wool fibers behaves like a helix, which gives wool its flexibility
and elasticity (Figure 6). The hydrogen bonds (dashed lines) that link adjacent coils of the
helix provide a stiffening effect. Figure 7 indicates that wool has several micro air pockets
which retain air. Still air pockets have excellent insulation qualities, making wool fibers
ideal for thermal protection (Rahbur et al., 2005).

Figure 6. Alpha helix wool structure (source Canesis, 2004).

Trapped air
'particles'

Skin surface

Figure 7. Insulating pockets of still air (source Canesis, 2004).

The alpha helix in wool is reinforced by weak hydrogen bonding between amino acids
above and below other amino acids in the helix (Figure 8). In wool, three to seven of these
alpha helices can be curled around each other to form three-strand or seven-strand ropes.

Figure 8. Chemical bond of alpha helix (source Kaiser, 2005).

Alpha keratin is one of the proteins in hair, wool, nails, hoofs and horns. It is also a
member of a large family of intracellular keratins found in the cytoskeleton. In keratin fibers,
long stretches of alpha helix are interspersed with globular regions. This pattern is what gives
natural wool fibers their stretchiness. In the keratin represented here, the first 178 amino acids
and the last 155 form globular domains on either end of a 310-amino-acid fibrous domain.
The fibrous region itself is composed of three helical regions separated by shorter linkers. In
the fibrous domain, a repeating 7-unit sequence stabilizes the interaction between pairs of
helices in two adjacent strands that wind around each other to form a duplex. In the formation
of this coil, the more hydrophobic amino acids of the 7-unit sequence meet to form an
insoluble core, while charged amino acids on opposing strands attract each other to stabilize
the complex (Canesis, 2004). A magnified wool fiber is shown in Figure 9(a); Figures 9(b)
and 9(c) show staples from fine-woolled and coarse-woolled sheep. An SEM image of wool
is shown in Figure 10; it shows that the natural wool fiber is covered with many scales
(Figure 10).

Figure 9. a) Magnified wool fiber b) Staples from fine woolled sheep c) Staples from coarse woolled
sheep (source: Canesis, 2004).

Pathways of Polyurethane

Detailed pathways of polyurethane are presented in Figure 11. This product is manufactured
from hydrocarbons. Hydrocarbon exploration is environmentally very expensive and
causes many environmental problems (Khan and Islam, 2005a; 2005c and 2007; Khan et al.,
2006b). Presently available hydrocarbon refining processes also use toxic catalysts and heavy
metals (Lakhal et al., 2005). Each production step, especially monomer to dimer,
oligomers and finally polymers used in polyurethane, involves many toxic catalysts and
releases known and unknown toxic and carcinogenic compounds (Table 2 and Figures 11 and
13).

Figure 10. SEM photomicrograph of wool fiber showing the presence of scales.

In the presence of a small molecule called diazabicyclo[2.2.2]octane (DABCO), a diol and a
diisocyanate form a polymer. When the two monomers are stirred with DABCO,
polymerization takes place; as it continues, a urethane dimer is formed.

[Figure 11 shows two parallel flowcharts. The polyurethane pathway runs: exploration (with
drilling mud and heavy metals) → refining (with toxic catalysts and toxic compounds) → diol
and diisocyanate (with DABCO) → primary monomer → urethane dimer → oligomers →
polyurethane, releasing highly toxic and carcinogenic compounds at each step. The wool
pathway runs: vegetable protein → digestion (with enzymes, yielding beneficial compounds) →
metabolism (with enzymes, ATP and NADPH) → amino acids → poly-peptides → alpha-keratin →
sheep's wool.]

Figure 11. Pathways of unsustainable polyurethane and the inherently sustainable product wool; both
have similar functions.

This urethane dimer has an alcohol group on one end and an isocyanate group on the
other, so it can react with either a diol or a diisocyanate to form a trimer. It can also
react with another dimer, a trimer, or even higher oligomers. In this way, monomers and
oligomers combine repeatedly until a high molecular weight polyurethane is obtained (Figures 11
and 12).
When polyurethane degrades, the most toxic form of PBDEs (penta-BDE) escapes
into the environment. PBDEs have been detected in elevated amounts in human breast milk.
These compounds are commonly found in everyday consumer products (Table 2). They are
highly toxic, and exposure causes adverse health effects including thyroid hormone
disruption, permanent learning and memory impairment, behavioral changes, hearing deficits,
delayed puberty onset, decreased sperm count, fetal malformations and, possibly, cancer
(Lunder and Sharp, 2003). Lunder and Sharp (2003) report that exposure to PBDEs
during infancy causes more significant harm at much lower levels than exposure during
adulthood. The recently reported breast milk contamination by PBDEs might create a disaster
in the near future.

Figure 12. Chemical reaction of urethane production using diisocyanate and diol in the presence of
DABCO.

Figure 13. Ethylene Glycol Oxidation Pathway in Alkaline Solution (After Matsuoka et al., 2005)

Scrap flexible polyurethane foam from slabstock manufacturing poses a serious
environmental threat (Molero et al., 2006). That study reported that glycolysis of flexible
polyurethane foams is carried out to chemically recycle the polyol, a constituent of the
polyurethane manufacturing process. Among the various glycols, diethylene glycol was found
the most effective for obtaining a high-purity polyol phase. The polyurethane foam is thus
contaminated by glycol, which during oxidation produces toxic carbon monoxide. Matsuoka et
al. (2005) studied the electro-oxidation of methanol and ethylene glycol and found that
electro-oxidation of ethylene glycol at 400 mV gave glycolate, oxalate and formate (Figure 13),
among which glycolate and formate produce toxic CO emissions. The glycolate was obtained by
three-electron oxidation of ethylene glycol and was an electrochemically active product even
at 400 mV, which led to further oxidation of the glycolate. Oxalate was found stable, with no
further oxidation, and was termed the non-poisoning path. The other product of glycol
oxidation, formate, is termed the poisoning or CO-poisoning path. Glycolate
formation decreased from 40 to 18% and formate increased from 15 to 20% between
400 and 500 mV. Thus, ethylene glycol oxidation produces CO instead of CO2 and follows
the poisoning path above 500 mV. Glycol oxidation produces glycolaldehyde as an
intermediate product.

Table 2. Polybrominated diphenyl ether (PBDE) fire retardants, which contaminate
mothers' breast milk, are found in everyday consumer products

Material | Types of PBDEs used | Examples of consumer products
Polyurethane fibers | Deca, Penta | Back coatings and impregnation of home and office furniture, industrial drapes, carpets, automotive seating, aircraft and train seating, insulation in refrigerators, freezers, and building insulation
Polyurethane foam | Penta | Home and office furniture (couches and chairs, carpet padding, mattresses and mattress pads), automobile, bus, plane and train seating, sound insulation panels, imitation wood, packaging materials
Plastics | Deca, Octa, Penta | Computers, televisions, hair dryers, curling irons, copy machines, fax machines, printers, coffee makers, plastic automotive parts, lighting panels, PVC wire and cables, electrical connectors, fuses, housings, boxes and switches, lamp sockets, waste-water pipes, underground junction boxes, circuit boards, smoke detectors
Source: WHO (1994); Lunder and Sharp (2003).

Various amines used in polyurethane manufacturing have significant impacts on human
health as well as on the environment. Samuel and Steinman (1995) found that laboratory
reports on animals showed diethanolamine (DEA) to be a carcinogen with major impacts on
the kidney, liver and brain. Nitrosamine, a byproduct of DEA, is also considered a
carcinogen.

[Figure 14 shows the flowchart: plant decomposition → hydrocarbon deposition → purified
hydrocarbons → monomers → dimers → polymers → polyurethane → non-degradable wastes; the
cycle is incomplete.]

Figure 14. Incomplete life cycle of polyurethane due to non-biodegradability.

Pathways of Wool

Wool is a natural product and it follows a completely natural path that has no
negative environmental impacts. Figure 11 shows the pathways of sheep wool. In the whole
process, a sheep takes vegetation as food. Sheep digest plant leaves and grass into
simple nutrients that the body can absorb readily. These simple nutrients are converted into
amino acids, poly-peptides and finally into alpha keratin, which is the composition of wool.
The whole process takes place through biological activities. As a result, no toxic elements,
gases or products are released during the wool generation process. Some biological
products/byproducts that are generated are actually essential for the sheep; one such
component is adenosine triphosphate (ATP). Therefore, the wool generation process is truly
sustainable and can run for an infinite time period without harming the environment.

Degradation of Polyurethane

Polyurethane and other plastic products are widely known for their non-degradability.
The incomplete lifecycle of polyurethane is shown in Figure 14. It creates irreversible
environmental problems. Generally, synthetic polymers are rarely biodegradable; degradation
requires that the polymer chain contain, apart from carbon atoms, N and O atoms at which
oxidation and enzymatic degradation can take place. Synthetic polymers are only
susceptible to microbial degradation if biodegradable constituents are introduced into
them during the technological process. Overall, the degradation rate of polyurethane depends
on the components of the polymer, their structure and the plasticizers added during
manufacturing. Polyurethanes containing polyethers are reported to be highly resistant to
biodegradation (Szostak-Kotowa, 2004).

A main use of polyurethane fiber is plastic carpets. Several recent studies have
linked plastic carpets to higher occurrences of asthma (Islam, 2005c). This is due to
small particulates entering the human lung and joining the human oxidation cycle, which
involves billions of cells of the body. When a plastic product, including polyurethane, is
burnt, it releases some 400 toxic products. Similarly, low-temperature oxidation (LTO) can
release the same toxic products even at room temperature. According
to Islam (2005a), the point frequently overlooked here is that, in a manner likely analogous to
the LTO identified in petroleum combustion (Islam et al., 1991; Islam and Ali, 2001), oxidation
products are released even when the oxidation takes place at the relatively low temperature of
the human respiration process. To date, little has been reported about the LTO of
polyurethane in the human health context. Only recently has an epidemiological study linked
polyurethane to asthma (Jaakkola et al., 2006).

Degradation of Wool

Wool is a natural product and microorganisms can decompose it. It is a bio-based
polymer and an integral part of ecosystem function. Biopolymers are thus capable of being
utilized (biodegraded) by living matter and so can be disposed of safely and in ecologically
sound ways through disposal processes (waste management) such as composting, soil
application and biological wastewater treatment (Narayan, 2004). Biobased materials such as
wool offer value in the sustainability/life-cycle equation because they become part of the
biological carbon cycle. Life Cycle Assessments (LCAs) of these biobased materials indicate
reduced environmental impact and energy use compared with petroleum-based materials.
Figure 15 shows the life cycle of wool; the complete cycle shows the natural regeneration of
wool.
The degradation of wool is caused by both bacteria and fungi. However, keratin is
degraded mainly by fungi, especially those belonging to the genera Microsporum,
Trichophyton, Fusarium, Rhizopus, Chaetomium, Aspergillus and Penicillium. Further
investigations of the biodegradation of wool by fungi indicate that keratinolysis proceeds by
denaturation of the substrate through cleavage of the disulphide bridges, which are the source
of keratin's natural resistance, followed by hydrolytic degradation of the protein via
extracellular proteinases. The rate of bacterial degradation depends on the chemical
composition and molecular structure as well as the degree of substrate polymerization
(Agarwal and Puvathingal, 1969). Besides these, keratinolytic bacteria of the genera Bacillus
(B. mesentericus, B. subtilis, B. cereus and B. mycoides) and Pseudomonas, and some
actinomycetes, e.g. Streptomyces fradiae (Agarwal and Puvathingal, 1969), have more
influence on the degradation process.
Microorganisms attack wool at various stages from acquisition to utilization. In general,
the fatty acids in wool give it high resistance to microbial attack. However, the raw material
contains many impurities, which make the wool highly susceptible to microbial degradation.
McCarthy and Greaves (1988) reported that bacteria simultaneously degrade and stain
impure wool; for example, Pseudomonas aeruginosa causes green coloration of wool under
alkaline conditions and red coloration under acid conditions.

[Figure 15 shows the cycle: food sources for sheep → amino acids → protein synthesis →
poly-peptides → alpha-keratins → sheep's wool → microbial decomposition → poly-peptides →
plant nutrients and nitrogen → back to food sources for sheep.]

Figure 15. Complete lifecycle of wool, showing the natural regeneration process.

A laboratory study of the degradation of wool was carried out (Khan and Islam, 2005c). The
wool was heated in a microwave oven and the degradation was compared with the original
structure. Figures 16 A and B show the wool before and after microwave degradation.
Interestingly, the natural wool fiber does not change structurally, in contrast to the synthetic
product polyurethane. Figures 17 A and B show the change of polyurethane fiber due to
microwave treatment.
Both wool and polyurethane were treated in a microwave under similar conditions.
Within the same time period the natural wool fiber did not change at all, but the polyurethane
changed completely: it became liquid and then turned into a solid ball, giving off a
strong burning smell. This experimental result demonstrates the resilience of natural wool.
This quality makes wool an ideal fiber and naturally safer. Polyurethane, on the other hand, is
inherently harmful to the environment. Electron microscopy (Figures 16 A and B) shows that
the chemical composition and the presence of moisture enable wool to resist burning;
instead of burning, wool chars when exposed to flame. A comparison of the characteristics of
wool and polyurethane fiber is shown in Table 5.

Figure 16. SEM photomicrograph of wool fiber before (A) and after (B) microwave oxidation.

Figure 17. SEM photomicrograph of polyurethane before (A) and after (B) microwave treatment.

RECYCLING OF POLYURETHANE WASTE


Polyurethane has been used on a massive scale in the manufacture of appliances,
automobiles, bedding, carpet cushion and upholstered furniture. Industries are
recovering and recycling polyurethane waste materials from discarded products as well as
from manufacturing processes. Plastic waste is generally recycled in
three ways: mechanical, chemical and thermal recycling. Mechanical recycling consists of
melting, shredding or granulating waste plastic. Although plastic is sorted manually,
sophisticated techniques such as X-ray fluorescence, infrared and near-infrared
spectroscopy, electrostatics and flotation have recently been introduced (Website 1). Sorted
plastic material is melted down directly and moulded into a new shape, or melted down after
being shredded into flakes and processed into granules. Plastic wastes are highly toxic
materials, and further exposure to X-rays or infrared rays makes the products even more toxic.
The chemical or thermal recycling process for converting certain plastics back to raw
materials is called depolymerization. Chemical recycling breaks the polymer down
into its constituent monomers, which are reused in refineries and in petrochemical and
chemical processes. This process is highly energy- and cost-intensive and requires very large
quantities of used plastic to be economically viable. Most plastic waste
consists not only of polyurethane but also of fiber-reinforced materials, which cannot be easily
recycled with conventional processes. These reinforced plastics are thermoset and contain a
significant fraction of glass or other fibre and heavy filler materials such as calcium
carbonate. In chemical recycling, polymers are broken down into their base components. One
such method is the DuPont-patented process called “ammonolysis”, in which plastic is melted,
pumped into a reactor and depolymerized at high temperatures and pressures using a catalyst
and ammonia (NR Canada, 1998).
In the case of thermal depolymerization, the waste plastic is exposed to high pressure and
temperature, usually more than 350 °C. Under such conditions, the plastic waste is converted
to distillate, coke and oil, which serve as raw materials to make monomers and
then polymers. Figure 18 shows an overview of the plastic recycling process.
Recycling of any plastic waste has several problems. The process produces solid carbon
and toxic gases. Hydrolysis of pulverized polycarbonate in supercritical reactors produces
Bisphenol A, which is a very toxic compound. Coloring pigments used to color the finished
products are highly toxic compounds. Dioxins are produced when plastics are incinerated.
Phthalates are a group of chemicals that are hormone disrupters; plastic toys made of
PVC are often softened with phthalates, and burning them produces toxic fumes. Recycled
plastic allows only a single reuse, unlike paper or glass materials.
Various hazardous additives are added to the polymer to achieve the desired material
quality. These additives include colorants, stabilizers and plasticizers containing toxic
components such as lead and cadmium. Studies indicate that plastics contribute 28 percent of
all cadmium in municipal solid waste and about 2 percent of all lead (TSPE, 1997). Huge
amounts of natural gas and other fossil fuels are used as the energy source for
depolymerization by ammonolysis.

[Figure 18 shows the flowchart: natural gas/oil → monomers → polymer resins → fabricator →
consumer products → collection/sorting, feeding back through the mechanical method
(pellet/flake), chemical depolymerization to monomers, or thermal depolymerization to raw
materials.]
Figure 18. Schematic of plastic recycling process.

Table 3. Plastic oxidation experimental data

Time (minutes)     0     0.5    1.5    3.45   3.55   4.2
Temperature (°C)   25    102    183    122    89     30

Today’s industrial development has created such a crisis that life without plastic is
difficult to imagine. Approximately four million metric tons of plastics are produced from
crude oil every day (Islam, 2005). Today, plastic production itself consumes 8% of the
world’s total oil production, although Maske (2001) reported 4% of total oil production (Website
2). Burning plastics produces more than 400 toxic fumes, 80 of which are known
carcinogens. Though there is much talk of recycling, only about 7% of the total plastic is
recycled today, and the rest is either disposed of in the environment or susceptible to
oxidation. Plastic products are difficult to degrade. Table 4 shows the decomposition rates for
various materials. Some plastic products such as Styrofoam never degrade and emit toxic
compounds all the time. Some materials, such as glass bottles, take 500 years to decompose
completely.
An experiment was carried out to determine the oxidation rate by burning plastic under
normal conditions. It took 3 minutes and 45 seconds to oxidize 2 g of plastic. Table 3
summarizes the lab data for plastic oxidation.
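The time-temperature record in Table 3 can be summarized numerically. The short script below is an illustrative sketch (the data values are simply those of Table 3): it locates the peak of the run and estimates the average heating and cooling rates by finite differences.

```python
# Time-temperature record of the plastic oxidation experiment (Table 3).
times = [0.0, 0.5, 1.5, 3.45, 3.55, 4.2]  # minutes
temps = [25, 102, 183, 122, 89, 30]        # degrees Celsius

# Locate the peak of the run.
peak_idx = temps.index(max(temps))
peak_time, peak_temp = times[peak_idx], temps[peak_idx]
print(f"Peak: {peak_temp} C at {peak_time} min")  # Peak: 183 C at 1.5 min

# Average heating rate up to the peak and cooling rate after it (C/min).
heating_rate = (temps[peak_idx] - temps[0]) / (times[peak_idx] - times[0])
cooling_rate = (temps[-1] - temps[peak_idx]) / (times[-1] - times[peak_idx])
print(f"Mean heating rate: {heating_rate:.1f} C/min")  # 105.3
print(f"Mean cooling rate: {cooling_rate:.1f} C/min")  # -56.7
```

The run peaks at 183 °C after 1.5 minutes and returns to near room temperature by 4.2 minutes, consistent with the 3 minutes 45 seconds quoted for complete oxidation of the 2 g sample.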


Figure 19. Plastic oxidation (time in minutes vs. temperature in Celsius).

UNSUSTAINABLE TECHNOLOGIES
At present, it is hard to find any technology that brings long-term benefits to human
beings. Plastic technology is one example of an unsustainable technology, and in this study we
consider plastic as a case study. It is reported that millions of tons of plastic products are
produced daily. About 500 billion to 1 trillion plastic bags are used worldwide
every year (source: Vincent Cobb, founder of reuseablebags.com). The first plastic sandwich
bags were introduced in 1957. Department stores started using plastic bags in the late 1970s
and supermarket chains introduced the bags in the early 1980s.
Natural plastics have been in use for thousands of years, dating back to the time of the
Pharaohs and the old Chinese civilization. Natural resins, animal shells, horns and other
products were more flexible than cotton and more rigid than stone and have been in use for
household products, from toys and combs to plastic wraps and drum diaphragms. Until some
50 years ago, natural plastics were being used for making buttons, small cases, knobs,
phonograph records, mirror frames, and many coating applications worldwide. There was no
evidence that these materials posed any environmental threat. The only problem with natural
plastics, it seemed, was they could not be mass-produced or at least mass production of these
natural plastics was not known to humankind. In order to find more efficient ways to produce
plastics and rubbers, scientists began trying to produce these materials in the laboratory. Ever
since the American inventor Charles Goodyear's accidental discovery in 1839 that the
properties of natural rubber can be altered with the addition of inorganic additives, the culture of
adding unnatural materials to manufacture plastics began. During this development, the focus
was on ensuring that the final products had homogeneity, consistency, and durability in
macroscopic features, without regard to the actual process of reaching this status. What has
happened since this phase of mass production began can be characterized as the plastic
revolution.

Table 4. Decomposition rates for various materials

Plastic decomposition rates


Paper 2-4 weeks
Leaves 1-3 months
Orange peels 6 months
Milk carton 5 years
Plastic bags 10-20 years
Plastic container 50-80 years
Aluminum can 80 years
Tin can 100 years
Plastic soda bottle 450 years
Glass bottle 500 years
Styrofoam Never
Source: Penn State University (http://www.solcomhouse.com/recycling.html).

Table 5. Characteristics comparison of polyurethane and wool

Polyurethane | Wool
Artificial fiber | Natural fiber
Non-biological polymer composed of urethane monomers | Alpha-protein-based biological polymer
Simple (same segments and same monomers) | Complex (different segments and different monomers)
Homogeneous | Heterogeneous
Photooxidation releases toxic compounds | Natural; releases no toxic gases
Non-biodegradable | Biodegradable
Non-adjustable and non-adaptable | Adjustable (flexible; it can change itself under different conditions)
Incomplete lifecycle; does not regenerate | Complete lifecycle with regeneration
Creates environmental problems | Creates no environmental problems

Today, some 90 million barrels of crude oil are produced every day in order to sustain our
lifestyle. Crude oil is nothing but plants and other living matter, processed over millions of
years. The original ingredients of crude oil are not harmful to living objects, and it is not
likely that the older form of the same would be harmful, even if it contains trace elements
that are individually toxic. It is true that crude oil is easily decomposed by common bacteria,
at a rate comparable to the degradation of biological waste (Livingston and Islam, 1999).
Even when some toxic chemicals are added to fractionated crude oil, for instance motor oil,
the degradation rate is found to be rather high (Chaalal et al., 2005). As long as bacteria are
present in abundance, it seems, any liquid will be degraded. The problem starts when the
crude oil components are either turned into solid residues or burned to generate gaseous
products.
During this first phase of transformation, thermal cracking prevails, in which significant
amounts of solid residue are produced. Much of this solid residue is used for producing tar
and related products. Some of this residue is reinforced with metals to produce long-chain
molecules in the name of soft plastic and hard plastic. This is the phase that becomes most
harmful for the environment over the long term: suddenly, and easily, crude oil components
are turned into materials that will last practically forever. The feature most responsible for
plastics’ broad popularity and ubiquity is also responsible for the most damaging long-term
implications. We currently produce more than four million metric tons of plastic every day
from the 90 million barrels of crude oil produced. More than 30% of this plastic is used by the
packaging industry (Market development Plan, 1996). In 2003, 57% of the beach waste was
identified to be from plastic materials (Islam and Zatzman, 2005).
It is reported that in the United Kingdom alone, three million tons of plastic are disposed of
every year (Waste Online, 2005). Even though talk of recycling abounds, only 7% of
the plastics produced are recycled; the rest are either disposed of in the environment or
susceptible to oxidation (low-temperature oxidation, LTO, at the very least). Figure 20a
shows used plastics in a collecting center, which will later be processed for recycling;
Figure 20b shows the same plastics packed for delivery to a recycling factory.
Current daily production of plastics (from hydrocarbons) is greater than the consumption
of carbohydrates by the entire human population (Islam, 2005a). Our lifestyle is awash in
plastics. Households that boast ‘wall to wall carpets’ are in fact covered with plastic. The vast
majority of shoe soles are plastic. Most clothing is plastic. Television sets, fridges, cars,
paints, and computer chassis – practically everything that ‘modern’ civilization has to offer is
plastic. Cookware boasting a non-stick liner is non-stick because of its plastic coating. The
coating on hardwood is plastic. The material that makes virgin wool manageable is plastic.
The material of medicinal capsule coatings is plastic. The list goes on.
Recently it was disclosed that food products are dipped in ‘edible’ plastic to give them the
appearance of freshness and crispness. This modern age is synonymous with plastic in exactly
the same way that it is synonymous with cancer, AIDS, and other modern diseases.

Toxic Compounds from Plastic

Plastic products and their production processes release numerous types of toxic compounds
(Islam, 2003). Table 2 shows the release of toxic compounds from plastics and their related
effects. More than 70,000 synthetic chemicals and metals are currently in commercial use in
the U.S. The toxicity of most of these is unknown or incompletely studied. In humans,
exposure to some may cause mutation, cancer, reproductive and developmental disorders,
adverse neurological and immunological effects, or other injury. Reproductive and
developmental effects are of concern because of important consequences for couples
attempting to conceive and because exposure to certain substances during critical periods of
fetal or infant development may have lifelong and even intergenerational effects. The industry
responsible for creating raw plastic materials is by far the biggest user of listed chemicals,
reportedly using nearly 270 million pounds in 1993 alone. Plastic materials and resins are the
top industrial users of these chemicals.

Figure 20a. Waste plastic in a collecting center (Courtesy: Mann, 2005).

Figure 20b. Collected plastics are packed for recycling (Courtesy: Mann, 2005).

The biggest problem with plastics, like that of nuclear waste from atomic power plants, is
the absence of any environmentally safe method of waste disposal. If disposed of out-of-
doors, the respiratory system in any ambient organic life form is threatened. If incinerated,
toxic fumes almost as bad as cigarettes are released. Typically, plastic materials will produce
some 400 toxic fumes, including 80 known carcinogens. Moreover, most plastics are
flammable, so accidental burning is always a possibility; most importantly, they are always
emitting toxins due to low-temperature oxidation (Islam, 2005a and 2005b).

Environmental Impact Issues

Today, these plastic products are manufactured entirely from petroleum products, which
depend on the supply of a non-renewable resource. There are many different types of
environmental impacts from the products. For example, plastics are generally produced from
fossil fuels, which are gradually becoming depleted. The production process itself involves
energy consumption and further resource depletion. During production, emissions may occur
in water, air or soil. Emissions of concern include heavy metals, chlorofluorocarbons,
polycyclic aromatic hydrocarbons, volatile organic compounds, sulfur oxides and dust. These
emissions have effects, such as ozone depletion, carcinogenicity, smog, acid rain, etc. Thus
the production of plastic materials can have adverse effects on ecosystems, human health and
the physical environment.
Overall, the U.S. plastics and related industries employed about 2.2 million U.S. workers
and contributed nearly $400 million to the economy in 2002, according to The Society of the
Plastics Industry (Lowy, 2004).
The main issue with plastic products is the air emission of monomers and volatile solvents.
These are released from the industrial production process as well as during use of the
products. When a plastic is burnt it is oxidized, releasing many highly toxic compounds.
Modern household uses of plastic continuously release toxic compounds through slower
oxidation or photo-oxidation.
Wastewater bearing solvent residues from separation processes and from wet scrubbers
enters the food chain. The residual monomer in the product and small molecules (plasticizers,
stabilizers) are released slowly into the environment, for example by leaching into water.
Islam (2005a) reported the potential impacts of plastics even when they are simply left inside
the household. The conventional theory appears to suggest that nothing consequential happens
because they are all so durable. In support of this conclusion, the absence of any detected
leaching into the environment from these plastics on a daily basis is ritually cited.
This unwarranted assumption that “if we cannot see (detect) it, it does not exist” in fact
represents the starting point of the real problem. In fact, some portion of the plastic is being
released continuously into the atmosphere at a steady rate, be it the plastic in the household
carpet, the computer chassis, or the pacifier that a baby is constantly sucking. The current
unavailability of tools capable of detecting and/or analyzing emissions on this scale can
hardly be asserted or assumed to prove the harmlessness of these emissions. Human beings in
particular constantly renew their body materials, and plastic contains components in trace
quantities small enough to “fool” the living organism in the process of replacing something
essential.
Each such defective replacement is likely to induce some long-term damage. For
instance, hydrocarbon molecules can be taken up as a replacement for carbohydrates (indeed
fatal when it comes to lung diaphragms), lead can replace zinc, and so on. Recently it
was noticed that plastic baby bottles release dioxins when exposed to microwaves
(Mittelstaedt, 2006b). From this, two essential points may be inferred: plastics always
release some toxins, and microwave exposure enhances molecular breakdown. In other
words, something clearly unsafe following microwave irradiation was in fact already unsafe
prior to radiation exposure. Several recent studies have also linked plastic carpets to higher
occurrences of asthma. It should hardly be surprising that the human oxidation cycle (through
the lungs and billions of cells of the body) can oxidize the plastic molecules that indeed give
rise to the 400 toxic products identified when plastic is burnt. The point frequently
overlooked here is that, in a manner likely analogous to the low-temperature oxidation (LTO)
identified in petroleum combustion (Islam et al., 1991; Islam and Farouq Ali, 2001),
oxidation products are released even when the oxidation takes place at the relatively low
temperature of the human respiration process. Little has been reported to date on the LTO of
plastic materials in this human health context.
Air emissions data for certain key criteria pollutants (ozone precursors) are available from the National Emission Trends (NET) database (1999), and hazardous air pollutant emissions data are available from the National Toxics Inventory (NTI) database (1996 is the most recent year for which final data are available). Major emissions from the plastics sector are shown in Figure 21. The total emissions of volatile organic compounds (VOCs), nitrogen oxides (NOx) and hazardous air pollutants (HAPs) are 40,187, 31,017 and 19,493 tons per year, respectively (Figure 21).
The plastics sector contributes to greenhouse gas emissions from both fuel and non-fuel
sources. Another document in this series, Greenhouse Gas Estimates for Selected Industry
Sectors, provides estimates based on fuel consumption information from the Energy
Information Administration (EIA) of the U. S. Department of Energy, and the Inventory of
U.S. Greenhouse Gas Emissions and Sinks, issued by the EPA. (The EIA document is sector-
specific for energy intensive sectors, but does not provide emission data, while the EPA
document provides emission data, but not explicitly on a sector-specific basis. See the
estimates document for details of how the calculation was carried out).

Figure 21. Total amounts of VOCs, NOx and HAPs released by the plastics industry.

Based on those calculations, the plastics sector in 2000 was responsible for 68.1
teragrams (Tg) (million metric tons) of carbon dioxide equivalent emissions from fuel
consumption, and 9.9 Tg CO2 equivalent emissions (as nitrous oxide) from non-fuel sources
(mostly for the production of adipic acid, a constituent of some forms of nylon), for a total of
88 M. I. Khan, A. B. Chettri and S. Y. Lakhala

78.0 Tg CO2 equivalent. In comparison, the chemical sector as a whole (including plastics) accounted for 531.1 Tg CO2 equivalent. Thus, plastics are a sizeable, though not the dominant, contributor to greenhouse gas emissions within the chemical sector. However, if one considers that CO2 and other greenhouse gases released from plastics fall under the category of 'bad gases' (high isotope number) that cannot be recycled by the ecosystem, the negative impact of plastics becomes very high.
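The arithmetic behind these figures can be checked directly; the following minimal sketch simply recombines the Tg values quoted above:

```python
# Greenhouse-gas figures for the plastics sector quoted above (Tg CO2-equivalent, year 2000).
fuel_emissions = 68.1          # from fuel consumption
non_fuel_emissions = 9.9       # nitrous oxide, mostly from adipic acid production
chemical_sector_total = 531.1  # entire chemical sector, including plastics

plastics_total = fuel_emissions + non_fuel_emissions  # 78.0 Tg
share = plastics_total / chemical_sector_total

print(f"Plastics sector: {plastics_total:.1f} Tg ({share:.1%} of the chemical sector)")
```

The plastics sector thus accounts for roughly 15 per cent of the chemical sector's reported emissions, consistent with the "sizeable but not dominant" characterization above.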
A special risk associated with products of the plastic sector is the leaching of plasticizers
added to polymer formulations to improve material properties. An example is the concern
over the leaching of the plasticizer DEHP from polyvinyl chloride used in medical devices.
This was the subject of an FDA Safety Alert issued in 2002. Other phthalate plasticizers are
found in a wide variety of consumer products, including children's toys and food wrap. Since
phthalates are soluble in fat, PVC wrap used for meat and cheese is of particular concern.
A number of common monomers are known or suspected reproductive toxins or carcinogens. Vinyl chloride, a confirmed carcinogen, is the monomer of PVC. Styrene, a possible carcinogen, is used in polystyrene. Toluene diisocyanate, a possible carcinogen with known acute toxicity, is commonly used in making polyurethane. Acrylonitrile, a probable human carcinogen, and methyl methacrylate, a possible reproductive toxin, are both used in acrylic resins and fibers.

Table 6. Known Adverse Health Effects of Commonly Used Plastics

Polyvinyl chloride
Common uses: Food packaging, plastic wrap, containers for toiletries, cosmetics, crib bumpers, floor tiles, pacifiers, shower curtains, toys, water pipes, garden hoses, auto upholstery, inflatable swimming pools.
Adverse health effects: Can cause cancer, birth defects, genetic changes, chronic bronchitis, ulcers, skin diseases, deafness, vision failure, indigestion, and liver dysfunction.

Phthalates (DEHP, DINP, and others)
Common uses: Softened vinyl products manufactured with phthalates, including vinyl clothing, emulsion paint, footwear, printing inks, non-mouthing toys and children's products, product packaging and food wrap, vinyl flooring, blood bags and tubing, IV containers and components, surgical gloves, breathing tubes, general purpose labware, inhalation masks, and many other medical devices.
Adverse health effects: Endocrine disruption; linked to asthma and to developmental and reproductive effects. Medical waste containing PVC and phthalates is regularly incinerated, causing public health effects from the release of dioxins and mercury, including cancer, birth defects, hormonal changes, declining sperm counts, infertility, endometriosis, and immune system impairment.

Polystyrene
Common uses: Many food containers for meats, fish, cheeses, yogurt; foam and clear clamshell containers; foam and rigid plates; clear bakery containers; packaging "peanuts"; foam packaging; audio cassette housings; CD cases; disposable cutlery; building insulation; flotation devices; ice buckets; wall tile; paints; serving trays; throw-away hot drink cups; toys.
Adverse health effects: Can irritate eyes, nose and throat and can cause dizziness and unconsciousness. Migrates into food and is stored in body fat. Elevated rates of lymphatic and hematopoietic cancers among workers.

Polyethylene
Common uses: Water and soda bottles, carpet fiber, chewing gum, coffee stirrers, drinking glasses, food containers and wrappers, heat-sealed plastic packaging, kitchenware, plastic bags, squeeze bottles, toys.
Adverse health effects: Suspected human carcinogen.

Polyester
Common uses: Bedding, clothing, disposable diapers, food packaging, tampons, upholstery.
Adverse health effects: Can cause eye and respiratory-tract irritation and acute skin rashes.

Urea-formaldehyde
Common uses: Particle board, plywood, building insulation, fabric finishes.
Adverse health effects: Formaldehyde is a suspected carcinogen and has been shown to cause birth defects and genetic changes. Inhaling formaldehyde can cause cough, swelling of the throat, watery eyes, breathing problems, headaches, rashes, and tiredness.

Polyurethane foam
Common uses: Cushions, mattresses, pillows.
Adverse health effects: Bronchitis, coughing, skin and eye problems. Can release toluene diisocyanate, which can produce severe lung problems.

Acrylic
Common uses: Clothing, blankets, carpets made from acrylic fibers, adhesives, contact lenses, dentures, floor waxes, food preparation equipment, disposable diapers, sanitary napkins, paints.
Adverse health effects: Can cause breathing difficulties, vomiting, diarrhea, nausea, weakness, headache and fatigue.

Tetrafluoroethylene
Common uses: Non-stick coating on cookware, clothes irons, ironing board covers, plumbing and tools.
Adverse health effects: Can irritate eyes, nose and throat and can cause breathing difficulties.

Sources: Plastic Task Force (1999).

How Much of it is Known?

The science that developed the technologies of plastic has also made known the dangers of using them. Unfortunately, no research result is allowed to be published when it contradicts the expectations of the corporations that funded the research (Shapiro et al., 2006). Even government-funded research offers little hope, because the topics are either screened by industry sponsors (in jointly funded projects) or the journals find excuses not to publish the results for fear of repercussions, not to mention hawkish reviewers with a vested interest in maintaining the status quo. The discussion of how much is known therefore seems futile: even if these facts are known, who is going to fight the propaganda machine?
The website run by the Ecology Center, as well as many others, has long listed the hazards of plastic. Even government sites list some cautious scientific results, albeit without much elaboration or inference (CDC, 2001). Table 6 lists some of the findings that are readily available on the internet. Note that these results rely only on measurable amounts that migrate from the plastic to the products contained within. None of the reported studies identifies the possibility of contamination due to sustained exposure and/or low-temperature oxidation, and their focus is on short-term implications and safety issues. For each item, the long-term implication is immeasurably more devastating.
Unsustainable plastic products have been promoted for their non-degradability, light weight, flexibility and low cost (Table 7). However, they carry health and environmental costs. Plastic consumes fossil fuel, a non-sustainable, heavily polluting and disappearing commodity. It produces pollution and requires high energy input during manufacturing. It accumulates as non-biodegradable waste in the environment, persisting on land indefinitely as litter and
breaking up into pieces that choke and clog animal digestive systems. It releases dioxins continuously to the atmosphere. Plastic releases toxic polymers and other chemicals that contaminate the food it contains (Table 6), and the released chemicals threaten human health and reproductive systems. Considering all this, the plastic revolution epitomizes what modern technology development is all about: every promise made to justify the making of plastic products has been a false one. As evidenced by the practically daily repetition of mishaps with plastic products, ranging from non-stick cookware (Mittelstaedt, 2006b) to polyurethane tubes for the unborn, plastic products represent the mindset that allowed a short-term (as in 'right now') focus to obsessively dominate technology development.

Table 7. Differences between natural and synthetic materials

Natural Materials vs. Synthetic Materials:
1. Multiple/flexible (different segments, parts, different monomers in polymers; non-symmetric, non-uniform) vs. exactness/simple (same monomers)
2. Non-linear vs. linear
3. Heterogeneous vs. homogeneous/uniform
4. Has its own natural process vs. breaks natural process
5. Recycles (life cycle) vs. disposable/one-time use
6. Infinite vs. finite
7. Non-symmetric vs. symmetric
8. Productive design vs. reproductive design
9. Reversible vs. irreversible
10. Knowledge vs. ignorance or antiknowledge
11. Phenomenal and sustainable vs. aphenomenal and unsustainable
12. Dynamic/chaotic vs. static
13. No boundary vs. based on boundary conditions
14. Enzyme vs. catalyst
15. Self-similarity (fractal nature) is only a perception vs. self-similarity imposed
16. Multifunctional vs. single-functional
17. Reversible vs. irreversible
18. Progressive (dynamic; youth marked by quicker change) vs. non-progressive
19. Unlimited adaptability (infinite adaptability; any condition) vs. zero adaptability (controlled conditions)

CONCLUSION
To achieve sustainability in technological development, a fair, consistent and scientifically acceptable criterion is needed. In this study, the time or temporal scale is considered the prime selection criterion for assuring inherent sustainability in technological development. The proposed model is shown to be feasible and easily applicable for achieving true sustainability. This approach is particularly suitable for assessing sustainable
technology and other management tools, while the straightforward flowchart model being proposed should facilitate sustainability evaluation.
Conventional technologies and management tools have been analyzed against the proposed screening criterion. A detailed pathway study was performed, covering origin, degradation, oxidation and decomposition, in order to demonstrate how a natural product is sustainable and a synthetic product unsustainable. Two similar products, polyurethane fiber and wool fiber, were selected for the sustainability study. It is shown that even when two products have similar macroscopic characteristics, they can sit at opposite ends of the sustainability spectrum: the natural wool fiber was found to be truly sustainable, while the polyurethane is completely unsustainable. A similar pathway analysis might well be applied to determine whether the development of an entire technology is sustainable or unsustainable.

REFERENCES
Adrianto, L., Matsuda, Y. and Sakuma, Y. 2005. Assessing local sustainability of fisheries system: a multi-criteria participatory approach with the case of Yoron Island, Kagoshima prefecture, Japan. Marine Policy, Vol. 29: 9-23.
Agarwal, P.N. and Puvathingal, J.M. 1969. Microbiological deterioration of woollen
materials. Textile Research Journal, Vol. 39, 38.
Appleton, A.F., 2006. Sustainability: A practitioner’s reflection, Technology in Society: in
press.
Brown, L., Postel, S., and Flavin, C., 1991. From Growth To Sustainable Development. R.
Goodland (editor), Environmentally Sustainable Economic Development, Paris:
UNESCO, pp. 93-98.
Canesis, 2004. Wool the natural fiber, Canesis Network Ltd, Private Bag 4749, Christchurch,
New Zealand.
Cariou, R., Jean-Philippe, A., Marchand, P., Berrebi, A., Zalko, D., Andre, F. and Bizec, B.
2005. New multiresidue analytical method dedicated to trace level measurement of
brominated flame retardants in human biological matrices, Journal of Chromatography ,
Vol. 1100, No. 2: 144-152.
CDC (Centers for Disease Control) Report, 2001, National report on human exposure to
environmental chemicals. Centers for Disease Control and Prevention, National Center
for Environmental Health, Division of Laboratory Sciences, Mail Stop F-20, 4770 Buford
Highway, NE, Atlanta, Georgia 30341-3724, NCEH Pub#: 05-0725.
CEIA (Centre d’Estudis d’Informaci Ambiental). 2001. A new model of environmental
communication from Consumption to use of information. European Environment Agency,
Copenhagen. 65 pp.
Chaalal, O., Tango, M., and Islam, M.R. 2005. A New Technique of Solar Bioremediation,
Energy Sources, vol. 27, no. 3.
Chhetri, A.B., and Islam, M.R. 2006. Problems Associated with Conventional Natural Gas
Processing and Some Innovative Solutions, J. of Petroleum Science and Technology,
Accepted.
Costanza, R., J. Cumberland, H. Daly, R. Goodland and R. Norgaard, 1997. An Introduction to Ecological Economics, International Society for Ecological Economics and St. Lucie Press, Florida.
CWRT (Center for Waste Reduction Technologies, AIChE). 2002. Collaborative Projects:
Focus area Sustainable Development: Development of Baseline Metrics; 2002.
<http://www.aiche.org/cwrt/pdf/ BaselineMetrics.pdf> [Accessed: May 28, 2005]
Daly, H. E. 1992. Allocation, distribution, and scale: towards an economics that is efficient, just and sustainable. Ecological Economics, Vol. 6: 185-193.
Daly, H.E., 1999. Ecological economics and the ecology of economics, essay in criticism,
Edward Elgar, U.K.
Darton R., 2002. Sustainable development and energy: predicting the future. In. Proceedings
of the 15th International Conference of Chemical and Process Engineering; 2002.
Dewulf J, Van Langenhove H, Mulder J, van den Berg MMD, van der Kooi HJ, de Swaan
Arons J., 2002. Green Chem, Vol. 2:108–14.
Dewulf, J. and Langenhove, H. V. 2004. Integrated industrial ecology principles into a set of
environmental sustainability indicators for technology assessment. Resource
Conservation and Recycling: in press.
Donnelly, K., Beckett-Furnell, Z., Traeger, S., Okrasinski T. and Holman, S., 2006. Eco-
design implemented through a product-based environmental management system,
Journal of Cleaner Production, Vol xx: 1-11.
Eissen M, Metzger J.O., 2002. Environmental performance metrics for daily use in synthetic
chemistry. Chem. Eur. J. 2002;8:3581–5.
Gibson, R. 1991. Should Environmentalists Pursue Sustainable Development? Probe Post,
22-25.
Gleick, J., 1987, Chaos – making a new science, Penguin Books, NY, 352 pp.
GRI (Global Reporting Initiative). 2002. Sustainability Reporting Guidelines. GRI: Boston.
Hawken, P., 1992. The Ecology of Commerce. New York: Plenum.
IChemE (Institute of Chemical Engineers). 2002. The sustainability metrics, sustainable
development progress metrics recommended for use in the process industries; 2002.
<http://www.getf.org/file/toolmanager/O16F26202.pdf.> [Accessed: June 10,2005].
Islam, M.R. and Farouq Ali, S.M., 1991, Scaling of in-situ combustion experiments, J. Pet.
Sci.Eng., vol. 6, 367-379.
Islam, M.R. and Zatzman, G.M., 2005. Unravelling the mysteries of chaos and change: the
knowledge-based technology development, Proceedings of the First International
Conference on Modeling, Simulation and Applied Optimization, Sharjah, U.A.E.
February 1-3, 2005.
Islam, M.R., 2003. Revolution in Education, Halifax, Canada, EECRG Publication.
Islam, M.R., 2004, Inherently-sustainable energy production schemes, EEC Innovation, vol.
2, no. 3, 38-47.
Islam, M.R., 2005. Unraveling the Mystery of Chaos and Change: The knowledge-based
Technology Development. EEC Innovation. Vol 2, No: 2and3, ISSn 1708-307.
Islam, M.R., 2005a. Knowledge-based technologies for the information age, JICEC05-
Keynote speech, Jordan International Chemical Engineering Conference V, 12-14
September 2005, Amman, Jordan.
Islam, M.R., 2005b, Unraveling the mysteries of chaos and change: knowledge-based
technology development”, EEC Innovation, vol. 2, no. 2 and 3, 45-87.
Islam, M.R., 2005c. Developing knowledge-based technologies for the information age
through virtual universities, E-transformation Conference, April 21, 2005. Turkey.
Islam, M.R., Verma, A., and Ali, S.M.F., 1991. In situ combustion - the essential reaction kinetics, Heavy Crude and Tar Sands - Hydrocarbons for the 21st Century, vol. 4, UNITAR/UNDP.
Jaakkola, J., J., K, Leromnimon, A and Jaakkola, M.S., 2006. Interior Surface Materials and
Asthma in Adults: A Population-based Incident Case-Control Study. American Journal of
Epidemiology, October 15, 2006.
Judes, U., 2000. Towards a culture of sustainability. W. L. Leal Filho (editor),
Communicating Sustainability, Vol. 8pp. 97-121. Berlin: Peter Lang.
Kaiser, G.E., 2005. Personal Communication for the Figure of Secondary Structure of a
Protein or Polypeptide Alpha Helix, Biology Department, D203-F, The Community
College of Baltimore County, Catonsville Campus, Baltimore, MD 21228.
Khan, M.I. and Islam, M.R,. 2005a. Assessing sustainability of technological developments:
an alternative approach of selecting indicators in the case of Offshore operations. ASME
Congress, 2005, Orlando, Florida, Nov 5-11, 2005, Paper no.: IMECE2005-82999.
Khan, M.I. and Islam, M.R., 2005b. Sustainable marine resources management: framework
for environmental sustainability in offshore oil and gas operations. Fifth International
Conference on Ecosystems and Sustainable Development. Cadiz, Spain, May 03-05,
2005.
Khan, M.I. and Islam, M.R., 2005c. Achieving True technological sustainability: pathway
analysis of a sustainable and an unsustainable product, International Congress of
Chemistry and Environment, Indore, India, 24-26 December 2005.
Khan, M.I. and Islam, M.R., 2007. True Sustainability in Technological Development and
Natural Resources Management. Nova Science Publishers, New York: 381 pp. [ISBN: 1-
60021-203-4].
Khan, M.I., 2006. Development and Application of Criteria for True Sustainability. Journal
of Nature Science and Sustainable Technology, Vol. 1, No. 1: 1-37.
Khan, M.I., Chhetri, A.B., and Islam, M.R., 2006a. Analyzing sustainability of community-
based energy development technologies, Energy Sources: in press.
Khan, M.I., Lakhal, Y.S., Satish, M., and Islam, M.R., 2006b. Towards achieving
sustainability: application of green supply chain model in offshore oil and gas operations.
Int. J. Risk Assessment and Management: in press.
Kunisue, T., Masayoshi Muraoka, Masako Ohtake, Agus Sudaryanto, Nguyen Hung Minh,
Daisuke Ueno, Yumi Higaki, Miyuki Ochi, Oyuna Tsydenova, Satoko Kamikawa et al.,
2006. Contamination status of persistent organochlorines in human breast milk from
Japan: Recent levels and temporal trend, Chemosphere: in press.
Labuschange, C. Brent, A.C. and Erck, R.P.G., 2005. Assessing the sustainability
performances of industries. Journal of Cleaner Production, 13: 373-385.
Lakhal, S., S. H'mida and R. Islam, 2005. A Green supply chain for a petroleum company,
Proceedings of 35th International Conference on Computer and Industrial Engineering,
Istanbul, Turkey, June 19-22, 2005, Vol. 2: 1273-1280.
Lange J-P. Sustainable development: efficiency and recycling in chemicals manufacturing.
Green Chem., 2002; 4:546–50.
Leal Filho, W. L. (1999). Sustainability and university life: some European perspectives. W.
Leal Filho (ed.), Sustainability and University Life: Environmental Education,
Communication and Sustainability (pp. 9-11). Berlin: Peter Lang.
Lems S, van derKooi HJ, deSwaan Arons J. 2002. The sustainability of resource utilization.
Green Chem., Vol.4: 308–13.
Leontieff, W. 1973. Structure of the world economy: outline of a simple input-output
formulation, Stockholm: Nobel Memorial Lecture, 11 December, 1973.
Livingston, R.J., and Islam, M.R., 1999. Laboratory modeling, field study and numerical simulation of bioremediation of petroleum contaminants, Energy Sources, vol. 21 (1/2), 113-130.
Lowe EA,Warren JL, Moran SR. Discovering industrial ecology—an executive briefing and
sourcebook. Columbus: Battelle Press; 1997.
Lowy, J. 2004. Plastic left holding the bag as environmental plague. Nations around world
look at a ban. <http://seattlepi.nwsource.com/national/182949_bags21.html>.
Lubchenco, J. A., et al. 1991. The sustainable biosphere initiative: an ecological research
agenda. Ecology 72:371- 412.
Lunder, S. and Sharp, R., 2003. Mother’s milk, record levels of toxic fire retardants found in
American mother’s breast milk. Environmental Working Group, Washington, USA.
Mann, H., 2005. Personal communication, Professor, Civil Engineering Department,
Dalhousie University, Halifax, Canada.
Market Development Plan, 1996. Market status report: postconsumer plastics, business waste
reduction, Integrated Waste Development Board, Public Affairs Office. California.
Marx, K. 1883. Capital: A Critique of Political Economy, Vol. II: The Process of Circulation of Capital. London: Edited by Frederick Engels.
Maske, J. 2001. Life in PLASTIC, it's fantastic, GEMINI, Gemini, NTNU and SINTEF
Research News, N-7465 Trondheim, Norway.
Matsuoka, K., Iriyama, Y., Abe, T., Matsuoka, M., Ogumi, Z., 2005. Electro-oxidation of
methanol and ethylene glycol on platinum in alkaline solution: Poisoning effects and
product analysis. Electrochimica Acta, Vol.51: 1085–1090.
McCarthy, B.J., Greaves, P.H., 1988. Mildew-causes, detection methods and prevention.
Wool Science Review, Vol. 85, 27–48.
MEA (Millennium Ecosystem Assessment), 2005. The millennium ecosystem assessment,
Commissioned by the United Nations, the work is a four-year effort by 1,300 scientists
from 95 countries.
Miller, G. (1994). Living in the Environment: Principles, Connections and Solutions.
California: Wadsworth Publishing.
Mittelstaedt, M., 2006a. Chemical used in water bottles linked to prostate cancer, The Globe
and Mail, Friday, 09 June 2006.
Mittelstaedt, M., 2006b. Toxic Shock series (May 27-June 1), The Globe and Mail, Saturday
27 May 2006.
Molero, C., Lucas, A. D. and Rodrıguez, J. F., 2006. Recovery of polyols from flexible
polyurethane foam by ‘‘split-phase’’ glycolysis: Glycol influence. Polymer Degradation
and Stability. Vol. 91: 221-228.
Narayan, R., 2004. Drivers and rationale for use of biobased materials based on life cycle
assessment (LCA). GPEC 2004 Paper.
Natural Resources Canada, 1998. Alberta Post-Consumer Plastics Recycling Strategy. Recycling. Texas, Society of Petroleum Engineers, 1997.
Nikiforuk, A. (1990). Sustainable Rhetoric. Harrowsmith, 14-16.
OECD, 1998. Towards sustainable development: environmental indicators. Paris: Organization for Economic Cooperation and Development; 132 pp.
OECD, 1993. Organization for Economic Cooperation and development core set of indicators
for environmental performance reviews. A synthesis report by the Group on State of the
Environment,. Paris, 1993.
Plastic Task Force (1999). Adverse health effects of plastics.
<http://www.ecologycenter.org/erc /fact_sheets plastichealtheffects.html#
plastichealthgrid>
Pokharel, G.R., Chhetri, A, B., Devkota, S. and Shrestha, P., 2003. En route to strong
sustainability: can decentralized community owned micro hydro energy systems in Nepal
Realize the Paradigm? A case study of Thampalkot VDC in Sindhupalchowk District in
Nepal. International Conference on Renewable Energy Technology for Rural
Development. Kathmandu, Nepal.
Pokharel, G.R., Chhetri, A.B., Khan, M.I., and Islam, M.R., 2006. Decentralized micro hydro
energy systems in Nepal: en route to sustainable energy development, Energy Sources: in
press.
Rahbur S., Khan, M.M., M. Satish, Ma, F. and Islam, M.R., 2005. Experimental and
numerical studies on natural insulation materials, ASME Congress, 2005, Orlando,
Florida, Nov 5-11, 2005. IMECE2005-82409.
Rees, W. (1989). Sustainable development: myths and realities. Proceedings of the
Conference on Sustainable Development Winnipeg, Manitoba: IISD.
Robinson, J. G. 1993. The limits to caring: sustainable living and the loss of biodiversity.
Conservation Biology 7: 20- 28.
Saito, K., Ogawa, M., Takekuma, M., Ohmura, A., Kawaguchi, M., Ito, R., Inoue, K., Matsuki, Y. and Nakazawa, H., 2005. Systematic analysis and overall toxicity evaluation of dioxins and hexachlorobenzene in human milk, Chemosphere, Vol. 61: 1215-1220.
Shapiro, R., Zatzman, G., Mohiuddin, Y., 2006. Understanding Disinformation Prophet-
Driven Research, Journal Nature Science and Sustainable Technology: in press.
Smith P. How green is my process? A practical guide to green metrics. In: Proceedings of the
Conference Green Chemistry on Sustainable Products and Processes; 2001.
Spangenberg, J.H. and Bonniot, O., 1998. Sustainability indicators-a compass on the road
towards sustainability. Wuppertal Paper No. 81, February 1998. ISSN No. 0949-5266.
Sraffa, P. 1960. Production of Commodities by Means of Commodities. Cambridge,
Cambridge University Press.
Sudaryanto, A., Tatsuya Kunisue, Natsuko Kajiwara, Hisato Iwata, Tussy A. Adibroto,
Phillipus Hartono and Shinsuke Tanabe, 2006. Specific accumulation of organochlorines
in human breast milk from Indonesia: Levels, distribution, accumulation kinetics and
infant health risk, Environmental Pollution, Vol. 139, No. 1: 107-117.
Szostak-Kotowa, J., 2004. Biodeterioration of textiles, International Biodeterioration and
Biodegradation, Vol. 53: 165 – 170.
UNCSD (United Nations Commission on Sustainable Development), 2001. Indicators of
Sustainable Development: Guidelines and Methodologies, United Nations, New York.
Wackernagel, M., and Rees, W. (1996). Our ecological footprint. Gabriola Island: New
Society Publishers.
Waste Online, 2005. Plastic recycling information sheet. <http://www.wasteonline.org.uk/resources/InformationSheets/Plastics.htm> [Accessed: February 20, 2006].
WCED (World Commission on Environment and Development) 1987. Our common future.
World Conference on Environment and Development. Oxford: Oxford University Press;
1987. 400pp.
Website 1: www.oakdenehollins.co.uk/ [Accessed: May 03, 2006]
Website 2: www.wasteonline.org.uk/resources/InformationSheets/Plastics.htm [Accessed:
May 12, 2006]
Welford, R. (1995). Environmental strategy and sustainable development: the corporate
challenge for the 21st Century. London: Routledge.
Winterton N., 2001. Twelve more green chemistry principles. Green Chem. Vol. 3: G73–5.
World Health Organization (WHO). 1994. Brominated diphenyl ethers. Environmental
Health Criteria, Vol.162. International Program on Chemical Safety.
Wright, T. 2002. Definitions and frameworks for environmental sustainability in Higher
education. International Journal of Sustainability In. Higher Education Policy, Vol. 15
(2).
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 97-104 © 2008 Nova Science Publishers, Inc.

Chapter 3

A NUMERICAL SOLUTION OF REACTION-DIFFUSION


BRUSSELATOR SYSTEM BY A.D.M.

J. Biazar∗ and Z. Ayati


Department of Mathematics, Faculty of Science, University of Guilan,
P.O. Box 1914, Rasht, Iran

ABSTRACT
The Adomian decomposition method has been applied to solve many functional equations. In this work, the Adomian decomposition method is applied to the numerical solution of a class of two-dimensional initial or boundary value problems described by a nonlinear system of partial differential equations. The system, known as the reaction-diffusion Brusselator, arises in the modeling of certain diffusion processes. Numerical results are presented for some specific problems.

Keywords: Decomposition method, reaction-diffusion Brusselator, system of partial differential equations.

INTRODUCTION
The reaction-diffusion Brusselator provides a useful model for studying cooperative processes in chemical kinetics, such as the trimolecular reaction steps that arise in the formation of ozone by atomic oxygen via a triple collision. The system also governs enzymatic reactions, as well as multiple couplings between certain modes in plasma and laser physics. The reaction-diffusion Brusselator system has the following general form:

* Corresponding author. E-mail addresses: biazar@giulan.ac.ir, Jbiazar@dal.ca.
\[
\begin{cases}
\dfrac{\partial u}{\partial t} = B + u^2 v - (A+1)u + \alpha\left(\dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2}\right), \\[2mm]
\dfrac{\partial v}{\partial t} = A u - u^2 v + \alpha\left(\dfrac{\partial^2 v}{\partial x^2} + \dfrac{\partial^2 v}{\partial y^2}\right),
\end{cases} \tag{1}
\]

where \( \alpha \), \( A \) and \( B \) are constants (Adomian, 1994).


Let us impose the following initial conditions:

\[
u(x, y, 0) = f(x, y), \qquad v(x, y, 0) = g(x, y). \tag{2}
\]

To illustrate the method, some examples are presented.

ADOMIAN DECOMPOSITION METHOD APPLIED TO SYSTEM (1)


Let us consider system (1) in operator form:

\[
\begin{cases}
L_t u = B + u^2 v - (A+1)u + \alpha (L_{xx} u + L_{yy} u), \\
L_t v = A u - u^2 v + \alpha (L_{xx} v + L_{yy} v),
\end{cases} \tag{3}
\]

where

\[
L_t = \frac{\partial}{\partial t}, \qquad L_{xx} = \frac{\partial^2}{\partial x^2}, \qquad L_{yy} = \frac{\partial^2}{\partial y^2}.
\]
By applying the inverse operator \( L_t^{-1} = \int_0^t (\cdot)\, dt \) to both sides of (3), we have

\[
\begin{cases}
u(x, y, t) = u(x, y, 0) + Bt + \displaystyle\int_0^t \left( u^2 v - (A+1)u + \alpha (L_{xx} u + L_{yy} u) \right) dt, \\[2mm]
v(x, y, t) = v(x, y, 0) + \displaystyle\int_0^t \left( A u - u^2 v + \alpha (L_{xx} v + L_{yy} v) \right) dt.
\end{cases} \tag{4}
\]

Considering the initial conditions, we derive

\[
\begin{cases}
u(x, y, t) = f(x, y) + Bt + \displaystyle\int_0^t \left( u^2 v - (A+1)u + \alpha (L_{xx} u + L_{yy} u) \right) dt, \\[2mm]
v(x, y, t) = g(x, y) + \displaystyle\int_0^t \left( A u - u^2 v + \alpha (L_{xx} v + L_{yy} v) \right) dt.
\end{cases} \tag{5}
\]
Eq. (5) is the canonical form of Eq. (1). The ADM consists of representing the functions \( u(x, y, t) \) and \( v(x, y, t) \) as the sums of series, say

\[
u = \sum_{n=0}^{\infty} u_n, \qquad v = \sum_{n=0}^{\infty} v_n, \tag{6}
\]

and the nonlinear term \( u^2 v \) is represented as

\[
u^2 v = \sum_{n=0}^{\infty} A_n(u_0, u_1, \ldots, u_n, v_0, v_1, \ldots, v_n), \tag{7}
\]

where the \( A_n \) are called Adomian polynomials and must be determined (Biazar, Babolian, Nouri and Islam, 2002). Substituting (6) and (7) into (5) leads to:

\[
\begin{cases}
\displaystyle \sum_{n=0}^{\infty} u_n = f(x, y) + Bt + \sum_{n=0}^{\infty} \int_0^t A_n \, dt - (A+1) \sum_{n=0}^{\infty} \int_0^t u_n \, dt + \alpha \sum_{n=0}^{\infty} \int_0^t (L_{xx} u_n + L_{yy} u_n)\, dt, \\[3mm]
\displaystyle \sum_{n=0}^{\infty} v_n = g(x, y) + A \sum_{n=0}^{\infty} \int_0^t u_n \, dt - \sum_{n=0}^{\infty} \int_0^t A_n \, dt + \alpha \sum_{n=0}^{\infty} \int_0^t (L_{xx} v_n + L_{yy} v_n)\, dt.
\end{cases} \tag{8}
\]

Therefore, from (8) the following recursive procedure can be defined:

\[
u_0 = f(x, y) + Bt, \qquad v_0 = g(x, y),
\]
\[
u_{n+1} = \int_0^t A_n \, dt - (A+1) \int_0^t u_n \, dt + \alpha \int_0^t (L_{xx} u_n + L_{yy} u_n)\, dt, \qquad n = 0, 1, 2, \ldots
\]
\[
v_{n+1} = A \int_0^t u_n \, dt - \int_0^t A_n \, dt + \alpha \int_0^t (L_{xx} v_n + L_{yy} v_n)\, dt, \qquad n = 0, 1, 2, \ldots \tag{9}
\]

Using a Maple program written by the second author, based on the definition of the Adomian polynomials, the first few Adomian polynomials for the nonlinear term \( u^2 v \) are derived as follows:

\[
\begin{aligned}
A_0 &= u_0^2 v_0, \\
A_1 &= 2 u_0 v_0 u_1 + u_0^2 v_1, \\
A_2 &= u_1^2 v_0 + 2 u_0 v_1 u_1 + 2 u_0 v_0 u_2 + u_0^2 v_2, \\
A_3 &= 2 u_1 v_0 u_2 + u_1^2 v_1 + 2 u_0 v_2 u_1 + 2 u_0 v_1 u_2 + 2 u_0 v_0 u_3 + u_0^2 v_3, \\
A_4 &= u_2^2 v_0 + 2 u_1 v_1 u_2 + 2 u_1 v_0 u_3 + u_1^2 v_2 + 2 u_0 v_3 u_1 + 2 u_0 v_2 u_2 + 2 u_0 v_1 u_3 + 2 u_0 v_0 u_4 + u_0^2 v_4.
\end{aligned}
\]
100 J. Biazar and Z. Ayati

We can determine as many of the components $u_0, u_1, u_2, \ldots$ and $v_0, v_1, v_2, \ldots$ as are necessary for the desired accuracy. The partial sums $u^{(n)} = \sum_{i=0}^{n-1} u_i$ and $v^{(n)} = \sum_{i=0}^{n-1} v_i$ can then be used to approximate the solutions.
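For readers without Maple, the Adomian polynomials of $u^2 v$ listed above can be generated in a few lines of Python with sympy, using the standard $\lambda$-parameterization definition $A_n = \frac{1}{n!} \frac{d^n}{d\lambda^n} N\!\left(\sum u_i \lambda^i, \sum v_i \lambda^i\right)\big|_{\lambda=0}$. This is an illustrative sketch, not part of the original computation:

```python
import sympy as sp

# Adomian polynomials for the nonlinear term N(u, v) = u^2 * v
lam = sp.Symbol('lambda')
N = 5
u = sp.symbols(f'u0:{N}')   # u0, u1, ..., u4
v = sp.symbols(f'v0:{N}')   # v0, v1, ..., v4
U = sum(ui * lam**i for i, ui in enumerate(u))
V = sum(vi * lam**i for i, vi in enumerate(v))
F = U**2 * V

# A_n = (1/n!) * d^n F / d lambda^n, evaluated at lambda = 0
A = [sp.expand(sp.diff(F, lam, n).subs(lam, 0) / sp.factorial(n))
     for n in range(N)]
# A[0] = u0**2*v0, A[1] = 2*u0*u1*v0 + u0**2*v1, ..., matching the list above.
```

The same parameterization is what the Maple program in the Appendix implements.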

NUMERICAL RESULTS
Two examples are presented to illustrate the method.

Example 1

Consider the nonlinear system with the following initial conditions:

$$\frac{\partial u}{\partial t} = u^2 v - 2u + \frac{1}{4} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right),$$
$$\frac{\partial v}{\partial t} = u - u^2 v + \frac{1}{4} \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right), \qquad (10)$$

$$u(x, y, 0) = e^{-x-y}, \qquad v(x, y, 0) = e^{x+y}.$$

Applying the inverse operator $L_t^{-1} = \int_0^t (\cdot)\,dt$ to both sides of (10), we get:

$$u(x, y, t) = e^{-x-y} + \int_0^t \left( u^2 v - 2u + \frac{1}{4} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \right) dt,$$
$$v(x, y, t) = e^{x+y} + \int_0^t \left( u - u^2 v + \frac{1}{4} \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right) \right) dt. \qquad (11)$$

Using the scheme given in (9), we have

$$u_0 = e^{-x-y}, \qquad v_0 = e^{x+y},$$
$$u_{n+1} = \int_0^t A_n\,dt - 2 \int_0^t u_n\,dt + \frac{1}{4} \int_0^t \left( \frac{\partial^2 u_n}{\partial x^2} + \frac{\partial^2 u_n}{\partial y^2} \right) dt, \qquad (12)$$
$$v_{n+1} = \int_0^t u_n\,dt - \int_0^t A_n\,dt + \frac{1}{4} \int_0^t \left( \frac{\partial^2 v_n}{\partial x^2} + \frac{\partial^2 v_n}{\partial y^2} \right) dt.$$

The first few terms are

$$u_1 = -\frac{t}{2} e^{-x-y}, \qquad v_1 = \frac{t}{2} e^{x+y},$$
$$u_2 = \frac{t^2}{8} e^{-x-y}, \qquad v_2 = \frac{t^2}{8} e^{x+y},$$
$$u_3 = -\frac{t^3}{48} e^{-x-y}, \qquad v_3 = \frac{t^3}{48} e^{x+y},$$
$$u_4 = \frac{t^4}{384} e^{-x-y}, \qquad v_4 = \frac{t^4}{384} e^{x+y}.$$

Therefore the general terms can be derived as follows:

$$u_n = (-1)^n \frac{t^n}{2^n n!} e^{-x-y}, \qquad v_n = \frac{t^n}{2^n n!} e^{x+y}.$$

So

$$u = \sum_{n=0}^{\infty} u_n = \sum_{n=0}^{\infty} (-1)^n \frac{t^n}{2^n n!} e^{-x-y} = e^{-x-y} \sum_{n=0}^{\infty} \frac{(-t/2)^n}{n!} = e^{-x-y} e^{-t/2} = e^{-x-y-\frac{t}{2}},$$
$$v = \sum_{n=0}^{\infty} v_n = \sum_{n=0}^{\infty} \frac{t^n}{2^n n!} e^{x+y} = e^{x+y} \sum_{n=0}^{\infty} \frac{(t/2)^n}{n!} = e^{x+y} e^{t/2} = e^{x+y+\frac{t}{2}},$$

which are the exact solutions.
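The recursion (12) and the closed forms above can also be checked symbolically. The following sketch, in Python/sympy rather than the chapter's Maple, regenerates $u_1, \ldots, u_3$ and $v_1, \ldots, v_3$ for Example 1:

```python
import sympy as sp

x, y, t, s, lam = sp.symbols('x y t s lambda')
alpha = sp.Rational(1, 4)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)   # Laplacian in x, y

def adomian(us, vs, n):
    """n-th Adomian polynomial of the nonlinear term u^2 * v."""
    U = sum(f * lam**i for i, f in enumerate(us))
    V = sum(f * lam**i for i, f in enumerate(vs))
    return sp.diff(U**2 * V, lam, n).subs(lam, 0) / sp.factorial(n)

u = [sp.exp(-x - y)]   # u0 = f(x, y)
v = [sp.exp(x + y)]    # v0 = g(x, y)

for n in range(3):
    An = adomian(u, v, n)
    du = An - 2*u[n] + alpha*lap(u[n])   # integrand of recursion (12) for u
    dv = u[n] - An + alpha*lap(v[n])     # integrand of recursion (12) for v
    # integrate from 0 to t, using s as the dummy variable
    u.append(sp.integrate(du.subs(t, s), (s, 0, t)))
    v.append(sp.integrate(dv.subs(t, s), (s, 0, t)))
# u[1] = -(t/2) e^{-x-y}, u[2] = (t^2/8) e^{-x-y}, u[3] = -(t^3/48) e^{-x-y}
```

Each generated component agrees with the pattern $u_n = (-1)^n t^n/(2^n n!)\, e^{-x-y}$, $v_n = t^n/(2^n n!)\, e^{x+y}$ derived above.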

Example 2

Consider the following system with initial values:

$$\frac{\partial u}{\partial t} = 1 + u^2 v - \frac{3}{2} u + \frac{1}{500} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right),$$
$$\frac{\partial v}{\partial t} = \frac{1}{2} u - u^2 v + \frac{1}{500} \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right),$$

$$u(x, y, 0) = x^2, \qquad v(x, y, 0) = y^2.$$

Applying the inverse operator $L_t^{-1} = \int_0^t (\cdot)\,dt$, we get:

$$u(x, y, t) = x^2 + t + \int_0^t \left( u^2 v - \frac{3}{2} u + \frac{1}{500} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \right) dt,$$
$$v(x, y, t) = y^2 + \int_0^t \left( \frac{1}{2} u - u^2 v + \frac{1}{500} \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right) \right) dt.$$

Computing the Adomian polynomials in the same way as in Example 1, the Adomian method leads to the following scheme:

$$u_0 = x^2 + t, \qquad v_0 = y^2,$$
$$u_{n+1} = \int_0^t A_n\,dt - \frac{3}{2} \int_0^t u_n\,dt + \frac{1}{500} \int_0^t \left( \frac{\partial^2 u_n}{\partial x^2} + \frac{\partial^2 u_n}{\partial y^2} \right) dt, \qquad n = 0, 1, 2, \ldots$$
$$v_{n+1} = \frac{1}{2} \int_0^t u_n\,dt - \int_0^t A_n\,dt + \frac{1}{500} \int_0^t \left( \frac{\partial^2 v_n}{\partial x^2} + \frac{\partial^2 v_n}{\partial y^2} \right) dt, \qquad n = 0, 1, 2, \ldots$$

and we will have

$$u_1 = \frac{1}{3} y^2 t^3 - \frac{3}{4} t^2 + t^2 x^2 y^2 + x^4 y^2 t + \frac{1}{250} t - \frac{3}{2} x^2 t,$$
$$v_1 = -\frac{1}{3} y^2 t^3 - t^2 x^2 y^2 + \frac{1}{4} t^2 + \frac{1}{2} x^2 t + \frac{1}{250} t - x^4 y^2 t,$$
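As a quick independent check of $u_1$, the first step of the scheme can be integrated directly; a minimal Python/sympy sketch (not part of the original computation):

```python
import sympy as sp

x, y, t, s = sp.symbols('x y t s')
u0, v0 = x**2 + t, y**2
A0 = u0**2 * v0   # the zeroth Adomian polynomial of u^2*v is simply u0^2*v0

# first step of the scheme:
# u1 = int A0 dt - (3/2) int u0 dt + (1/500) int (u0_xx + u0_yy) dt
integrand = (A0 - sp.Rational(3, 2)*u0
             + sp.Rational(1, 500)*(sp.diff(u0, x, 2) + sp.diff(u0, y, 2)))
u1 = sp.expand(sp.integrate(integrand.subs(t, s), (s, 0, t)))
# u1 = x^4 y^2 t + x^2 y^2 t^2 + y^2 t^3/3 - (3/2) x^2 t - (3/4) t^2 + t/250
```

The result matches the expression for $u_1$ displayed above.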

$$u_2 = -\frac{1}{18} y^2 t^6 + \frac{2}{15} t^5 y^4 - \frac{1}{3} t^5 x^2 y^2 + \frac{1}{20} t^5 + \frac{2}{3} t^4 x^2 y^4 - \frac{1}{2} t^4 y^2 - \frac{5}{6} t^4 x^4 y^2 + \frac{1}{4} t^4 x^2 + \frac{1}{750} t^4 + \frac{1}{250} y^2 t^3 - 2 t^3 x^2 y^2 + \frac{5}{12} t^3 x^4 - t^3 x^6 y^2 + \frac{1}{250} t^3 x^2 + \frac{3}{8} t^3 + \frac{4}{3} t^3 x^4 y^4 + t^2 x^6 y^4 + \frac{2}{125} t^2 x^2 y^2 - \frac{9}{4} t^2 x^4 y^2 + \frac{9}{8} t^2 x^2 + \frac{1}{250} t^2 x^4 + \frac{1}{4} t^2 x^6 - \frac{1}{2} t^2 x^8 y^2 - \frac{3}{500} t^2,$$

$$v_2 = \frac{1}{18} y^2 t^6 - \frac{2}{15} t^5 y^4 + \frac{1}{3} t^5 x^2 y^2 - \frac{1}{20} t^5 + \frac{5}{12} t^4 y^2 - \frac{1}{750} t^4 - \frac{2}{3} t^4 x^2 y^4 + \frac{5}{6} t^4 x^4 y^2 - \frac{1}{4} t^4 x^2 - \frac{1}{8} t^3 + \frac{5}{3} t^3 x^2 y^2 - \frac{4}{3} t^3 x^4 y^4 - \frac{1}{250} y^2 t^3 - \frac{1}{250} t^3 x^2 - \frac{5}{12} t^3 x^4 + t^3 x^6 y^2 + \frac{1}{500} t^2 - \frac{3}{8} t^2 x^2 - t^2 x^6 y^4 - \frac{2}{125} t^2 x^2 y^2 + \frac{7}{4} t^2 x^4 y^2 - \frac{1}{4} t^2 x^6 - \frac{1}{250} t^2 x^4 + \frac{1}{2} t^2 x^8 y^2.$$

The three-term approximations to the solutions are as follows:

$$u^{(3)} = \frac{3}{8} t^3 - \frac{189}{250} t^2 + \frac{253}{750} y^2 t^3 - \frac{3}{2} x^2 t + x^2 + \frac{1}{250} t^3 x^2 + \frac{1}{250} t^2 x^4 + \frac{1}{4} t^2 x^6 + \frac{5}{12} t^3 x^4 - t^3 x^6 y^2 - 2 t^3 x^2 y^2 + \frac{1}{20} t^5 + \frac{1}{750} t^4 - \frac{1}{18} y^2 t^6 + \frac{2}{15} t^5 y^4 - \frac{1}{2} t^4 y^2 + \frac{1}{4} t^4 x^2 + \frac{251}{250} t + x^4 y^2 t + \frac{127}{125} t^2 x^2 y^2 + \frac{9}{8} t^2 x^2 - \frac{1}{2} t^2 x^8 y^2 - \frac{9}{4} t^2 x^4 y^2 + t^2 x^6 y^4 + \frac{4}{3} t^3 x^4 y^4 - \frac{5}{6} t^4 x^4 y^2 + \frac{2}{3} t^4 x^2 y^4 - \frac{1}{3} t^5 x^2 y^2,$$

$$v^{(3)} = y^2 - \frac{3}{8} t^2 x^2 - \frac{253}{750} y^2 t^3 + \frac{1}{2} x^2 t + \frac{1}{18} y^2 t^6 - \frac{2}{15} t^5 y^4 + \frac{5}{12} t^4 y^2 - \frac{1}{4} t^4 x^2 - \frac{5}{12} t^3 x^4 - \frac{1}{250} t^3 x^2 - \frac{1}{250} t^2 x^4 - \frac{1}{4} t^2 x^6 + \frac{1}{250} t - \frac{127}{125} t^2 x^2 y^2 + \frac{1}{3} t^5 x^2 y^2 - \frac{2}{3} t^4 x^2 y^4 + \frac{5}{3} t^3 x^2 y^2 + t^3 x^6 y^2 - \frac{1}{20} t^5 - t^2 x^6 y^4 + \frac{7}{4} t^2 x^4 y^2 + \frac{1}{2} t^2 x^8 y^2 - \frac{4}{3} t^3 x^4 y^4 - x^4 y^2 t - \frac{1}{8} t^3 + \frac{63}{250} t^2 + \frac{5}{6} t^4 x^4 y^2 - \frac{1}{750} t^4.$$

CONCLUSION
The main goal of this work has been to derive an approximation for the solutions of the reaction-diffusion Brusselator system. We have achieved this goal by applying the Adomian decomposition method. The advantages of this approach are twofold. First, the decomposition method reduces the computational work. Second, in comparison with existing numerical techniques, the decomposition method is an improvement with regard to its accuracy and rapid convergence. The computations were done using Maple 9.

APPENDIX
Let us consider G(u) as the nonlinear part of the equation; then by using the following program in Maple we can compute the Adomian polynomials (k is the required order):

> restart:
> with(student):
> f := G(u):
> f := unapply(f, u):
> n := k;
> u[lambda] := sum(u[i]*lambda^i, i = 0..n);
> G[lambda] := subs(u = u[lambda], f(u)):
> G := unapply(G[lambda], lambda);
> for i from 0 to n do
>   A[i] := ((D@@i)(G))(0)/i!;
> od;

REFERENCES
G. Adomian, The diffusion-Brusselator equation, Computers and Mathematics with Applications 29 (1995) 1-3.
G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, 1994.
J. Biazar, E. Babolian, A. Nouri, R. Islam, An alternate algorithm for computing Adomian decomposition method in special cases, Applied Mathematics and Computation 138 (2-3) (2003) 523-529.
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 105-130 © 2008 Nova Science Publishers, Inc.

Chapter 4

ZERO-WASTE LIVING WITH INHERENTLY SUSTAINABLE TECHNOLOGIES

M. M. Khan∗1, D. Prior2, and M. R. Islam1


1
Civil and Resource Engineering Dept., Dalhousie University,
Halifax, Nova Scotia, Canada
2
Veridety Environmental, Halifax, Nova Scotia, Canada

ABSTRACT
The modern age is synonymous with wasting habits, whereas nature does not produce any waste. The fundamental notion that mass cannot be created or destroyed dictates that only transformation of materials from one phase to another takes place. However, the mass balance alone does not guarantee zero waste. Nature is perfect, which means it operates at 100% efficiency. This postulate necessitates that any product that is the outcome of a natural process must be entirely usable by some other process, which in turn would result in products that are suitable as an input to the process. A perfect system is 100% recyclable and therefore zero-waste. Such a process will remain zero-waste as long as each component of the overall process also operates on the principle of zero waste.
That is why only the imitation of nature can lead us towards a truly sustainable lifestyle. This paper is aimed at emulating zero-waste living. In a desired zero-waste scheme, the products and byproducts of one process are used for another process. The process involves a number of novel designs, including biomass energy, solar energy (refrigeration and other applications) and a desalination process. In this paper, an integrated loop system is investigated for a hundred-apartment building with an approximate population of three hundred people. The system includes an anaerobic digester, which is the key unit of the integrated process. Such a process produces a sufficient amount of methane gas, which can be used as fuel for heating and cooling as well as solar-absorption based refrigeration. The purified gas can also find suitable applications in fuel cells, which convert chemical energy to electricity. The paper also includes mass balance equations, which estimate methane production in the biogas and ammonia production in the effluent. Calculations show a daily methane production of

Corresponding author: Email: mmkhan@dal.ca

6.435 m3 and ammonia production of 1.09 kg, whereas the stoichiometry of the chemical equations of the desalination plant shows that 1 kg of ammonia and 2.63 kg (approximately 1.3 m3) of CO2 can produce approximately 92 kg of fresh water from sea-water on a 100% conversion basis. The ammonia-enriched effluent, along with CO2 (exhaust) from the digester, can be used for desalination plants. This ammonia can also be used to run absorption refrigeration units. Besides, free solar energy has a wide range of household applications, such as direct heating or cooling through absorption-based refrigeration units. These not only reduce the energy requirements supplied from fossil fuel but also contribute to substantial cost savings, leading to a cleaner and more economical lifestyle.

Keywords: Biogas, desalination, solar energy, sustainability.

INTRODUCTION
Non-renewable energy sources are predominantly used today. Nearly 90% of today’s energy is supplied by oil, gas, and coal (Salameh, 2003). The burning of fossil fuel accounts for more than 100 times greater dependence than the energy generated through renewable sources (solar, wind, biomass and geothermal energy). However, fossil fuels are limited. At present consumption levels, known reserves for coal, oil, gas and nuclear correspond to durations of the order of 230, 45, 63 and 54 years, respectively (Rubbia, 2006). Moreover, much of today’s atmospheric pollution derives from their combustion, which produces many toxic by-products, the most devastating being plastics (Khan et al., 2005). Considering these environmental concerns and limited resources, it is mandatory to depend on clean, domestic and renewable sources of energy. Solar energy, beaming over the planet even as it is squandered, has already been identified as the best source of energy. This source is clean, abundant and free of cost. However, the method of utilizing solar energy differs from one application to another. Most existing processes are energy-inefficient and mass-wasteful. Even when solar energy is utilized, the most common usage is photovoltaic conversion, for which the maximum efficiency can be only 15% (Gupta et al., 2006). In this paper, direct use of solar energy, without intermediate conversion into electricity, is proposed in order to develop both heating and cooling systems that can make modern household design independent of electrical supplies (Khan and Islam, 2007).
In this study, various approaches are advanced that would reposition fossil fuel production in a zero-waste mode. In a desired zero-waste scheme, the products and byproducts of one process are used for another process. Any zero-waste scheme is considered an inherently sustainable process. The sustainability of the processes has been confirmed with the help of the pro-nature technology criteria developed by Khan and Islam (2007).
After the industrial revolution, civilization has actually become synonymous with wasting habits. At present, the most-practiced energy and mass consumption options are possibly the most inefficient that mankind has ever experienced. There is, however, the possibility of expanding production and consumption for our own needs on the basis of a net-zero waste of mass or energy, either at the input or output of any process. Following on this, it becomes feasible to propose approaches to zero-waste (mass) living in an urban setting, including processing and regeneration of solids, liquids and gases. The process is shown in Figure 1.

Figure 1. Zero-waste mass utilization scheme.

The anaerobic bio-digester (Figures 2 and 3) is the principal unit of the proposed model; it is fed regularly with kitchen waste and sewage waste.

Figure 2. Schematic diagram of a bio-digester. Figure 3. Experimental bio-digester in lab.



Even solid waste from other sources, such as a water treatment plant, can be used. The aim of this study is to produce energy and valuable materials from waste. Anaerobic bio-digestion is the mechanism of converting waste into value-added materials such as biogas, ammonia and manure. The products from the bio-digester can be further used in different processes, leading to zero-waste living. In addition, the coupling of solar energy utilization makes the whole concept economically and environmentally attractive.
In this paper, a novel concept of zero-waste living with the least fossil energy utilization is proposed and investigated by integrating different processes. The sources of energy, the utilization of produced materials in other applications, and the utilization of direct solar energy are discussed here:

1. Energy from kitchen waste and sewage.
2. Utilization of produced waste in a desalination plant.
3. Solar Aquatic process to purify desalinated/waste water.
4. Utilization of biogas in a fuel cell.
5. Direct/indirect use of solar energy.

ENERGY FROM KITCHEN WASTE AND SEWAGE

There are two fundamentally distinct methods of composting: aerobic and anaerobic. The anaerobic treatment is an energy-generating process rather than one which demands a regular high input of energy, as in an aerobic biological system (Ince et al., 2001). One significant advantage of anaerobic composting is the generation of methane, which can be stored for use as an energy source. In the last decade, animal waste was the principal raw material for biogas production. Later on, interest diversified to municipal organic waste. Recently, interest has been growing in kitchen waste. Mandal and Mandal (1997) performed a comparative study of biogas production from different waste materials and found kitchen waste to be a promising source of biogas, as shown in Figure 4. Kitchen waste is characterized by a high C/N ratio and high water content (Sharma et al., 2000). Ghanem et al. (2001) identified the requirement of a huge volume of water as the main problem of the anaerobic process. This study suggests the use of sewage water or urinated water as the necessary water source for the process. This utilization increases the organic load and thereby increases the percentage of ammonia in the digester liquid effluent. The use of sewage also facilitates biogas production due to the increased volatile solid load. An estimation of biogas and ammonia production for a hundred-apartment building with an approximate population of three hundred people is shown below. Besides lighting, cooking and heating, the biogas can be utilized for the different applications discussed in this paper; the uses of the liquid effluent (especially ammonia) are also described. Since anaerobic destruction of organic matter is a reduction process, the final solid product, humus, is subjected to some aerobic oxidation. This oxidation is minor, takes place rapidly, and is of no consequence in the utilization of the material as manure. Even odor is not a problem for the anaerobic digester: the odor problem is minimal due to the high water content, and the digester is an airtight closed system, so odor does not escape during the degradation process. So it is found that, if

properly digested, the digestion process gives more value and reduces environmental problems. In the estimation of biogas production and the other effluents, a number of assumptions have been made, taken from different research works. Corresponding references are given in parentheses against each assumption.


Figure 4. Biogas generation capacity of waste materials in the various groups (redrawn from Mandal and Mandal, 1997).

ESTIMATION OF THE BIOGAS AND AMMONIA PRODUCTION

Basis: a 100 two-bedroom apartment building with approximately 300 persons; 100 kg kitchen waste/day; 16 kg feces/day (on a 0.053 kg (dry)/person/day basis).

Assumptions:

Total solids (TS), % = 11-20 (Sharma et al., 2000; Bouallagui et al., 2004)
Volatile solids (VS), % of TS = 75-87 (Zupancic and Ros, 2003; Bouallagui et al., 2004)
Biogas yield, m3/kg VS = 0.25-0.50 (Owens and Chynoweth, 1993; Sadaka et al., 2003)
Methane content, volume % = 60-70 (Bouallagui et al., 2004)
Retention time for mesophilic digestion, days = 30 (Bouallagui et al., 2004; Al-Masri, 2001)

Calculation of Biogas Production Per Day

Total solid content = 100 kg kitchen waste/day × 0.17 + 16 kg feces/day = 33 kg/day.
Volatile solid content = 33 kg TS/day × 0.75 = 24.75 kg/day.
Biogas yield = 24.75 kg VS/day × 0.40 m3/kg VS = 9.9 m3/day.
Methane (assuming 65% in the biogas) = 6.435 m3/day.
Heating value (35.9 MJ/m3, LHV of methane) = 231 MJ/day (Bouallagui et al., 2004).
CO2 production = 9.9 m3/day × 0.35 = 3.465 m3/day.

Ammonia Production

Assume the nitrogen content of the waste is 11% of the dry solids, with 75% conversion and 40% recovery in the liquid effluent.

Nitrogen content in the influent = 33 kg TS/day × 0.11 = 3.63 kg/day.
Conversion to ammonia = 3.63 kg/day × 0.75 = 2.7225 kg/day.
Ammonia in the liquid effluent = 2.7225 kg/day × 0.40 = 1.09 kg/day.

This ammonia is available for the desalination plant and the solar absorption cooling system.

Daily Water Requirement

Assume an 8% slurry as influent (Al-Masri, 2001).

Weight of slurry = 33 kg TS / 0.08 = 412.5 kg.
Wet weight of waste = 33 kg TS / 0.17 = 194.12 kg.
Daily water input = (412.5 − 194.12) kg = 218.38 kg = 0.22 m3.

No fresh water is required for making the slurry. Sewage water can be used directly for making the slurry, as discussed earlier. This is cost-effective and constitutes yet another utilization of a waste stream.
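The estimate above can be collected into a short script. The following Python sketch uses the specific fractions assumed in the text (TS = 17% of kitchen waste, VS = 75% of TS, biogas yield = 0.40 m3/kg VS, 65% methane), which sit within the cited assumption ranges:

```python
# Daily biogas/ammonia/water estimate for a 100-apartment building (~300 people)
KITCHEN_WASTE = 100.0            # kg/day
FECES_DRY = 16.0                 # kg (dry)/day

TS = KITCHEN_WASTE * 0.17 + FECES_DRY      # total solids, kg/day  -> 33
VS = TS * 0.75                             # volatile solids, kg/day
biogas = VS * 0.40                         # m^3/day
methane = biogas * 0.65                    # m^3/day
heat = methane * 35.9                      # MJ/day (LHV of methane)
co2 = biogas * 0.35                        # m^3/day

nitrogen = TS * 0.11                       # kg N/day in the influent
ammonia_effluent = nitrogen * 0.75 * 0.40  # kg/day recovered in liquid effluent

slurry = TS / 0.08                         # kg/day of 8% slurry
wet_waste = TS / 0.17                      # kg/day wet weight of waste
water_needed = slurry - wet_waste          # kg/day, supplied by sewage water
```

Running the script reproduces the figures above: 6.435 m3 of methane, about 231 MJ of heat, 1.09 kg of ammonia, and roughly 218 kg of make-up water per day.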
The principal value-added output of the bio-digester is biogas, which contains about 65% methane, the balance being carbon dioxide. Biogas can directly be used for cooking, heating and all other applications known for methane. The biogas also contains trace amounts of other gases, such as hydrogen sulfide and ammonia, which can be removed by existing removal technology. The economics of the bio-digester improve, and the system approaches zero-waste living, when desalination and fuel cell technologies are integrated with the system.

Utilization of Produced Waste in a Desalination Plant

Even though the Earth is known as the “watery planet”, over 97 percent of the earth’s water is found in the oceans as salt water. As Table 1 shows, the fraction of fresh water available for our variety of purposes is less than 1%.

Table 1. Water sources and distribution in the earth (EPA, 2006, US)

Type of Water   Source                                        Percentage of Total Water
Salt Water      Ocean                                         97.2
Fresh Water     Ice caps/glaciers                             2.38
                Ground water                                  0.397
                Surface water (e.g., lakes, rivers, ponds)    0.022
                Atmosphere                                    0.001

However, the modernization of the world is demanding more water use and is placing tremendous stress on the sources of available fresh water. The available fresh water is being depleted at an alarming rate by agricultural, urban and industrial water requirements (Table 2). These activities also pollute water, which is then not usable without proper treatment. Most waste water treatment plants are chemical in nature and generate a large amount of sludge, which is often toxic and is thus environmentally stressful if disposed of by ocean dumping, landfilling, spreading or incineration. They employ environmentally damaging chemicals to precipitate out solids, phosphorus and chlorine, and they fail to remove metals and synthetic organic compounds. Even after treating water properly, freshwater scarcity is a growing concern in many regions of the world, especially in arid countries. Therefore, exploration of alternative sources has become mandatory. Recently, desalination has become the most popular method of obtaining fresh water. It is found that 66% of the drinking water demand of Riyadh, the capital of Saudi Arabia, is supplied by desalinated seawater, while the balance is produced from groundwater sources (Al-Muntaz and Ibrahim, 1996; Alabdula’ali, 1997).

Table 2. Fresh water utilization (EPA, US)

Area of Fresh Water Utilization               Percentage of Usable Fresh Water
Agriculture                                   42
Production of electricity                     39
Urban and rural homes, offices, and hotels    11
Manufacturing and mining activities           8

Most current desalination practice is expensive and energy-intensive. The currently used desalination technologies are generally classified into thermal and membrane processes (Abu-Arabi and Zurigat, 2005):
112 M. M. Khan, D. Prior, and M. R. Islam

• Thermal Process: Solar Still, Humidification/dehumidification
• Membrane Technology: Reverse Osmosis (RO), Forward Osmosis (FO)

However, the most recent process is known as the chemical approach, which is derived from a recent patent (Rongved, 1997). The processes and sources must be analyzed to scrutinize their reliability, environmental friendliness and sustainability. Solar energy utilization is always encouraged as a renewable and non-polluting source, while the humidification/dehumidification process is energy-intensive. Even though the utilization of nuclear energy is encouraged today, it cannot be appreciated, as nuclear energy is not naturally produced (Chhetri, 2006).
Reverse Osmosis (RO) is an expensive process, for which only 35-50% recovery is obtained, leaving a concentrated brine to be dumped on the sea-shore again. An improvement on Reverse Osmosis is Forward Osmosis (FO). In Forward Osmosis, water transports across a semi-permeable membrane that is impermeable to salt, as in Reverse Osmosis. However, instead of using hydraulic pressure to create the driving force for water transport through the membrane, the FO process utilizes an osmotic pressure gradient. A ‘draw’ solution having a significantly higher osmotic pressure than the saline feed water flows along the permeate side of the membrane, and water naturally transports across the membrane by osmosis. Osmotic driving forces in FO can be significantly greater than hydraulic driving forces in RO, potentially leading to higher water flux rates and recoveries; McCutcheon et al. (2006) observed 75% recovery by FO. After studying a number of draw solutions, McCutcheon et al. (2006) found ammonium bicarbonate (an ammonia and CO2 mixture) to be the best draw solution for its high osmotic efficiency, high solubility and low molecular weight. However, the use of highly pure (99.9%) CO2 and a highly concentrated and purified ammonium solution places a sustainability constraint on this process.
The latest desalination process (Rongved, 1997), known as the chemical approach, proposes the use of pure CO2 and ammonia. This process has drawn attention due to several characteristic features. The stepwise reactions can be explained as follows (Ibrahim and Jibril, 2005):

Primary reactions:

NH3 + CO2 → NH2COOH
NH3 + NH2COOH → NH4+ + NH2COO−

The overall primary reaction is

2NH3 + CO2 → NH2COO− + NH4+

Secondary reactions:

NH2COO− + H2O → NH3 + HCO3−
NH4+ + HCO3− + NaCl → NaHCO3 + NH4Cl

So the final overall reaction is as follows:

NaCl + NH3 + CO2 + H2O → NaHCO3 + NH4Cl

Stoichiometry of the chemical equation:

(23 + 35.5) + (14 + 3) + (12 + 2×16) + (2 + 16) = (23 + 1 + 12 + 3×16) + (14 + 4 + 35.5)
58.5 g NaCl + 17 g NH3 + 44 g CO2 + 18 g H2O = 84 g NaHCO3 + 53.5 g NH4Cl

The inputs and outputs of the desalination process, on both a dry and a wet basis, are shown in Figure 5. Generally, sea water concentration is assumed to be 3.5% (3.5% NaCl and 96.5% water). According to this assumption, 3.5 units of sodium chloride are associated with 96.5 units of water; accordingly, 58.5 units of sodium chloride are associated with 1612.93 units of water. Stoichiometry shows that 18 units of water are consumed during the reaction, so the fresh water production is (1612.93 − 18) = 1594.93 units. In this process, sodium bicarbonate (NaHCO3, which can be calcined to soda ash, Na2CO3), ammonium chloride (NH4Cl) and hydrochloric acid (HCl) are obtained as byproducts. Soda ash is a valuable and saleable product. Hydrochloric acid is an important acid for many industries. Ammonium chloride can be recycled to produce NH3 for reuse in the process.

Figure 5. Mass balance of desalination plant.
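The stoichiometric bookkeeping behind Figure 5 can be verified numerically. The Python sketch below uses the atomic weights from the stoichiometry above; the per-kilogram figures at the end are simply scaled versions of the same balance:

```python
# Mass balance of NaCl + NH3 + CO2 + H2O -> NaHCO3 + NH4Cl,
# with seawater assumed to be 3.5 wt% NaCl, as in the text.
M = {'NaCl': 23 + 35.5, 'NH3': 14 + 3, 'CO2': 12 + 2*16, 'H2O': 2 + 16,
     'NaHCO3': 23 + 1 + 12 + 3*16, 'NH4Cl': 14 + 4 + 35.5}

reactants = M['NaCl'] + M['NH3'] + M['CO2'] + M['H2O']   # 137.5 g
products = M['NaHCO3'] + M['NH4Cl']                       # 137.5 g: balance closes

# Seawater carrying 58.5 g (1 mol) of NaCl contains 58.5 * 96.5 / 3.5 g of water,
# of which 18 g is consumed by the reaction.
water_in = M['NaCl'] * 96.5 / 3.5        # ~1612.93 g
fresh_water = water_in - M['H2O']        # ~1594.93 g recovered as fresh water

# Scaled per kilogram of NH3 (cf. the ~92 kg fresh-water figure in the abstract)
co2_per_kg_nh3 = M['CO2'] / M['NH3']       # ~2.59 kg of CO2
water_per_kg_nh3 = fresh_water / M['NH3']  # ~94 kg of fresh water
```

On this 100% conversion basis the yield comes out at roughly 94 kg of fresh water per kilogram of ammonia, of the same order as the abstract's figure.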

Ibrahim and Jibril (2005) used ammonium hydroxide solution (28% by weight) and highly pure industrial-grade carbon dioxide (99.5%) to produce desalinated water. However, the use of these industrial-grade chemicals makes the process highly chemical in nature. Recently, EnPro AS, with Dr. Omar Chaalal, has demonstrated the desalination process as a natural gas cleaning method in which 98% of the CO2 can be removed from natural gas (EnPro). This is very encouraging, as it absorbs only the CO2, leaving the methane free. This process, however, is a purification process for natural gas and cannot be called a zero-waste process, as it also involves industrial-grade chemicals and inorganic sources of raw materials.
The target of this study is to choose a process that is pro-nature and operated in a zero-waste mode. If the raw materials are obtained from natural activities and are used in an efficient manner, the process can be operated in the zero-waste mode. This is a part of Figure 1, especially where desalination has been introduced. This research suggests the use of sewage and the leachate of the anaerobic digester instead of chemically obtained ammonia; these liquids contain a significant amount of ammonia. Biogas has been chosen as the source of CO2 instead of natural gas or other sources from non-natural processes. The supply of CO2 can be increased if the exhaust of the methane burner is utilized; the exhaust generally contains CO2 along with other trace gases and elements. The choice of raw materials from natural processes makes the whole process inherently sustainable. Figure 6 shows the block diagram of a desalination plant on the zero-waste theme. This is a process of biogas purification as well as burner exhaust utilization in a loop system, making the process operable with zero waste. Table 3 gives an overview of the process, sources and uses.

Figure 6. Block diagram of desalination plant.



Table 3. Overview of the desalination plant

Reactants and products        % of total input/output   Sources                          Uses
INPUT
Water                         93                        Sea water
Salt (NaCl)                   3.37                      Sea water
Ammonia (NH3)                 1                         Sewage, leachate of biodigester
CO2                           2.63                      Biogas, exhaust gas
OUTPUT
Sodium bicarbonate (NaHCO3)   5                                                          Food, textiles, glass, paper, medicines
Ammonium chloride (NH4Cl)     3.2                                                        Medicines and valuable chemical productions etc.
Fresh water                   91.8                                                       Freshwater for agriculture, industrial use and fish farming

SOLAR AQUATIC PROCESS TO PURIFY DESALINATED/WASTE WATER

The desalinated water obtained from the desalination plant can be purified, along with other waste water, using a solar aquatic system. This paper suggests the solar aquatic waste water treatment process after reviewing the successful operation of a solar treatment facility in the Annapolis Valley, Canada, known as the Bear River solar aquatic facility, which produces mineralized drinkable water from 100% waste material without using any chemicals. The solar aquatic system uses sunlight, bacteria, green plants, and animals to restore water to pure conditions. Unlike mechanical treatment processes, these systems are complex, dynamic, self-organizing, and resilient, so they can adapt to changing effluent quality better than mechanical/chemical systems. It is known that a natural process has the best adaptability with any system, leaving no vulnerable long-term effect. Therefore, producing drinkable water by a solar aquatic system is very attractive.
Solar Aquatic Systems replicate and optimize natural wetland processes to treat wastewater. Each living thing is linked to many others in the chain of nature, and thus zero-waste living is operable. Solar UV degrades dissolved organic carbon photolytically so that it can readily be taken up by bacterioplankton (Hader, 2000). In a solar aquatic system, a number of solar tanks are constructed in such a way that each solar tank serves as a mini-ecosystem with preferred organisms, plants and many other aquatic habitats to break down waste and complex molecules. Each tank participates in treating wastewater by destroying pathogens, reducing biological oxygen demand (BOD), filtering out particles, using up nitrogen and phosphorus, and stabilizing or disposing of toxins. All of this happens faster in constructed ecosystems than in conventional treatment systems. Moreover, constructed ecosystems are also self-organizing and adaptive, so they can handle inconsistency in the waste water. This process eliminates the sludge production seen in chemical treatment plants.

Process Description

Waste water is collected from the sewage and desalination plant in large fiberglass tanks for initial anaerobic digestion. Then it is passed through a series of tanks, each of which is vigorously aerated and copiously supplied with bacteria, green plants from algae to trees, snails, shrimp, insects, and fish. Each one of the tank-series is capable of handling 75% of the maximum daily flow (Spencer, 1990). Only the first tank is a closed aerobic reactor, which scrubs odor and reduces the level of organic waste matter; the others are open aerobic reactors. Here, aerobic bacteria begin to dominate and convert ammonia to nitrates. Water then flows into the last tank, from which the water is pumped out of the top to the next step, and any remaining sediment is pumped from the bottom back to the anaerobic tanks to begin the cycle again and be eliminated.
After passing through all the tanks, the bulk water is transported to a big solar tank with the same ecosystem. Part of this water can be used for irrigation and flush reuse. However, to make the rest of the water potable, the process then continues in a constructed marshland filled with coarse sand and gravel and planted with typical wetland species, including alligator flag, arrowhead, pickerel weed, blue flag iris, bulrush, cattail, willow and swamp lily, which remove the last vestiges of nitrogen through their root systems and convert them to harmless nitrogen gas (Spencer, 1990). These plants have the ability to transfer oxygen to their roots and support the microbes surrounding their roots. Finally, the water is passed to a rotary drum filter to separate solids. The water is then treated with solar UV for disinfection and aerated to obtain a higher-quality, chemical-free, clean effluent.

Figure 7. Block diagram of a solar aquatic waste water treatment plant.



The number of solar tanks varies with the wastewater volume to be handled and the
concentration of the aquatic habitat. Figure 7 shows the block diagram of a typical solar aquatic
wastewater plant. The aquatic habitat serves several purposes, some of which are discussed below
(Todd, 1988):
Bacteria phase: The bacteria phase involves bioaugmentation of the waste stream
near the input with cultures of different bacterial types. Todd (1988) identified Bacillus subtilis
and Pseudomonas spp., which reduce BOD to levels that allow nitrifying bacteria to
function efficiently and eliminate toxic ammonia from the waste stream. Microbes are a vital
component of biological treatment systems. Not only do they degrade organic matter in
the water (both dissolved and particulate), they also convert carbon and other nutrients from
an organic to an inorganic state. As plants cannot use elements in an organic state, this
conversion is necessary for plant production.
Plant phase: Plant palettes for constructed ecosystems ideally include facultative plants
that do not mind having their roots wet or dry. A variety of higher plants are used in the
system, such as willows, dogwoods, iris, eucalyptus, and umbrella plants. They are excellent
water purifiers, and their roots act as a substrate for beneficial micro-organisms, including thick
masses of snails. Plants provide oxygen to the upper water column (via photosynthesis and
translocation), enabling the growth and productivity of microorganisms. The plants are
removed when they reach a marketable size. Aquatic plants are excellent at taking up toxic
substances, including heavy metals, and at polishing wastewater. Bulrush, in particular, has been
found to absorb remaining organic and inorganic materials and to cause colloidal substances to
flocculate and settle out (Todd, 1988).
Algal phase: The algal phase is unique to the solar aquatic facility. The activities of algae are
particularly helpful during cloudy periods; even in cloudy winter weather they can purify
efficiently. The green algal phase is seeded with Scenedesmus spp., Chlorella spp.,
Micractinium spp., Dictyospaerium spp., Ankistrodesmus spp., Golebkinica spp.,
Sphaerocystis spp., and other rare species (Todd, 1988). These algae rapidly take up
nutrients, including ammonia, nitrates, and phosphorus. They consume carbon gases and
release oxygen during daylight. Within the marsh the algae are consumed by
grazing snails, which in turn are fed to trout after harvest.
Other inhabitants: Fish (koi, goldfish, trout), snails, and tiny freshwater shrimp are the other
living inhabitants of the solar aquatic system. They consume algae and phytoplankton and keep
their growth in check within the system.
The cumulative, chained effect of all the living inhabitants purifies the water. Some plants
are even capable of removing toxic heavy metals, leaving the water safely drinkable.
A desalination process, when combined with a biogas production facility, increases the fuel
value of the biogas and reduces air pollution by absorbing greenhouse gases from the exhaust.
The process also uses the ammonia in the liquid leachate of the anaerobic digester, thereby
reducing the treatment burden of that liquid waste. Even when sewage is used as the source of
ammonia, the desalination process reduces the ammonia load of the sewage and eases the next
step of water purification. The integration of the solar aquatic process with the desalination
process makes it an ideal source of fresh water supply. This is a process of converting waste
into value-added materials.
118 M. M. Khan, D. Prior, and M. R. Islam

UTILIZATION OF BIOGAS IN FUEL CELL


Another possible application of biogas is in a fuel cell. Fuel cells are highly efficient
electro-chemical energy conversion devices that convert chemical energy directly into
electricity (Mills et al., 2005). Hydrogen gas is the best fuel for these cells, especially the PEM
(Polymer Electrolyte Membrane) cell, but the generation of pure hydrogen gas is very costly.
Natural gas has been used instead, but the presence of CO in natural gas limits its use, as the
platinum anode is easily contaminated by CO. Thus, once any CO is removed, biogas can serve
as a fuel for fuel cells. Several researchers have emphasized the use of biogas for fuel cells
(Lemons, 1990; Zhang et al., 2004). A simple hydrogen fuel cell is shown in Figure 8; it uses a
costly, polymer-based electrolyte patented by DuPont. Replacing the polymer-based electrolyte
with a biomembrane would be an environmentally attractive option. Recently, Yamada and
Honma (2006) reported a proton-conductive biomembrane that shows large anhydrous proton
conductivity from room temperature to 160ºC. Even though the biomembrane was composed
entirely of biomolecular materials (uracil molecules and chitin phosphate), it produced a
current under non-humidified H2/O2 conditions at 160ºC when used in a fuel cell. A
biomembrane breakthrough in fuel cells is realistic: the electric eel (Electrophorus electricus),
found in tropical South America, can produce powerful electric discharges of 5 to 650 volts
(Aquarium of Niagara, 2006), and it can be speculated that the eel's biomembranes play a vital
role in producing electric charge in much the same way as a fuel cell. A number of experiments
have been launched to replace costly and toxic materials with environmentally acceptable,
low-cost ones.

Figure 8. Fuel cell (Redrawn from Mills et al., 2005).

Both hydrogen and methane are gaseous fuels and are difficult to handle. To overcome this
difficulty, liquid fuels have been promoted. Owing to their greater volumetric energy density
and portability, liquid alcohols have been chosen as alternative fuels for fuel cells (Jiang et al.,
2004; Xie et al., 2004). Both methanol and ethanol are now successfully used as fuels for
fuel cells (Andreadis et al., 2006; Rousseau et al., 2006; Zhou et al., 2005; Xie et al., 2004).
Moreover, recent studies reveal that the direct alcohol fuel cell can use the same membrane as
the PEM fuel cell, although the electrodes need some modification (Jiang et al., 2004; Xie et al.,
2004). In the direct alcohol fuel cell (DAFC) the fuel cell reactions are as follows (Xie et al.,
2004; Zhou et al., 2005):

Direct Methanol Fuel Cell (DMFC)

Anode reaction:
CH3OH + H2O → CO2 + 6H+ + 6e-

Cathode reaction:
6H+ + 6e- + 3/2 O2 → 3H2O

Overall reaction:
CH3OH + H2O + 3/2 O2 → CO2 + 3H2O

Direct ethanol fuel cell (DEFC)

Anode reaction:
CH3CH2OH + 3H2O → 2CO2 + 12H+ + 12e-

Cathode reaction:
12H++ 12e- +3O2 → 6H2O

Overall reaction:
CH3CH2OH + 3O2 → 2CO2 + 3H2O
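The stoichiometry and electron count of these half-reactions can be verified mechanically. The sketch below (species table and coefficients written out by hand) checks that atoms and charge balance for the methanol cell, and computes the charge released per mole of each fuel via the Faraday constant:

```python
F = 96485.0  # Faraday constant, C per mole of electrons

# Atom counts (C, H, O) and electric charge q for each species.
SPECIES = {
    "CH3OH": {"C": 1, "H": 4, "O": 1, "q": 0},
    "H2O":   {"C": 0, "H": 2, "O": 1, "q": 0},
    "CO2":   {"C": 1, "H": 0, "O": 2, "q": 0},
    "O2":    {"C": 0, "H": 0, "O": 2, "q": 0},
    "H+":    {"C": 0, "H": 1, "O": 0, "q": 1},
    "e-":    {"C": 0, "H": 0, "O": 0, "q": -1},
}

def totals(side):
    """Sum atoms and charge over one side of a reaction,
    given as a list of (coefficient, species) pairs."""
    t = {"C": 0.0, "H": 0.0, "O": 0.0, "q": 0.0}
    for coef, sp in side:
        for key in t:
            t[key] += coef * SPECIES[sp][key]
    return t

# DMFC anode: CH3OH + H2O -> CO2 + 6H+ + 6e-
assert totals([(1, "CH3OH"), (1, "H2O")]) == \
       totals([(1, "CO2"), (6, "H+"), (6, "e-")])
# DMFC cathode: 6H+ + 6e- + 3/2 O2 -> 3H2O
assert totals([(6, "H+"), (6, "e-"), (1.5, "O2")]) == totals([(3, "H2O")])

# Charge per mole of fuel: 6 e- for methanol, 12 e- for ethanol.
print(6 * F, 12 * F)  # 578910.0 1157820.0
```

The twelve electrons per mole of ethanol, versus six for methanol, are what underlie its higher energy density noted below.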

From these chemical reactions it can be seen that both alcohols produce water and CO2 along
with energy. CO2 is not harmful to the environment when it is released by trees in their
respiration process; by the same token, this CO2 would not be harmful to the environment if the
alcohol is obtained from a biological process. Ethanol is safer and has a higher
energy density than methanol. However, complete oxidation of ethanol has so far proven
difficult with current technology (Jiang et al., 2004). Most current technology relies on
industrial-grade alcohol. However, fuel alcohol can be obtained from so-called
lignocellulosic biomass, which includes agricultural residues, forestry wastes, municipal solid
waste, agroindustrial wastes, and food processing and other industrial wastes (Murphy and
McCarthy, 2005; Cardona and Sanchez, 2006). The use of waste materials, especially
biologically produced waste, increases the versatility of zero-waste living. Cardona
and Sanchez (2006) presented a flowchart for producing ethanol from biological waste
materials using microbial activities (Figure 9).
According to the information provided by Cardona and Sanchez (2006), about 10% of the
biomass is converted to ethanol, leaving significant amounts of solid and liquid waste. The
wastewater is very rich in volatile organic matter and can be sent to the anaerobic digester to
produce biogas. The solid waste can be used as solid fuel or for land filling, as it has mineral
value.
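The roughly 10% conversion figure implies a simple mass balance, sketched below with illustrative numbers; the heating value and the treatment of the remainder as a single residue stream are assumptions, not figures from Cardona and Sanchez (2006).

```python
LHV_ETHANOL = 26.8  # approximate lower heating value of ethanol, MJ/kg

def ethanol_balance(biomass_kg, yield_frac=0.10):
    """Split a biomass feed into ethanol and residue (solid waste plus
    organics-rich wastewater) at a given mass-yield fraction."""
    ethanol_kg = biomass_kg * yield_frac
    residue_kg = biomass_kg - ethanol_kg
    return ethanol_kg, residue_kg, ethanol_kg * LHV_ETHANOL

# Hypothetical 1-tonne feed of lignocellulosic waste.
etoh, residue, energy_mj = ethanol_balance(1000.0)
print(etoh, residue, energy_mj)  # 100.0 900.0 2680.0
```

The point of the sketch is that some 90% of the feed leaves as residue, which is why routing the organics-rich wastewater to the digester and the solids to fuel or landfill matters for the zero-waste scheme.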

Figure 9. Block diagram for ethanol production from lignocellulosic biomass; main stream components:
Cellulose (C), Hemicellulose (H), Lignin (L), Glucose (G), Pentose (P), Inhibitor (I) and Ethanol
(EtOH). SSF: Simultaneous saccharification and fermentation; SSCF: Simultaneous saccharification and
cofermentation (Modified from Cardona and Sanchez, 2006).

DIRECT USE OF SOLAR ENERGY


Energy is the driving force of today's technology. Most industrial and household energy,
however, is used in the form of electrical energy. Figures 10 and 11 show the pattern of
electricity consumption in cold countries and hot countries, respectively.
A significant portion of the annual energy bill for any apartment building goes to heating,
cooling, and domestic hot water; these costs can exceed 60% of the total annual utility costs.
Part of this cost can be reduced by utilizing solar energy.
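A back-of-envelope estimate of the recoverable share of the bill can be sketched as follows; the 60% thermal share comes from the text, while the bill amount and the solar fraction are hypothetical.

```python
def solar_savings(annual_bill, thermal_share=0.60, solar_fraction=0.5):
    """Annual utility-cost savings when a solar system supplies
    `solar_fraction` of the thermal (heating/cooling/hot water) load."""
    return annual_bill * thermal_share * solar_fraction

# Hypothetical $10,000 annual bill, half the thermal load met by solar.
print(solar_savings(10000.0))  # 3000.0
```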

Figure 10. Household electricity usage in cold countries (Redrawn from Nebraska Public Power
District).

Figure 11. Household electricity usage in hot countries (Modified from Tso and Yau, 2003).

Solar energy is essentially unlimited and its use is ecologically benign. The solar constant,
1367.7 W/m², is defined as the quantity of solar energy at normal incidence outside the
atmosphere (extraterrestrial) at the mean sun-earth distance (Mendoza, 2005). In space, solar
radiation is practically constant; on earth it varies with the time of day and year as well as
with latitude and weather. The maximum value on earth is between 0.8 and 1.0 kW/m² (The
solarserver, 2006). However, indirect solar energy conversion does not achieve considerable
efficiency; a system designed around the direct use of solar energy has the maximum energy
conversion efficiency. The following are some examples of the direct use of solar energy:

Space Heating

Solar space heating is a proven, effective method of reducing conventional energy
requirements for domestic heating. There are two basic types of home heating system: active
and passive. Active systems are divided into liquid and air systems. An active solar heating
system generally consists of the following sub-systems: (1) a solar thermal collector area, (2)
a water storage tank, (3) a secondary water circuit, (4) a domestic hot water (DHW)
preparation system and (5) an air ventilation/heating system (Badescu and Staicovici, 2006).
Such systems use pumps and pipes (or fans and ducts) to carry heat from the collectors to
storage and from storage to the rooms of the house. Passive systems use the building itself to
collect and store solar heat. The most elementary passive heating concept is letting sunshine in
through glass and capturing it to warm the inside air and the structure; this is the same
greenhouse effect that turns a car into an oven on a warm day. Passive systems have been found
more efficient than active systems (Badescu and Staicovici, 2006). However, the best option is
to use both together, known as active solar heating for a passive house.
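The passive "greenhouse" gain through glazing can be put in rough numbers using the standard relation Q = A · G · SHGC; the area, irradiance, and solar heat gain coefficient (SHGC) below are illustrative values, not from the source.

```python
def window_solar_gain(area_m2, irradiance_w_m2, shgc):
    """Instantaneous passive heating gain through glazing:
    Q = A * G * SHGC, where SHGC is the solar heat gain coefficient."""
    return area_m2 * irradiance_w_m2 * shgc

# Hypothetical: 10 m^2 of south-facing glass, 600 W/m^2 incident,
# SHGC of 0.6 -> 3.6 kW of free heat while the sun shines.
print(window_solar_gain(10.0, 600.0, 0.6))  # 3600.0
```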

Water Heating

Solar water heaters capture the sun's thermal energy. An active solar heating system relies
on collectors to perform this task. Residential buildings typically require hot water
temperatures below 95°C. The flat-plate collector is the most commonly used residential
collector (Jansen, 1985). It is an insulated, weatherproofed box containing a dark
absorber plate covered by one or more transparent or translucent covers. A collector is
typically 2 ft to 4 ft wide, 5 ft to 12 ft long, and 4 inches deep. The absorber plate gathers
the sun's heat energy, which warms the water flowing through the attached tubes.
Once heated, the liquid is pumped through the tubes to the heat exchanger, located in front of
the storage tank. The heated liquid warms the cooler water in the storage tank, and this warm
water is then transferred into the backup storage tank. When the hot water tank falls below a
preset temperature, the backup energy source maintains the preferred water temperature.
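The useful output of such a flat-plate collector can be estimated with a simplified Hottel-Whillier-style energy balance; the optical and loss coefficients below are typical textbook values rather than figures from this chapter, and the collector size is taken from the dimension range quoted above.

```python
def collector_gain_w(area_m2, irradiance, tau_alpha, u_loss, t_plate, t_ambient):
    """Useful heat gain of a flat-plate collector (simplified form):
    Q = A * (G * tau_alpha - U_L * (T_plate - T_ambient))."""
    return area_m2 * (irradiance * tau_alpha - u_loss * (t_plate - t_ambient))

# A 3 ft x 8 ft collector is about 2.23 m^2. Assumed: 800 W/m^2 sun,
# optical efficiency 0.75, loss coefficient 5 W/m^2.K, plate at 60 C,
# ambient at 20 C.
print(round(collector_gain_w(2.23, 800.0, 0.75, 5.0, 60.0, 20.0)))  # 892
```

The same expression shows why output drops in cold weather: the loss term grows with the plate-to-ambient temperature difference.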
A passive solar heating system generally uses storage tanks mounted on a building's roof. A
batch heater consists of one or more water storage tanks placed inside an insulated box with a
glazed side facing the sun; the sun's heat energy warms the stored water. Batch heaters do not
work well in colder climates because they will freeze, so passive solar water heating is less
efficient there. Even so, this water heating system substantially reduces the cost of electric
water heating.

Refrigeration and Air Cooling

Recently, Khan et al. (2006) showed that, when calculated from the base source of energy,
absorption refrigeration has a higher coefficient of performance (COP) than the vapor
compression refrigeration system. Absorption cooling is already known as the heat-dependent
air conditioning system. However, the dual and triple pressure refrigeration cycles are not
solely heat-dependent; only the single pressure refrigeration cycle is a thermally driven cycle.
It uses three fluids: one acts as the refrigerant, the second as a pressure-equalizing fluid,
and the third as an absorbing fluid. Heat, rather than electricity, drives the system, allowing
absorption cooling systems to operate on natural gas or any other heat source. Recent studies
of the Einstein single pressure refrigeration system (US Patent 1,781,541) have generated
renewed interest, and it is being viewed as a viable alternative for economical refrigeration
(Alefeld, 1980; Dannen, 1997; Delano, 1998). In the Einstein cycle, butane acts as the
refrigerant, ammonia as an inert gas, and water as the absorbent. The following features make
this refrigeration system unique:

- Higher heating efficiency
- Inexpensive equipment
- No moving parts
- High reliability
- Portability

Khan et al. (2005) indicated that when this refrigeration system is coupled with a solar
system, it becomes the best choice among all types of refrigerators because of its
environmentally friendly and ecologically benign features. With some modifications, the
same unit can be utilized as a solar cooler (Khan et al., 2006b). Solar collectors are required
for this process: the fluid in the solar collector's receiver is used as the energy carrier,
transferring energy from the receiver to an area of interest. However, most solar collector
fluids are toxic and harmful to the environment. To get rid of this toxicity, Khan
et al. (2006b) used environmentally friendly vegetable oil as the solar thermal oil and found it
successful as an energy carrier. Figure 12 shows a parabolic trough that uses vegetable oil to
absorb solar energy and transfer it to a preferred place.

Figure 12. Constructed parabolic trough.

The parabolic solar collector can provide the necessary energy to run an absorption
system. Household kitchen-based biogas production can enhance the operability of this
refrigeration in the absence of sun: biogas can be burnt to provide the heat needed to operate
the cooling system, especially at night. The required ammonia can be collected from the
effluent of the anaerobic digester. The whole process saves fossil fuel and consumes waste
materials.
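The "base source of energy" comparison invoked above can be illustrated with hedged numbers: a vapor-compression chiller's COP must be discounted by the efficiency of generating and delivering its electricity, while a heat-driven absorption unit takes its input directly, and with solar heat its primary-fuel input is zero. All values below are illustrative, not from the source.

```python
def primary_cop(device_cop, chain_efficiency):
    """Coefficient of performance referred back to the base (primary)
    energy source: cooling delivered per unit of primary energy."""
    return device_cop * chain_efficiency

# Vapor-compression COP 3.0 fed by grid electricity generated and
# delivered at ~33% overall efficiency: about 1 unit of cooling per
# unit of primary fuel. A solar-fired absorption unit consumes no
# primary fuel at all for its driving heat.
print(round(primary_cop(3.0, 0.33), 2))  # 0.99
```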

Solar Stirling Engine

Recently, the utilization of solar energy has been extended to the Stirling engine. The Stirling
engine is a simple type of external heat engine that, like the Carnot engine, works between two
heat reservoirs (hot and cold) and uses a compressible fluid as the working fluid
(Kongtragool and Wongwises, 2005). This engine can be operated by solar energy or by
heat from any combustible material such as field waste, rice husk, or biomass
methane. Because the working fluid is sealed in a closed system, there are no problems with
contamination or explosion (Abdullah et al., 2005). The basic background and
chronological development of the Stirling engine can be found in the literature (Hutchinson,
2005; Kerdchang et al., 2005; Kongtragool and Wongwises, 2005; Mancini et al., 2003;
Valdes, 2004). Due to environmental concerns and rising energy costs, the Stirling engine has
received much attention in the last few decades. The development of the low temperature
differential (LTD) Stirling engine facilitates the use of Stirling technology across a wide range
of temperature differences from various heat sources. LTD Stirling technology allows any kind
of solar collector receiver, such as a flat panel collector, parabolic trough, or dish collector, to
be used in any country, as temperature is not the prime issue (Kongtragool and Wongwises,
2005). Stirling technology can be used in any application where mechanical work is necessary.
This study suggests the use of solar Stirling technology as part of the zero-waste process.
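Because an LTD Stirling engine works between modest reservoir temperatures, its ceiling is set by the Carnot limit, which is worth computing for a sense of scale; the collector and ambient temperatures below are hypothetical.

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Carnot upper bound for any engine between two reservoirs,
    with temperatures given in degrees Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Flat-plate collector at 80 C against 25 C ambient: even the ideal
# limit is modest, so real LTD engines trade efficiency for the
# ability to run on almost any low-grade heat source.
print(round(carnot_efficiency(80.0, 25.0), 3))  # 0.156
```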

SUSTAINABILITY ANALYSIS
The concept of 'zero-waste living' has been derived from the undisturbed activities of
nature. Nature operates in a zero-waste mode, generating tangible, intangible, and long-term
benefits for the whole world. Natural activities increase orderliness on a path that
converges at infinity after providing maximum benefit over time. However, a product that
resembles a natural product is not guaranteed to be sustainable unless the natural pathway
is followed (Khan, 2006). The term 'sustainable' is a growing concept in today's
technology development. Simply put, the word 'sustainable' implies benefit, from the
immediate to the long term, for all living beings. That is why any natural product
or process is said to be inherently sustainable. Immediate benefits are termed tangibles and
long-term benefits intangibles. A focus on tangible benefits alone can mislead technological
development unless intangible benefits are also considered.
Even after extensive development of different technologies, decade after decade, the world is
becoming a container of toxic materials and is continuously losing its healthy atmosphere.
That is why it is necessary to test the sustainability of any process and of the pathway of that
process. To date, a number of definitions of sustainability can be found in the literature;
common to all of them is concern for the well-being of future generations (Krysiak and
Krysiak, 2006). However, most sustainability assessment processes are incomplete, and
because of the lack of an appropriate assessment process there is a tendency to claim a product
or technology sustainable without proper analysis. Recently, Khan and Islam (2007) developed
a sustainability test model distinct from others in that it emphasizes the time factor and shows
the direction of both new and existing technologies. According to this model, a process is
sustainable if and only if it travels a path that is beneficial for an infinite span of time;
otherwise the process must fall in a direction that is not beneficial in the long run. Pro-nature
technology is the long-term solution, while anti-nature technology is the result of ∆t
approaching 0. The most commonly used theme is to select technologies that are good for
t = 'right now', or ∆t = 0. In reality, such models are non-existent and thus aphenomenal; they
cannot be placed on the graph (Figure 13). However, 'good' technologies can be developed by
following the principles of nature. In nature, all functions and techniques are inherently
sustainable, efficient, and functional for an unlimited time period. In other words, as far as
natural processes are concerned, time tends to infinity; this can be expressed as t, or for that
matter ∆t, → ∞ (Khan and Islam, 2007).
It is found from the figure that perception does not influence the direction or the intensity
of the process. Only the base is shifted up or down; the direction of the process becomes
explicit as the process advances with time.
The model has been developed by observing nature and by analyzing human
intervention over past years. Today's regulatory organizations are banning many chemicals
and synthetic materials from production and regular use. However, the goal of banning remains
controversial (Toman and Palmer, 1997). Products or processes whose harmful effects are
tangible are selected for bans, while products or processes with intangible, long-term adverse
effects are allowed to continue. Khan et al. (2006a) have shown that intangible, long-term harm
is more detrimental than tangible, short-term harm. Any process or product should therefore be
analyzed carefully to understand both its tangible and intangible effects.
In this study, the proposed 'zero-waste model' has been tested for sustainability
according to the above definition, and a detailed analysis is presented here. A technology
that exists in nature must be sustainable, because natural processes have already proved
beneficial over the long term. The bio-digestion process is an adequate example of a natural
technology, and that is why the technology is sustainable. The burning of marsh gas is seen in
nature, so the burning of biogas would not be anti-nature. It is assumed that the carbon
dioxide produced from petroleum fuel (the old food) and the carbon dioxide from the
renewable biomaterials (new food) are not the same; the same chemical with a different
isotopic composition must not be treated as identical. The exact difference between a chemical
that has existed for millions of years and the same chemical that has existed for only a few
years has not been clearly revealed. Following nature, it can be said that the waste/exhaust
from biomaterial is not harmful.
Khan et al. (2006a) introduced a mathematical model for this concept and explained the
role of intention in the sustainable development process. Intention was found to be the key tool
for any process to be sustainable. In this paper, that model is developed further by analyzing
the term 'perception', which Khan et al. (2006a) found important at the beginning of any
process. Perception varies from person to person; it is very subjective, and there is no way to
prove whether a perception is true or false. Perception is entirely one's personal opinion,
developed from one's experience without appropriate knowledge. That is why perception
cannot be used as the base of the model. However, if perception were included, the model
would look as shown in Figure 14.
The difference between natural and synthetic materials shows that natural materials are
highly non-linear and complex but display unlimited adaptability (Islam, 2005). In nature there
is no need to test natural products; only synthetic products should be analyzed carefully for
sustainability. So, instead of pure materials, leachate ammonia and carbon dioxide (exhaust)
are the best choices for making the desalination process sustainable.

Figure 13. Direction of sustainability/green technology (Redrawn from Khan et al., 2006).

Figure 14. Sustainability model with perception (Modified from Khan et al., 2006).

The problem with fuel cells is their use of highly pure fossil fuel, a synthetic and toxic
electrolyte, and ultra-pure metallic electrodes. However, the use of biogas, biomethanol, or
bioethanol in the fuel cell, the replacement of the synthetic membrane by a biomembrane, and
the use of non-toxic electrodes can direct fuel cell technology toward sustainability.
Ammonia, butane, and water exist in nature, so the use of these materials makes the
refrigeration process sustainable. The ammonia, however, must be collected from a biological
process such as a biodigester or sewage.
Solar energy is non-toxic and free, and its utilization must be sustainable. The
replacement of toxic thermal oil by vegetable oil makes the process sustainable, and the use of
vegetable oil keeps the whole apparatus corrosion-free (Al-Darby, 2004; Al-Darby et al., 2005).
Moreover, any use of solar energy in turn reduces dependency on fossil fuels, saving the
world from further air pollution. So this, too, is a sustainable technology.

CONCLUSION
It is well known that prevention is better than cure, so banning should not be the policy;
sometimes it is too late to take any action, especially for those products or processes that
have long-term, intangible effects. It is important to apply a sustainability assessment tool to
prevent detrimental technology development and to encourage sustainable technology. It is
the responsibility of all living humans to save future generations and to make this world
livable in the long term. Zero-waste living with efficient, maximum utilization of solar
heating is found to be attractive. This scheme is socially responsible, environmentally
attractive, and economical, and estimates show that the process is indeed viable. It is a
composite, integrated process: the dependence of each process on the others maximizes the
utilization of waste, leaving zero waste. Besides, the use of free sunlight reduces dependency
on electricity or fuel. The whole process is sustainable.

ACKNOWLEDGEMENT
The authors would like to acknowledge the contribution of the Atlantic Innovation Fund
(AIF).

REFERENCES
Abdullah, S., Yousif, B. F. and Sopian, K., 2005. Design Consideration of Low Temperature
Differential Double-Acting Stirling Engine for Solar Application, Renewable Energy,
Vol. 30 (12):1923-1941
Abu-Arabi, M. and Zurigat, Y., 2005. Year-Round Comparative Study of Three Types of
Solar Desalination Units, Desalination, Vol.172 (2):137-143.
Aquarium of Niagara, http://www.aquariumofniagara.org/aquarium/electric_eel.htm,
[Accessed: August 25, 2006].

Alabdula'aly, A.I., 1997. Fluoride Content in Drinking Water Supplies of Riyadh, Saudi
Arabia, Environmental Monitoring and Assessment, Vol. 48(3): 261-272.
Alefeld, G., 1980. Einstein as Inventor, Physics Today, May, 9-13.
Al-Masri, M.R., 2001. Changes in Biogas Production due to Different Ratios of Some Animal
and Agricultural Wastes, Bioresource Technology, Vol. 77(1): 97-100.
Al-Mutaz, Ibrahim S., 1996. Comparative Study of RO and MSF Desalination Plants,
Desalination, Vol. 106(1-3): 99-106.
Andreadis, G., Song, S. and Tsiakaras, P., 2006. Direct Ethanol Fuel Cell Anode Simulation
Model, Journal of Power Sources, Vol. 157(2): 657-665.
Badescu, V. and Staicovici, M. D., 2006. Renewable Energy for Passive House Heating:
Model of the Active Solar Heating System, Energy and Buildings, Vol. 38(2):129-141.
Bear River Solar Aquatic, http://collections.ic.gc.ca/western/bearriver.html, [Accessed:
August 25, 2006]
Bouallagui, H., Haouari, O., Touhami, Y., Ben Cheikh, R., Marouani, L. and Hamdi, M.,
2004. Effect of Temperature on the Performance of an Anaerobic Tubular Reactor
Treating Fruit and Vegetable Waste, Process Biochemistry, Vol. 39(12): 2143-2148.
Cardona, A.C.A. and Sanchez, T.O.J., 2006. Energy Consumption Analysis of Integrated
Flowsheets for Production of Fuel Ethanol from Lignocellulosic Biomass, Energy, Vol.
31(13): 2111-2123.
Dannen, G., 1997. The Einstein-Szilard Refrigerators, Scientific American, 90-95.
Delano, A., 1998. Design Analysis of the Einstein Refrigeration Cycle, Ph.D. Thesis, Georgia
Institute of Technology, Atlanta, Georgia.
Einstein, A. and Szilard, L., 1930. Refrigeration (Appl: 16 Dec. 1927; Priority: Germany, 16
Dec. 1926), Pat. No. 1,781,541 (United States), 11 Nov.
EnPro, http://www.enprotechnology.com/en/news.html, [Accessed: August 25, 2006]
EPA. US, www.epa.gov/NE/students/pdfs/ww_intro.pdf, [Accessed: August 25, 2006]
Ghanem, I.I.I., Guowei, G. and Jinfu, Z., 2001. Leachate Production and Disposal of Kitchen
Food Solid Waste by Dry Fermentation for Biogas Generation, Renewable Energy, Vol.
23 (3-4).
Gupta, A., Parikh, V. and Compaan, A. D., 2006. High Efficiency Ultra-Thin Sputtered Cdte
Solar Cells, Solar Energy Materials and Solar Cells, Vol. 90 (15): 2263-2271.
Hader, D. P., 2000. Effects of Solar UV-Radiation on Aquatic Ecosystems,
Advances in Space Research, Vol. 26 (12):2029-2040.
Hutchinson, H., 2005. Run Silent, Run Long, Mechanical Engineering, vol. 127 (SUPPL.):5-
7
Ince, B.K., Ince, O., Anderson, G.K. and Arayici, S., 2001. Assessment of Biogas use as an
Energy Source from Anaerobic Digestion of Brewery Wastewater, Water, Air, and Soil
Pollution, Vol.126 (3-4): 239-251.
Ibrahim, A.A. and Jibril, B.Y., 2005. Chemical Treatment of Sabkha in Saudi Arabia
Desalination, Vol.174 (2): 205-210.
Islam, M.R., 2005. Knowledge-Based Technologies for the Information Age, International
Chemical Engineering Conference, JICEC05-Keynote speech, Amman, September 12-
14.
Jansen, T.J., 1985. Solar Engineering Technology, Prentice-Hall, Inc., New Jersey, USA.

Jiang, L., Sun, G., Zhou, Z., Zhou, W. and Xin, Q., 2004. Preparation and Characterization of
Ptsn/C Anode Electrocatalysts for Direct Ethanol Fuel Cell, Catalysis Today, Vol. 93-
95: 665-670.
Kerdchang, P., MaungWin, M., Teekasap, S., Hirunlabh, J., Khedari, J. and Zeghmati, B.,
2005. Development of a New Solar Thermal Engine System for Circulating Water for
Aeration, Solar Energy, Vol. 78 (4 SPEC. ISS.): 518-527
Khan, M.I., 2006. Towards Sustainability in Offshore Oil and Gas Operations, Ph.D.
Dissertation, Faculty of Engineering, Dalhousie University, Canada, 440 p.
Khan M. I. and Islam M.R., 2007. Achieving True Sustainability in Technological
Development and Natural Resources Management, Nova Science Publishers, New York,
USA, 381 p.
Khan, M.M., Prior, D., & Islam, M.R., 2005. Thermodynamic Irreversibility Analysis of a
Single Pressure Refrigeration Cycle Operated by a Solar Trough Collector Field, 33rd
Annual General Conference of the Canadian Society for Civil Engineering, June 2-4,
Toronto.
Khan, M.M, Zatzman, G.M. and Islam, M.R, 2006a. The Formulation of A Coupled Mass
and Energy Balance, Journal of Nature Science and Sustainable Technology, Submitted
Khan, M.M., Prior, D. and Islam, M.R., 2006b. A Novel Sustainable Combined
Heating/Cooling/Refrigeration System, Journal of Nature Science and Sustainable
Technology, Vol. 1 (1).
Kongtragool, B. and Wongwises, S., 2005. Optimum Absorber Temperature of a Once-
Reflecting Full Conical Concentrator of a Low Temperature Differential Stirling Engine,
Renewable Energy, Vol. 30 (11):1671-1687
Krysiak, F. C. and Krysiak, D., 2006. Sustainability with Uncertain Future Preferences
Environmental and Resource Economics, Vol. 33(4): 511-531
Lemons, R.A., 1990. Fuel Cells for Transportation, Journal of Power Sources, Vol. 29 (1-2):
251-264.
Mancini, T., Heller, P., Butler, B., Osborn, B., Schiel, W., Goldberg, V., Buck, R., Diver, R.,
Andraka, C. and Moreno, J., 2003. Dish-Stirling Systems: an Overview of Development
and Status, Journal of Solar Energy Engineering, Transactions of the ASME, Vol. 125 (2)
:135-151.
Mandal, T. and Mandal, N.K., 1997. Comparative Study of Biogas Production from Different
Waste Materials, Energy Conversion and Management, Vol. 38 (7): 679-683.
Mendoza, B., 2005. Total Solar Irradiance and Climate, Advances in Space Research, Vol.
35: 882–890.
McCutcheon, J. R., McGinnis, R. L. and Elimelech, M., 2006. Desalination by Ammonia-
Carbon Dioxide Forward Osmosis: Influence of Draw and Feed Solution Concentrations
on Process Performance, Journal of Membrane Science, Vol. 278 (1-2):114-123.
McCutcheon, J. R., McGinnis, R. L. and Elimelech, M., 2005. A Novel Ammonia-Carbon
Dioxide Forward (Direct) Osmosis Desalination Process, Desalination, Vol.174 (1):1-
11.
Mills, A. R., Khan, M.M. and Islam, M. R., 2005. High Temperature Reactors for Hydrogen
Production”, Third International Conference on Energy Research & Development
(Icerd-3), Kuwait University, Kuwait, November 21-23.
Murphy, J.D. and McCarthy, K., 2005. Ethanol Production from Energy Crops and Wastes
for Use as a Transport Fuel in Ireland, Applied Energy, Vol. 82 (2): 148-166.
130 M. M. Khan, D. Prior, and M. R. Islam

Nebraska Public Power District, http://www.nppd.com, [Accessed: May 12, 2005]


Owens, J.M. and Chynoweth, D.P., 1993. Biochemical Methane Potential of Municipal Solid
Waste (MSW) Components, Water Science and Technology, Vol. 27 (2).
Rongved, P. I., 1997. US Patent No. 6,180,012, Sea Water Desalination Using CO2 Gas from
Combustion Exhaust.
Rubbia, C., 2006. Today the World of Tomorrow-the Energy Challenge, Energy Conversion
and Management, Vol. 47(17): 2695-2697.
Sadaka, S. S. and Engler, C. R., 2003. Effects of Initial Total Solids on Composting of Raw
Manure with Biogas Recovery, Compost Science and Utilization, Vol. 11(4): 361-369.
Salameh, S.G., 2003. Can Renewable and Unconventional Energy Sources Bridge the Global
Energy Gap in the 21st Century? Applied Energy, Vol. 75 (1-2): 33-42.
Sharma, V.K., Testa, C., Lastella, G., Cornacchia, G. and Comparato, M.P., 2000. Inclined-
plug-flow type reactor for anaerobic digestion of semi-solid waste, Applied Energy, Vol.
65(1): 173-185.
Spencer, R., 1990. Solar aquatic treatment of septage, BioCycle, Vol. 31(5): 66-70.
The solarserver, http://www.solarserver.de/lexikon/solarkonstante-e.html, [Accessed: August
25, 2006]
Todd, J., 1988. Design ecology solar aquatic wastewater treatment, BioCycle, Vol. (2): 38-40
Toman, M. and Palmer, K., 1997. How Should an Accumulative Toxic Substance be Banned?
Environmental & Resource Economics, Vol. 9 (1): 83-102.
Tso, G.K.F. and Yau, K.K.W., 2003. A Study of Domestic Energy Usage Patterns in Hong
Kong, Energy, Vol. 28 (15): 1671-1682.
Xie, C., Bostaph, J. and Pavio, J., 2004. Development of a 2W Direct Methanol Fuel Cell
Power Source, Journal of Power Sources, Vol. 136: 55–65.
Valdes, L.-C., 2004. Competitive Solar Heat Engines, Renewable Energy, Vol. 29 (11): 1825-
1842.
Yamada, M. and Honma, I., 2006. Biomembranes for Fuel Cell Electrolytes Employing
Anhydrous Proton Conducting Uracil Composites, Biosensors and Bioelectronics, Vol.
21: 2064–2069.
Zhang, Z. G., Xu, G., Chen, X., Honda, K. and Yoshida, T., 2004. Process Development of
Hydrogenous Gas Production for PEFC from Biogas, Fuel Processing Technology, Vol.
85 (8-10): 1213-1229.
Zhou, W.J., Song, S.Q., Li, W.Z., Zhou, Z.H., Sun, G.Q., Xin, Q., Douvartzides, S. and
Tsiakaras, P., 2005. Direct Ethanol Fuel Cells Based on PtSn Anodes: The Effect of Sn
Content on the Fuel Cell Performance, Journal of Power Sources, Vol. 140 (1): 50-58.
Zupancic, G.D. and Ros, M., 2003. Heat and energy requirements in thermophilic anaerobic
sludge digestion, Renewable Energy, Vol. 28 (14): 2255-2267.
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 131-156 © 2008 Nova Science Publishers, Inc.

Chapter 5

TEA-WASTES AS ADSORBENTS FOR THE REMOVAL
OF LEAD FROM INDUSTRIAL WASTE WATER

M. Y. Mehedi1* and H. Mann2†

1 University of Calgary, Alberta, Canada
2 Dalhousie University, Halifax, Nova Scotia, Canada

ABSTRACT
Numerous engineering practices produce aqueous solutions that are high in heavy
metal content. If not controlled through proper treatment, industrial waste water can
cause severe environmental consequences and irreversible damage to the environment
and its biota, primarily because of the severe toxicity of heavy metals, which disrupts
ecosystem integrity and hinders sustainable management of the environment. Industrial
waste water is one of the major sources of lead (Pb), which is considered a toxic element.
Adsorption is one of the most popular and effective technologies for treating water
contaminated by heavy metals; however, the most commonly used adsorbents themselves
have some level of toxicity and are often expensive. In this research, Turkish black
tea-waste was studied as an adsorbent to remove lead from contaminated water. For this
purpose, lead nitrate [Pb(NO3)2] solutions with a wide range of concentrations (10-
10,000 ppm) were prepared to simulate the contaminant concentrations in industrial
waste water. The initial analysis was carried out using 5 gm of fresh tea (as control) and
of wet and dry tea-waste. Wet tea-waste samples were also analyzed at 7-day intervals to
find out whether there was any microbial growth and whether it influenced the removal
of lead. The adsorptive capacity of the adsorbent, the effect of pH on adsorption, and the
effect of initial concentration were investigated. Results show that fresh tea and oven-
dried tea-waste samples effectively removed lead cations in the ranges of 57-92% and
69-94%, respectively. The wet tea-waste samples (immediately after collection) also
showed good performance, and their lead removal efficiency increased over time: about
97% of lead was removed by wet samples after 30 days at an initial concentration of
10,000 ppm. The microbial concentrations also increased over time. The samples showed
the presence

* Corresponding author, University of Calgary, Alberta, Canada, Email: Ibrah.Khan@ucalgary.ca
† Corresponding author, email: Henrietta.mann@dal.ca

of cocci, rod-shaped bacteria, fungi and yeasts. The tests also revealed that the adsorption
capacity of the wet samples can be as large as 387.5 mg/gm, with a favorable adsorption
intensity value (>1). All the wet samples and the dry sample showed good performance
and distinct capacity measurements in both the Langmuir and the Freundlich isotherms.
Variation of pH in the range 3.7-10.8 did not produce any significant increase or decrease
in adsorption when compared with the results obtained in an almost neutral environment
of pH 6.44.

Key Words: Tea-waste, Adsorbent, Lead, Industrial Waste Water

INTRODUCTION
Industrial waste water contains heavy metals, which can pose serious environmental
threats. Through water, the major toxic metals such as Pb, Cd, Hg, Zn, Cr and Ni find their
way to the aquatic environment (Ajmal et al., 1998). Several industrial processes release toxic
metals that ultimately act as hazardous substances and raise the toxicity level for fresh-water
and marine biota and for the environment as a whole. The dominant constituent of industrial
waste water is water, with varying amounts of organic constituents, heavy metals, and other
components. The concentration of the heavy metals in industrial waste water depends on the
source and the quality of the wastes. These metal ions in industrial waste water, when
present in sufficient quantity, can be harmful to aquatic life and human health. The separation
of most of the heavy metals is very difficult because of their solubility in water, which forms
aqueous solutions (Hussein et al., 2004). There are several technologies to treat heavy metals
in industrial water, including chemical precipitation, chemical oxidation or reduction,
coagulation/flotation, sedimentation, flotation, filtration, membrane processes, electrochemical
techniques, ion exchange, biological processes, and chemical reaction (CAPP, 2001; Khan and
Islam, 2006b). Each method has its merits and limitations in application. Today, the
adsorption process has attracted many scientists because of its effectiveness for the removal
of heavy metals from contaminated water, but the process has not been used extensively because of its
high cost. For that reason, the use of low-cost materials as sorbents for metal removal from
industrial waste water has been highlighted, and great effort has recently been devoted
to developing new adsorbents and improving existing ones.
Biogeochemical processes in the environment often rely on the origins, diversity,
distributions, and functions of microorganisms. The discovery of novel microorganisms from
natural environments is crucial for introducing alternative environmental tools to remediate
contaminated environments, prevent environmental degradation and enhance energy
production. One of the major obstacles to remediating contaminated environments is our lack
of knowledge in understanding the behavior of microorganisms in an environmental
context (Zhang et al., 2005). Microorganisms can play a significant role in removing heavy
metals (Chaalal et al., 2005; Mihova and Godjevargova, 2000). The bacteria are mainly rod-
shaped (bacilli) and, to a lesser extent, ball-shaped (cocci). Some bacterial cells are single while
others cluster together to form pairs, chains, squares or other groupings (Gazso, 2001). The
biosorption of heavy metal ions by microorganisms is an effective remediation measure
(Chaalal et al., 2005).

Lead is recognized and treated as a highly toxic element that causes irreversible damage
to humans and aquatic habitats (Ajmal et al., 1998). There has been increasing concern with
regard to the removal of lead from polluted water due to the release of industrial wastes from
different point and nonpoint sources. Most treatment technologies use synthetic
products as adsorbents, which create secondary toxicity that is sometimes worse than the
original and poses severe environmental consequences (Kim, 2003; Mustafiz et al.,
2003). Taking all these facts into account, in this paper Turkish black tea-waste has been
used as a natural adsorbent to observe the growth of microorganisms on its surface over time,
as well as any change in their quantity and quality. The influence of microbial diversity on
the removal of lead from contaminated solution was also investigated over time. Fresh Turkish tea
samples were analyzed as a control. The rationale embraced in the current paper is to develop
inexpensive lead adsorbents for the treatment of industrial waste water. The principal idea
behind this rationale is to find alternative natural means for remediating pollution
from industrial waste water for the sustainable management of the aquatic environment, with
the broad aim of achieving the principles of zero-waste living (Figure 1).

[Figure 1 diagram: heavy metals such as lead carry only negative (-VE) impacts — increased toxicity, harm to aquatic organisms, disruption of the food chain, biomagnification, reduced fecundity, mortality and depletion of natural resources, jeopardized natural cycles, reduction of amenities — and untreated tea-waste likewise (-VE: microbial pollution, nutrient enrichment, biodegradation, alteration of habitats, coastal erosion, sedimentation), whereas using tea-waste for treatment is environmentally appealing, economically feasible and socially responsible (+VE), a step toward zero-waste living as Δt → ∞.]
Figure 1. A step towards zero-waste living (Modified after Mustafiz, 2002) (+VE= Good; -VE=Bad)

CHEMICAL COMPOSITION OF BLACK TEA


Tea is a popular beverage across the world due to the presence of polyphenols and methyl
xanthenes; in addition, various compounds of tea aroma add to its appeal. Black tea also
contains minerals such as calcium, phosphorus, iron, sodium and potassium, and vitamins
such as A, B1, B2, niacin and C, besides biochemical quality constituents such as amino acids
and soluble sugars (Figure 2) (Chung and Wang, 1996; UPASI, 2003). The polyphenols
account for about 15-20% of black tea and comprise the catechin fractions epigallocatechin
(EGC), (+)-catechin (+C), epicatechin (EC), epigallocatechin gallate (EGCG) and
epicatechin gallate (ECG) (UPASI, 2003) (Figure 2).

[Figure 2 bar chart residue: constituents plotted in units of % (scale 0-35) — theaflavins, thearubigins, highly polymerized substances, caffeine, total polyphenols, amino acids, protein, lipid, carbohydrates, moisture, gallic acid, epigallocatechin, (+)-catechin, epicatechin, epigallocatechin gallate and epicatechin gallate.]
Figure 2. Major Constituents of Black Tea (Modified after Chung and Wang, 1996)

Most of these chemicals are produced during processing of the tea leaves. In the
manufacturing process, the monomeric flavan-3-ols undergo polyphenol oxidase-dependent
oxidative polymerization, leading to the formation of bisflavanols, theaflavins, thearubigins,
and other oligomers. Theaflavins contain benzotropolone rings with dihydroxy or trihydroxy
substitution systems, which give black tea its characteristic color and taste. The structure of
black tea is shown in Figure 3 (Chung and Wang, 1996).

Figure 3. Chemical Structure of Black Tea (Chung and Wang 1996)



Tea flavanols may undergo oxidative condensation via either C-O or C-C bond formation
in oxidative polymerization reactions during autoxidation or coupled oxidation. According to
Chung and Wang (1996), tea polyphenols have a high affinity for metals. One of the most
important constituents of tea is tannin, a group of chemicals with large molecular weights and
diverse structures (Chung and Wang, 1996).

EXPERIMENTAL SETUP AND PROCEDURE


Adsorbent

The waste material of Turkish black tea was used as the adsorbent in the experiments. The
samples were collected and stored in clean plastic bags. Fresh tea samples (before use)
were also analyzed as a control to compare the results with tea-waste samples after use at
different intervals. Some tea-waste samples were dried in an oven after collection to measure their
efficiency in the dried condition as well. The tea-waste samples were analyzed in 7 different
states (Runs). Fresh tea samples were treated as Run 1, tea-waste immediately after
use as Run 2, and oven-dried tea-waste after use as Run 3. The
wet tea-waste analyzed after 7, 15, 22 and 30 days was treated as Run 4, Run 5, Run 6 and
Run 7, respectively. Since the raw tea-waste samples were wet when collected, they were used for the
wet tests at the different intervals (7 days).
For the oven-dried tea-waste experiment, deionized (DI) water was used to wash the
samples, which were then dried at 55oC. Later, the dry wastes were ground using a grinder and sieved
thoroughly to make a homogeneous powder, then stored in plastic bags. A stock solution of
10,000 ppm of lead nitrate [Pb(NO3)2] was prepared in the laboratory and was later
diluted with deionized water to prepare solutions over a wide range of initial
concentrations, from 10 ppm to 10,000 ppm.
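The dilution volumes follow from the usual mass-balance relation C1V1 = C2V2. The helper below is a hypothetical sketch of that arithmetic, not part of the study's procedure; the 10,000 ppm stock concentration and 20 ml test volume are from the text.

```python
# Hypothetical sketch of the dilution arithmetic (C1*V1 = C2*V2) used to
# prepare each initial concentration from the 10,000 ppm stock solution.

def stock_volume_ml(c_target_ppm, v_final_ml, c_stock_ppm=10_000):
    """Volume of stock (ml) to dilute to v_final_ml at c_target_ppm."""
    return c_target_ppm * v_final_ml / c_stock_ppm

# e.g. a 20 ml test-tube charge of 100 ppm solution:
print(stock_volume_ml(100, 20))   # 0.2 ml of stock, topped up with DI water
```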

Adsorption Experiment

The batch test, sometimes referred to as the bottle-point technique (Drostie, 1997), was
the method followed for all of the experiments. 20 ml of lead nitrate solution of different
initial concentrations was placed in several test-tubes. In every run, depending on the state of
the adsorbent (fresh tea, wet or dried), 5 gm of sample in that particular condition was placed
in each test-tube. The experiments were carried out at a room temperature of 24°C.
Plastic caps were used to cover the test-tubes, which were then vigorously shaken in a
reciprocating shaker to ensure the samples were well mixed.
The lead solution in each test-tube was measured for its equilibrium concentration after
24 hours, which is assumed to be the period within which equilibrium is attained. The lead solution was
analyzed with a Spectra 55 B Atomic Absorption Spectrophotometer (AAS). Hollow cathode
lamps for lead were used in the atomic absorption spectrophotometer analysis in combination
with an air-acetylene flame. Some runs were duplicated to verify the reproducibility of the
results. Also, some tests were done at varying pH by using buffer solutions
(HCl or NaOH).

Preparation of Sample for Bacterial Analysis

One drop of sample was put on a microscope slide and spread over the surface of the
slide with a glass rod as a nonabsorbent sterile tool. In the analysis of bacterial
concentrations a 1 ml sample was used; 18 drops make 1 ml of sample.

Counting and Identifying Bacteria from Tea-Waste Samples

In order to identify and count bacteria in this study, an Image Analysis System (IAS) was
used. The image analyzer used in this research is a Zeiss 459310. The basic system consists
of a high-resolution video camera (AxioCam) mounted on an optical microscope with high
magnification (1000x), an image processor, a Pentium PC running two software packages
(AxioVision and KS300), a high-resolution image monitor, and a high-resolution text
monitor (Chaalal and Islam, 2001).
The image is visualized with the video camera through a microscope lens. The signal
from the video camera is in analogue form and must be digitized so that the computer can
store the image in the library; therefore, the signal is processed by an analogue-to-digital
converter. The signal then has to be converted back into analogue form for the image
to be displayed on the monitor. In addition, the IAS has facilities for sharpening,
edge detection, threshold functions, transition filters, chord sizing, erosion and dilation. Once
binary images are produced from an accepted microphotograph, a feature count is performed
(Mustafiz, 2002). According to Livingston and Islam (1999), the image analysis system is a
useful counting technique for bacteria: their results show that numbers from the image
analysis system match the microscopic enumeration very closely. The method has several
advantages. With experience, a user can perform all the image enhancements and make an
accurate count as fast as or faster than with standard microbiological techniques, and the
results can be reproduced at any time because the image analysis system stores the images,
which can be manipulated easily and kept indefinitely (Livingston and Islam, 1999).

Calculating the Concentration of Bacteria

In order to count bacteria, 1 drop of sample was placed on the glass slide and covered
with a rectangular glass cover slip with dimensions of 22 × 30 mm. The image was received
from the area covered by the glass cover slip, and the bacteria were then counted from
the slide under the cover slip.
Counting was repeated for n points on the cover slip and the average was used
as the number of bacteria per point.

Na = (N1 + N2 + N3 + … + Nn) / n

where,
Na = Average bacteria count in each point
Nn = Bacteria count in point n (by image analyzer)

n = Number of points in which bacteria have been counted

The average bacteria count per point is then multiplied by the ratio of the area of
the cover slip to the area of the point (circle) to give the total bacteria count under the cover
slip.

Ncs = (Acs/Ap) Na

Where,
Ncs = Number of bacteria under each cover slip
Acs = Area of the cover slip
Ap = Area of each point.

Finally, the total bacteria count under each cover slip is converted to a bacteria count per
milliliter. As stated before, each time only 1 drop of sample is placed under the cover slip.
The pipette used in this study is a Pasteur pipette; with such a pipette, 1 ml of sample
corresponds to 18 drops. Therefore, the total bacteria count under each cover slip is multiplied
by 18 and then rounded to give the bacteria count per milliliter.

Nml = 18 Ncs (1 ml = 18 drops of the microbiology pipette)

where,
Nml = Number of bacteria per ml

Also, when necessary, the total bacteria count in the entire sample can be determined
by knowing the overall volume of the sample:

N = Vs . Nml

where,
N = Total bacteria count in the entire sample
Vs = Total volume of the sample

In order to determine the number of points (n) that correctly represents the total bacteria count
under a cover slip, the following method was used: a 500 ml stock sample with tea-waste was
stirred by a lab shaker for 10 hours, 1 drop of the tea-waste sample was placed under a cover
slip, and 5 successive counts were made with 5, 10, 15, 20 and 25 points, respectively
(n = 5, 10, 15, 20, 25). Then, according to the aforementioned method, the bacteria count in
each case was determined. The results are presented as the number of bacteria per ml of the
examined tea-waste sample.
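The scaling steps above (average per point, area ratio, drops per millilitre) can be sketched as follows. The point counts and the field-of-view area per point are hypothetical illustration values, while the 18 drops/ml conversion and the 22 × 30 mm cover slip come from the text.

```python
# Sketch of the counting arithmetic described above. The point counts and
# the 0.66 mm^2 field of view are illustrative, not data from the study;
# the 18 drops/ml conversion and 22 x 30 mm cover slip are from the text.

def bacteria_per_ml(point_counts, cover_slip_area_mm2, point_area_mm2,
                    drops_per_ml=18):
    n_avg = sum(point_counts) / len(point_counts)             # Na
    n_cover = (cover_slip_area_mm2 / point_area_mm2) * n_avg  # Ncs
    return round(n_cover * drops_per_ml)                      # Nml, rounded

# Five points counted under a 22 x 30 mm cover slip, with a hypothetical
# 0.66 mm^2 field of view per point:
print(bacteria_per_ml([12, 15, 11, 14, 13], 22 * 30, 0.66))   # 234000
```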

RESULTS AND DISCUSSION


Batch Test

The results from the experiments are described in this section according to the state of the
sample (fresh, wet or dry).

Run 1: Fresh Tea before Use


Fresh tea showed an excellent performance in removing lead from contaminated
solutions. The removal efficiency increased as the initial concentration increased: about 96%
of lead was removed from the 10,000 ppm initial lead nitrate solution (Figure 4). The initial
concentration was varied from 10 ppm to 10,000 ppm, and over this range the removal
efficiency was found to be between 79% and 96%. Therefore, compared with other
adsorbents introduced in recent years, such as fish scale (Mustafiz, 2002) and fly ash (Basu,
2005), which have been reported to remove more than 80% of contaminants such as lead and
arsenic, the fresh tea showed better performance.

Langmuir Isotherm
The most important purpose of conducting the batch tests was to develop the adsorption
isotherm graphs. Figure 5 presents the Langmuir isotherm for the fresh tea sample (Run 1).
To draw the graph, the results of the fresh tea sample were analyzed according to the
following equation:

Q = v(Ci-Cf)/m (1)

where Q is the metal uptake (mg metal per gm of adsorbent), v the liquid sample volume
(ml), Ci the initial concentration of the metal in the solution (mg/L), Cf the final (equilibrium)
concentration of the metal in the solution (mg/L) and m the amount of the added adsorbent
(mg).
The importance of this isotherm is that it describes the relation between the amount of
lead nitrate accumulated on the tea-waste and the equilibrium concentration of dissolved lead
nitrate. Typically, the amount of contaminant adsorbed increases with the equilibrium
concentration. When the equilibrium concentration was 5,000 ppm, the amount adsorbed was
calculated to be 188.40 mg/gm; at 10 ppm equilibrium concentration, the amount adsorbed
was 0.316 mg/gm (Figure 5). The trend is therefore readily described by an equation, for
which the coefficient of linear regression (R2) was found to be 0.997.
The Langmuir model can also be expressed as:

Q = Qmax Cf / (1/b + Cf) (2)

where Qmax is the maximum metal uptake under the given conditions, b a constant related to
the affinity between the adsorbent and the adsorbate, and 1/b is known as the Langmuir
adsorption factor (Figure 6).
Equation 2 is often re-expressed in linear form, obtained by plotting 1/Q vs. 1/Cf:

1/Q = 1/Qmax [1/(b.Cf) + 1] (3)

The advantage of such a linear plot is that the adsorptive capacity and the Langmuir
adsorption factor can easily be measured: the intercept of Equation 3 represents the inverse of
the maximum adsorptive capacity (1/Qmax). The data in Figure 6 are best fitted by a straight
line of the form described by Equation 3, and the intercept is found to be favorable, as is
common for well-established adsorbents. The fresh tea sample therefore showed an excellent
adsorptive capacity in the test.
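The uptake calculation of Equation 1 and the linearized Langmuir fit of Equation 3 can be sketched in a few lines. The snippet below is a hypothetical illustration with synthetic concentration data (not the chapter's measurements); it recovers Qmax from the intercept and b from intercept/slope, using volumes in litres and adsorbent mass in grams so that Q comes out in mg/gm.

```python
# Sketch of the Langmuir analysis: Eq. (1) turns measured concentrations
# into uptake Q, then Eq. (3) is fitted by least squares on (1/Cf, 1/Q)
# to recover Qmax (from the intercept) and b (intercept/slope). The
# concentration pairs below are synthetic, not the chapter's data.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def langmuir_fit(ci, cf, v_litre=0.020, m_gram=5.0):
    """Fit Qmax and b; v in litres and m in grams give Q in mg/gm."""
    q = [v_litre * (c0 - ce) / m_gram for c0, ce in zip(ci, cf)]  # Eq. (1)
    slope, intercept = fit_line([1 / ce for ce in cf], [1 / qi for qi in q])
    q_max = 1 / intercept        # Eq. (3): intercept = 1/Qmax
    b = intercept / slope        # slope = 1/(Qmax * b)
    return q_max, b

# Synthetic batch data lying exactly on a Langmuir curve (Qmax=400, b=0.01):
q_max, b = langmuir_fit([20025, 50100, 75300], [25, 100, 300])
print(round(q_max, 3), round(b, 5))   # 400.0 0.01
```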

Figure 4. Removal of Lead (%) in Run 1 [plot: lead removal (%) vs. initial concentration, 0-10,000 ppm]

Figure 5. Langmuir isotherm for Run 1 [plot: uptake q vs. equilibrium concentration Ce]



Figure 6. Linearized form of Langmuir isotherm for Run 1 [plot: 1/q vs. 1/Ce; fitted line y = 6.6485x + 0.0196, R2 = 0.9975]

Figure 7. Linearized form of Freundlich isotherm for Run 1 [plot: Log q vs. Log Ce; fitted line y = 0.7436x + 0.9265, R2 = 0.9544]

Freundlich Isotherm
Another commonly used adsorption isotherm is the Freundlich isotherm (Freundlich,
1926), which is expressed by the following empirical equation:

Q = KF Ce^(1/n) (4)

where,
Q = uptake of metal (mg/gm),
Ce = equilibrium concentration (mg/L)
KF = a Freundlich constant; indicator of adsorption capacity

KF and n are the Freundlich constants that correlate to the maximum adsorption capacity
and adsorption intensity, respectively (Hussein et al., 2004). Equation 4 was linearized to
determine the Freundlich parameters by plotting Log Q against Log Ce, as shown in
Figure 7. In linear form, equation 4 appears as:

Log Q = Log KF + 1/n Log Ce (5)

In Figure 7, a best-fit straight line is drawn. The inverse of the slope of the straight line
gives the intensity of adsorption, n = 1.3448, and the Freundlich constant KF is found to be
significant. Since the value of n is larger than 1, the adsorption appears to be favorable;
together with the high value of KF, this indicates that the adsorption process is very
favorable.
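The Freundlich fit of Equation 5 is a straight line in log-log space, so a least-squares fit of Log Q against Log Ce yields KF from the intercept and n from the inverse slope. The snippet below is a hypothetical sketch with synthetic (Ce, Q) pairs, not the chapter's data.

```python
import math

# Sketch of the Freundlich fit: Eq. (5), log Q = log KF + (1/n) log Ce,
# is linear in log-log space, so least squares on (log Ce, log Q) gives
# KF (from the intercept) and n (inverse slope). Synthetic data below.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def freundlich_fit(ce, q):
    """Return (KF, n) from equilibrium concentrations and uptakes."""
    slope, intercept = fit_line([math.log10(c) for c in ce],
                                [math.log10(v) for v in q])
    return 10 ** intercept, 1 / slope    # KF, n

# Data generated from Q = 10 * Ce**0.5, i.e. KF = 10 and n = 2:
kf, n = freundlich_fit([1, 100, 10000], [10, 100, 1000])
print(round(kf, 3), round(n, 3))   # 10.0 2.0
```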

Run 2: Wet Tea-Waste Sample Immediately after Use


Figure 8 shows the effect of wet tea-waste in removing the lead contaminant. About 92%
of lead was removed at an initial lead nitrate concentration of 10,000 ppm. The removal
efficiency increased as the initial concentration increased, except in a few instances such as
the initial concentrations of 40 and 200 ppm (Figure 8). The removal efficiency varied
between 57 and 92%. The fresh tea (Run 1) showed better performance than the wet tea-waste
samples. In the Langmuir isotherm graph (Figure 9), the wet-waste samples also showed good
performance in removing heavy metals: at an initial lead nitrate concentration of 10,000 ppm,
the maximum adsorptive capacity was 370 mg/gm.
This value is very significant. With such a capacity, wet tea-waste can be considered an
important and effective adsorbent when compared with some well-known, proven adsorbents:
for example, granular activated carbon (30 mg/gm), modified peat (90-130 mg/gm), zeolite
(154.4 mg/gm) and fish scale (80 mg/gm) are reported to work effectively in removing the
lead cation (Basu, 2005). In Figure 11, a best-fit straight line is drawn. The inverse of its slope
gives the intensity of adsorption, n = 1.2919, and the Freundlich constant KF is found to be
favorable. Since the value of n is larger than 1, the adsorption also appears to be favorable for
the wet tea-waste samples (immediately after use); together with the high value of KF, this
indicates that the adsorption process is very favorable.

Figure 8. Lead Removal (%) for Run 2 [plot: lead removal (%) vs. initial concentration, 0-10,000 ppm]



Figure 9. Langmuir isotherm for Run 2 [plot: uptake q vs. equilibrium concentration Ce]

Figure 10. Linearly rearranged Langmuir isotherm for Run 2 [plot: 1/q vs. 1/Ce; fitted line y = 17.934x - 0.0948, R2 = 0.974]

Figure 11. Linearized form of Freundlich isotherm for Run 2 [plot: Log q vs. Log Ce; fitted line y = 0.7074x + 1.2013, R2 = 0.9787]



Run 3: Dry Tea-Waste Samples Immediately after Use


In Run 3 the oven-dried tea-waste samples showed that the effect of dry tea-waste in
removing the lead contaminant is significant, with a better performance than that of the wet
tea-waste samples. The removal efficiency increased as the initial concentration of the lead
nitrate solutions increased (Figure 12), varying between 68 and 94%; about 94% of lead was
removed at 10,000 ppm. Similar to Figure 9, the Langmuir isotherm was also drawn for the
dried sample, as displayed in Figure 13. A comparison between Figure 13 and Figure 9
demonstrates that the dried sample adsorbs slightly better than the wet samples. To measure
the adsorptive capacity of the dried sample, a linearized isotherm was plotted (Figure 14).

Figure 12. Lead removal (%) for Run 3 [plot: lead removal (%) vs. initial concentration, 0-10,000 ppm]

Figure 13. Langmuir isotherm for Run 3 [plot: uptake q vs. equilibrium concentration Ce]

Results show that the adsorptive capacity is 376.4 mg/gm, based on the intercept in
Figure 14. The graph also gives the value of 1/b, often described as the Langmuir adsorption
factor, which is found to be significant. With this capacity, the dry tea-waste can be
considered as important and effective an adsorbent as some well-known, proven adsorbents.
Several trials to best-fit the equation, considering the intercept and R2 values, showed an
excellent performance in removing lead. Figure 15 was used to estimate the adsorption
intensity and the Freundlich adsorption factor. The value of n was 1.384; since n is higher
than 1.0, the adsorption process can be described as favorable. Moreover, compared with the
respective values for the wet sample (immediately after use), the oven-dried sample was
indeed found to be more effective.

Figure 14. Linearized form of Langmuir isotherm for Run 3 [plot: 1/q vs. 1/Ce; fitted line y = 10.705x - 0.0185, R2 = 0.971]

Figure 15. Linearized form of Freundlich isotherm for Run 3 [plot: Log q vs. Log Ce; fitted line y = 0.7222x + 1.0473, R2 = 0.9734]

Run 4: Wet Samples after 7 Days


Figure 16 shows the effect of wet tea-waste samples (7 days after collection) in removing
the lead contaminant. The results showed a slightly better performance than that of the wet
tea-waste samples analyzed right after collection. The same trend in removal efficiency was
observed in Run 4: the efficiency increased as the initial concentration of the lead nitrate
solutions increased, varying between 61 and 93%. About 93% of lead was removed at
10,000 ppm (Figure 16). In the Langmuir isotherm graph (Figure 17), the wet tea-waste
samples (after 7 days) also showed good performance in removing heavy metals, with the
maximum adsorptive capacity (373.18 mg/gm) observed at an initial concentration of
10,000 ppm. The linearly rearranged Langmuir isotherm for lead on these samples was
plotted (Figure 18), and the result showed a favorable condition in the removal process. From
the Freundlich isotherm calculation (Figure 19), the intensity of adsorption is n = 1.387, and
the Freundlich constant KF is found to be favorable. Since the value of n is larger than 1, the
adsorption also appears to be favorable for the wet tea-waste samples 7 days after collection;
together with the high value of KF (>1), this indicates that the adsorption process is favorable.

Figure 16. Lead removal (%) for Run 4 [plot: lead removal (%) vs. initial concentration, 0-10,000 ppm]

Figure 17. Langmuir isotherm for Run 4 [plot: uptake q vs. equilibrium concentration Ce]

Figure 18. Linearly rearranged Langmuir isotherm for Run 4 [plot: 1/q vs. 1/Ce; fitted line y = 14.065x - 0.0933, R2 = 0.9401]


146 M. Y. Mehedi and H. Mann

[Plot: Log q versus Log Ce; fitted line y = 0.7208x + 1.1045, R2 = 0.9734]

Figure 19. Linearized form of Freundlich isotherm for Run 4

Run 5: Wet Samples after 15 Days


As in the earlier runs, the removal efficiency increased with the initial concentration, and the amount of lead adsorbed increased with the equilibrium concentration; at 10,000 ppm initial concentration the amount adsorbed was 200 mg/g. About 95% of the lead was removed at 10,000 ppm initial concentration (Figure 20). The adsorption capacity was maximum (374 mg/g) at 10,000 ppm and minimum (0.28 mg/g) at 10 ppm. The coefficient of linear regression (R2) was found to be 0.949. The linearized form of the Langmuir isotherm was also plotted (Figure 22) to estimate the intercept; the result showed a favorable intercept value, and the R2 value (0.949) is also significant.
Figure 23 was plotted to estimate the adsorption intensity and the Freundlich constant. The adsorption intensity n was found to be 1.413, higher than 1, and the value of KF was also favorable. Since n is greater than 1, the adsorption process appears to be favorable, and from the graph we can conclude that the adsorption is effective.
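The Freundlich parameters n and KF quoted for each run can be recovered from a least-squares line through the points (log Ce, log q): the linearized form is log q = log KF + (1/n) log Ce, so the slope gives 1/n and the intercept gives log KF. A sketch with synthetic points (the arrays below are illustrative, not the study's measurements):

```python
import math

# Fit the linearized Freundlich isotherm  log q = log KF + (1/n) log Ce.
# Slope = 1/n, intercept = log10(KF). The data points are synthetic,
# generated from KF = 11.0 and n = 1.4 (q = KF * Ce**(1/n)).

def fit_freundlich(ce, q):
    x = [math.log10(c) for c in ce]
    y = [math.log10(v) for v in q]
    n_pts = len(x)
    mx = sum(x) / n_pts
    my = sum(y) / n_pts
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return 1.0 / slope, 10.0 ** intercept   # (n, KF)

ce = [0.5, 2.0, 10.0, 60.0, 300.0, 600.0]
q = [11.0 * c ** (1.0 / 1.4) for c in ce]
n, kf = fit_freundlich(ce, q)
print(round(n, 3), round(kf, 3))  # 1.4 11.0
```

Because the synthetic points lie exactly on the isotherm, the fit recovers the generating parameters; with measured data the slope and intercept would carry the scatter reflected in the reported R2 values.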

[Plot: lead removal (%) versus initial concentration (ppm)]

Figure 20. Removal efficiency (%) for Run 5



[Plot: q versus Ce]

Figure 21. Langmuir isotherm Model for Run 5

[Plot: 1/q versus 1/Ce; fitted line y = 11.712x - 0.0706, R2 = 0.9494]

Figure 22. Linearized form of Langmuir isotherm for Run 5

[Plot: Log q versus Log Ce; fitted line y = 0.7077x + 1.0551, R2 = 0.9628]

Figure 23. Linearized form of Freundlich isotherm Model for Run 5

Run 6: Wet Tea-Waste Samples after 22 Days


After 22 days the removal efficiency increased further, ranging between 72 and 95%; 95% of the lead was removed at an initial concentration of 10,000 ppm. Except for the 200 ppm initial concentration, the removal efficiency increased as the concentration increased (Figure 24). The maximum adsorption capacity was 382.31 mg/g (Figure 25), which is considered very effective, and the adsorption capacity increased with the initial concentration (Figure 25). The linearized form of the Langmuir isotherm was plotted (Figure 26) to calculate R2 and the intercept; from the graph, R2 was found to be 0.9502, a value suitable for effective adsorption. The samples analyzed after 22 days also showed good results with the Freundlich isotherm model: the value of n is 1.354, the KF value is likewise significant, and R2 is 0.9599. Considering these values, the adsorption process is favorable (Figure 27).
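The linearized Langmuir plots used throughout these runs (Figures 18, 22 and 26) all have the form 1/q = 1/qmax + [1/(qmax b)](1/Ce), so the intercept of a straight-line fit through (1/Ce, 1/q) gives 1/qmax and the slope gives 1/(qmax b). A sketch with synthetic points, under the same caveat that none of the numbers are from the study:

```python
# Linearized Langmuir isotherm:  1/q = 1/qmax + (1/(qmax*b)) * (1/Ce).
# Intercept = 1/qmax, slope = 1/(qmax*b). The data points are synthetic,
# generated from qmax = 400 mg/g and b = 0.02 L/mg (hypothetical values).

def fit_langmuir(ce, q):
    x = [1.0 / c for c in ce]
    y = [1.0 / v for v in q]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    q_max = 1.0 / intercept
    b = intercept / slope
    return q_max, b

q_max_true, b_true = 400.0, 0.02
ce = [5.0, 20.0, 80.0, 200.0, 500.0]
q = [q_max_true * b_true * c / (1.0 + b_true * c) for c in ce]
q_max, b = fit_langmuir(ce, q)
print(round(q_max, 2), round(b, 4))  # 400.0 0.02
```

The intercept alone fixes qmax, which is why the chapter reads the "favorable intercept value" directly off these plots.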

[Plot: lead removal (%) versus initial concentration (ppm)]

Figure 24. Lead removal (%) for Run 6

[Plot: q versus Ce]

Figure 25. Langmuir isotherm for Run 6



[Plot: 1/q versus 1/Ce; fitted line y = 8.5x - 0.0503, R2 = 0.9502]

Figure 26. Linearized form of Langmuir isotherm for Run 6

[Plot: Log q versus Log Ce; fitted line y = 0.7382x + 0.945, R2 = 0.9599]

Figure 27. Linearized form of Freundlich isotherm for Run 6

Run 7: Wet Tea-Waste Samples after 30 Days


As the concentration of bacteria increased over 30 days, the removal efficiency also increased, varying between 78 and 97%. The highest removal efficiency again occurred at an initial concentration of 10,000 ppm and, as in the other runs, the removal efficiency increased as the initial concentration increased (Figure 28). Figure 29 shows the Langmuir isotherm model for the wet samples after 30 days; the result shows excellent performance, with the adsorption capacity varying between 0.3124 and 387.5 mg/g and the highest capacity occurring at 10,000 ppm. Figure 30 shows the linearized form of the Langmuir isotherm, for which the R2 value is 0.948; the significant intercept value indicates that the adsorption efficiency is favorable (Figure 30). As for the other runs, the linearized form of the Freundlich isotherm (Figure 31) was used to calculate the adsorption intensity and the Freundlich constant; the value of n (1.359) is greater than 1 (Figure 31).

[Plot: lead removal (%) versus initial concentration (ppm)]

Figure 28. Removal of lead (%) for Run 7

[Plot: q versus Ce]

Figure 29. Langmuir isotherm model for Run 7

[Plot: 1/q versus 1/Ce; fitted line y = 6.1624x - 0.0151, R2 = 0.948]

Figure 30. Linearized form of Langmuir isotherm for Run 7



[Plot: Log q versus Log Ce; fitted line y = 0.7368x + 0.8622, R2 = 0.9445]

Figure 31. Linearized form of Freundlich isotherm for Run 7

Effect of pH

To observe the effect of pH, the adsorption of lead on tea-waste was measured at different pH values. The mixture was adjusted to the desired pH using 0.5 M sodium hydroxide (NaOH) or 0.5 M hydrochloric acid (HCl), depending on whether a basic or acidic solution was required. The following pH values prevailed during the experiment: acidic environment, 3.7; basic environment, 10.8; neutral environment, 6.44. Only one set, with an initial concentration of 30 ppm, was investigated in this research.
Figure 32 shows the results at the different pH conditions. A basic environment worked better than the neutral environment in removing lead cations, while acidity reduced the performance of the tea-waste in adsorbing the cations. However, for the initial concentration of 30 ppm, the differences in the results were not significant.
[Plot: equilibrium concentration Ce (ppm) for normal, acidic and basic environments]

Figure 32. Effect of pH on lead adsorption on wet tea-waste



Microbial Influence

Types of Microorganisms
Mostly cocci-shaped bacteria were observed in the study. In some cases clusters of bacteria were seen (Figure 33c), and chains of joined bacteria were occasionally recorded. Rod-shaped bacteria were found adhered to the tea leaves (Figure 33). In a few runs, hyphae, molds and yeasts could also be observed, including some common molds such as Alternaria, Cunninghamella, Helminthosporium, Paecilomyces, Aspergillus, Fusarium, Penicillium and Syncephalastrum. In a few analyses, diploid cells and protozoa (Heteronema) were found. Rods and bacilli were common in most samples; some bacilli were curved into a form resembling a comma (Figure 33c).

(a) Filamentous bacteria (b) Rods and cocci

(c) Cocci, filamentous and rods (d) Yeast, filamentous bacteria

Figure 33. (Continued)



(e) Rod bacteria (f) Rods and filamentous bacteria

Figure 33. Microphotographs of bacterial colonies (1000x magnification)

Yeast cells: Some cells, larger than bacteria, were observed during the experiment. They are single cells, sometimes growing in the shape of filaments (Figure 33d).
Molds: Woolly, branching, hair-like filaments called hyphae were observed in most of the samples; hyphae on the surface of the tea-waste indicate that mold is present. Compared with bacteria, molds are relatively large. The hyphae of a mold colony grow as an intertwined mass of filaments collectively called a mycelium (Figure 33f).

Cell Count At Different Intervals


Concentrations of microorganisms increased over the period of the experiment (Figure 34); throughout the investigation, the bacterial concentration increased with time.
[Plot: concentration of bacteria (bacterial colonies/ml), ranging up to about 14,000, for Samples 1 to 7]

Figure 34. Bacterial concentration in wet tea-waste samples.



Concentration of Bacteria and its Influence in Lead Removal


The Turkish black tea-waste samples were analyzed simultaneously to measure their efficiency in removing lead from contaminated water and to identify the additional influence of bacteria, alongside the tea-waste itself, in removing lead from contaminated solutions. The results showed that the adsorption efficiency increased over time as the microbial concentration increased. The different species of microorganisms adhering to the surface of the tea leaves show that tea-waste is indeed a suitable surface for bacterial growth, and that this growth significantly enhances the adsorption efficiency in removing heavy metals from polluted water (Figure 35).

[Plot: lead removal (%), ranging from about 88 to 98%, for Runs 1 to 7]

Figure 35. Lead removal efficiency (%) in different runs

CONCLUSION
From the above results and discussion, we can conclude that Turkish black tea-waste is an effective adsorbent with potential application in treating industrial waste water, discharged from land-based and other sources, for the removal of lead ions. The adsorption of lead ions on tea-waste in different conditions (fresh tea, dried tea-waste and wet tea-waste) showed different removal rates, depending on the initial concentration of the lead nitrate solution and the presence of bacteria. Dried tea-waste removed lead ions more efficiently than freshly collected wet tea-waste. The removal efficiency of wet tea-waste samples increased over time: after 30 days, 97% of the lead was removed, coinciding with the increased microbial concentration. The microbial concentration grew over time, indicating that microbial activity influences the removal process; the tea leaves provided an essential surface for bacterial growth, and the bacteria played an additional role in the removal. The results also showed a small influence of pH on the adsorption process. Furthermore, considering the low cost and high adsorptive capacity of tea-waste, this natural adsorbent should be considered for treating industrial waste water containing lead and other heavy metals, as a step towards sustainable management of the aquatic environment.

ACKNOWLEDGEMENTS
The authors would like to thank the Atlantic Canada Opportunities Agency (ACOA) for
funding this project under the Atlantic Innovation Fund (AIF).

REFERENCES
Ajmal, M., Khan, A.H., Ahmad, S. and Ahmad, A. 1998. Role of sawdust in the removal of copper (II) from industrial wastes. Water Research 32: 3085-3091.
Ajmal, M., Mohammad, A., Yousuf, R. and Ahmed, A. 1998. Adsorption behavior of cadmium, zinc, nickel and lead from aqueous solution by Mangifera indica seed shell. Indian Journal of Environmental Health 40: 15-26.
Basu, A. 2005. Experimental and Numerical Studies of a Novel Technique for Abatement of Toxic Metals from Aqueous Streams. Ph.D. Thesis, Faculty of Engineering, Dalhousie University. 304 pp.
CAPP, 2001. Canadian Association of Petroleum Producers. Technical Report: Produced Water Waste Management. August.
Chaalal, O. and Islam, M.R. 2001. Integrated management of radioactive strontium contamination in aqueous stream systems. Journal of Environmental Management 61: 51-59.
Chaalal, O., Zekri, A. and Islam, M.R. 2005. Uptake of heavy metals by microorganisms: an experimental approach. Energy Sources 27(1-2): 87-100.
Chung, S.Y. and Wang, Z.Y. 1996. The chemistry of tea. Tea and Cancer. "The Tea Man". Available at http://www.teatalk.com/science/chemistry.htm (accessed April 30, 2007).
Droste, R.L. 1997. Theory and Practice of Water and Waste Water Treatment. John Wiley & Sons, USA. 816 pp.
Freundlich, H. 1926. Colloid and Capillary Chemistry. Methuen & Co., London. 884 pp.
Gazso, L.G. 2001. The key microbial processes in the removal of toxic metals and radionuclides from the environment. CEJOEM 7(3-4): 178-185.
Hussein, H., Ibrahim, S.F., Kandeel, K. and Moawad, H. 2004. Biosorption of heavy metals from waste water using Pseudomonas sp. Environmental Biotechnology 7(1). Available at http://www.ejbiotechnology.info/content/vol7/issue1/full/2/index.html (accessed April 30, 2007).
Khan, M.I. and Islam, M.R. 2005a. Environmental modeling of oil discharges from produced water in the marine environment. Canadian Society of Civil Engineers 2005, Toronto, Canada, June 2-4, 2005.
Khan, M.I. and Islam, M.R. 2006b. Technological analysis and quantitative assessment of oil and gas development on the Scotian Shelf, Canada. International Journal of Risk Assessment and Management: in press.
Kim, D.S. 2003. The removal by crab shell of mixed heavy metal ions in aqueous solution. Bioresource Technology 87(3): 355-357.
Livingston, R.J. and Islam, M.R. 1999. Laboratory modeling, field study, and numerical simulation of bioremediation of petroleum contaminants. Energy Sources 21: 113-129.
Mihova, S. and Godievargova, T. 2000. Biosorption of heavy metals from aqueous solutions. Journal of International Research Publications, Issue 1, 2000/1. Available at http://www.ejournalnet.com/Contents/Issue_1/6/6_2001.htm (accessed April 30, 2007).
Mustafiz, S. 2002. A Novel Method for Heavy Metal Removal from Aqueous Streams. M.A.Sc. Thesis, Faculty of Engineering, Dalhousie University. 150 pp.
Mustafiz, S., Rahman, M.S., Kelly, D., Tango, M. and Islam, M.R. 2003. The application of fish scales in removing heavy metals from energy-produced waste streams: the role of microbes. Energy Sources 24: 905-916.
UPASI, 2003. Technical Report, UPASI Tea Research Foundation, Niar Dam BPO, Valparai, India. Available at http://www.upasitearesearch.org (accessed August 26, 2006).
Zhang, G., Dong, H., Xu, Z., Zhao, D. and Zhang, C. 2005. Microbial diversity in ultra-high-pressure rocks and fluids from the Chinese Continental Scientific Drilling Project in China. Applied and Environmental Microbiology 71(6): 3213-3227.
In: Perspectives on Sustainable Technology ISBN: 978-1-60456-069-5
Editor: M. Rafiqul Islam, pp. 157-175 © 2008 Nova Science Publishers, Inc.

Chapter 6

MULTIPLE SOLUTIONS IN NATURAL PHENOMENA

S. H. Mousavizadegan, S. Mustafiz∗ and M. R. Islam


Department of Civil and Resource Engineering,
Dalhousie University, Canada

ABSTRACT
Nature is nonlinear and all natural phenomena are multidimensional. The parameters involved in a natural phenomenon are not independent of each other, and the variation of any one of them causes the others to be affected. Natural phenomena are chaotic, not in the conventional sense of being arbitrary and/or unpredictable, but in the sense that they always produce multiple solutions and show no reproducibility. This is because time is a space, and time is also irreversible. At present, we are unaware of the equations that truly govern natural phenomena, and also of the procedures for obtaining multiple solutions. Often several key simplifications are made to eliminate nonlinearities and find a numerical description of a natural phenomenon, which poses the problem of an inherently wrong formulation. In this paper, several polynomials and simultaneous equations of two variables are applied as models of a natural phenomenon in which the other parameters are kept constant. It is shown that they produce multiple solutions, even if some are not realistic within current knowledge. The number of solutions depends on the degree of nonlinearity of the equation. From the study it can be inferred that a phenomenon with only two variables produces more than one solution; therefore, a multi-variable phenomenon surely has multiple solutions.

Keywords: Nonlinearity, polynomials, knowledge dimension, imaginary roots, Adomian decomposition method.


∗ Corresponding author: Tel.: +1-902-494-3217; Fax: +1-902-494-6526. Email address: mustafiz@dal.ca (S. Mustafiz)
158 S. H. Mousavizadegan, S. Mustafiz and M. R. Islam

INTRODUCTION
All mathematical models of real physical problems are nonlinear. The nonlinearity is related to the interaction and inclusion of the different parameters involved in a physical problem. Because the mathematical tools available today cannot solve nonlinear problems beyond trivial ones, it has been common practice to linearize problems in order to render them 'solvable'. In this way, a problem is forced to have a single solution, ignoring the possibility of multiple solutions. In addition, not all of the available methods are capable of predicting multiple solutions; in fact, until now, systematic methods for determining multiple solutions have been limited to three variables.
The general development of the set of governing equations always proceeds the same way for any material. A set of conservation laws is applied, usually in integral form, to a finite mass of material. Typical 'laws' express the conservation of mass, momentum and energy. It is asserted that the 'laws' are true, and the problem becomes that of solving the constitutive relationships of the 'laws'. These equations are then converted to a local form and cast as partial differential equations. These differential equations cannot be solved in a general way for the details of the material motion, so, in order to close the system, the next step is to specify the material response; the mathematical conditions used for this are usually referred to as the constitutive relationships. The last step is to combine these constitutive relations with the local form of the balance equations. The combination of these two sets of relationships is called the field equations: the differential equations governing the material of interest.
We are unaware of any mathematical model that truly simulates a natural phenomenon; the available models are based on several assumptions. For example, there are many models that describe different fluid flows. The most general equations in fluid mechanics are the Navier-Stokes equations, and the assumptions in their derivation are:

• The fluid is a continuum; that is, we deal with continuous matter.
• The fields of interest, such as pressure, velocity, density and temperature, are piecewise continuous functions of space and time.
• The fluid is Newtonian; a further, and very strong, restriction is a linear stress-rate of strain relationship.

Note that none of the above assumptions can be remediated by invoking a nonlinear form of an equation. For instance, nonlinear power-law equations do not undo the information embedded in Newtonian fluid equations. In the above, the term 'continuous' means there should be no boundary. Even quarks are not continuous; in fact, unless the size of the constitutive particles is zero, there cannot be any continuity, and for any variable to be continuous in space this requirement of zero size must apply. For a variable to be continuous in time, the notion of 'piecewise' is absurd: both the space and time domains are continuous and must extend to infinity for 'conservation of mass' to hold true. There is not a single linear object in nature, let alone a linear relationship. In reality, there is not a single Newtonian fluid. The assumption of a linear stress-rate of strain relationship is as aphenomenal (Zatzman and Islam, 2006; Khan and Islam, 2006) as the steady-state assumption, in which the time dimension is eliminated.
Multiple Solutions in Natural Phenomena 159

A general model that completely explains fluid motion and describes the nonlinearity due to turbulence and the chaotic motion of a fluid flow has not yet been developed. The solution for a turbulent flow is usually obtained from the Navier-Stokes equations, which were not developed for such a flow.
The numerical description also rests on simplification and linearization during the solution process. After the first linearization of the process itself, by imposing 'laws' to forcibly describe natural phenomena, further linearization is introduced in the solution schemes (Mustafiz et al., 2006). All analytical methods impose linearization by dropping nonlinear terms, most often by neglecting terms or by imposing a fictitious boundary condition. Numerical techniques, on the other hand, impose linearization through discretization (Taylor series expansion), followed by the solution of a linear matrix.
The existence of multiple solutions can be found in numerous problems. The occurrence
of multiple solutions in solving the TSD-Euler equation was examined by Nixon (1989) and it
was found that such solutions exist for a small range of Mach numbers and airfoil thicknesses.
Nixon (1989) also found that a vorticity flux on the airfoil surface can enhance the
appearance of multiple solutions.
We also observe the presence of multiple solutions, which depend on the pathway, in material processing operations. The existence of multiple roots in isothermal ternary alloys was discovered by Coates and Kirkaldy (1971) and was further explored by Maugis et al. (1996). Coriell et al. (1998) continued the investigation of one-dimensional similarity solutions
during solidification/melting of a binary alloy. Their study, to some extent, was analogous to
the isothermal ternary system, except that the phases were then solid and liquid and
temperature played the role of one of the components of the ternary. The diffusivity equation
was used to express the variation of temperature and concentration of fluid and solid in time
and space. The equation was transferred to an ordinary differential equation using the
similarity technique and the existence of multiple similarity solutions for the
solidification/melting problem was noticed. These results corresponded to significantly
different temperature and composition profiles. Recently, a computational procedure to find
the multiple solutions of convective heat transfer was proposed by Mishra and DebRoy
(2005). In this approach, the conventional method of numerical solution was combined with a real-number genetic algorithm (GA). This allowed the researchers to find a population of solutions and to search for and obtain multiple sets of input variables, all of which gave the desired specific output.
The existence of multiple solutions was investigated in separation technology using
membrane separators by Tiscareno-Lechuga (1999). The author discussed conditions of the
occurrence of multiple solutions when the mole fraction of a component with intermediate
permeability was specified as a design variable. When the pressure in the permeate chamber was significantly lower than that of the retentate, the conditions became simpler and could be expressed through equations involving only the composition of the feed and the permeability of the membrane.
been recognized by Islam and Nandakumar (1986) and later expanded by Islam and
Nandakumar (1990), Islam et al. (1992), and others.
We take into account some bivariate polynomials of different degrees as token models for a natural phenomenon, assuming that the other contributing parameters of the model are constant. The number of solutions depends on the degree of nonlinearity of the polynomial. The solutions are obtained using Newton's method and presented in graphical form for a limited region.
Some nonlinear simultaneous equations are also considered, and their solutions are obtained with the Newton and Adomian decomposition methods (ADM). Our objective is to show that conventional techniques do not generate multiple solutions; for instance, ADM, a very powerful method for the solution of nonlinear equations, cannot produce multiple solutions. We also propose a new scheme to show the feasibility of generating multiple solutions.
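The pathway dependence of conventional techniques can be made concrete with Newton's method on a small nonlinear system. The system below (x^2 + y^2 = 4 together with xy = 1) is an illustrative stand-in, not one of the systems studied in this chapter; starting the iteration from different initial guesses lands on different members of the solution population:

```python
# Newton's method for a 2x2 nonlinear system (illustrative, not one of
# the chapter's systems):
#   f1(x, y) = x^2 + y^2 - 4 = 0
#   f2(x, y) = x*y - 1     = 0
# Different initial guesses converge to different solutions.

def newton(x, y, tol=1e-12, max_iter=60):
    for _ in range(max_iter):
        f1 = x * x + y * y - 4.0
        f2 = x * y - 1.0
        if abs(f1) < tol and abs(f2) < tol:
            break
        # Jacobian [[2x, 2y], [y, x]]; solve J * step = -f by Cramer's rule
        det = 2.0 * x * x - 2.0 * y * y
        dx = (x * f1 - 2.0 * y * f2) / det
        dy = (2.0 * x * f2 - y * f1) / det
        x, y = x - dx, y - dy
    return x, y

print(newton(2.0, 0.5))    # near ( 1.93185,  0.51764)
print(newton(0.5, 2.0))    # near ( 0.51764,  1.93185)
print(newton(-2.0, -0.5))  # near (-1.93185, -0.51764)
```

Each run satisfies both equations, yet the "answer" the method reports depends entirely on where the iteration starts, which is the sense in which a single numerical solution hides the population of solutions.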

KNOWLEDGE DIMENSION
Dimensions underpin both the existence and our imagination of the universe. A dimension may be defined as one of the elements or factors making up a complete personality or entity. Dimensions are unique (each has properties that distinguish it from the others), codependent (dimensions depend equally on each other for their existence) and transcendent (dimensions have the ability to extend or lie beyond what would otherwise be possible).
Knowledge is synonymous with truth and reflects information about the properties that exist in objects, events or facts. Knowledge covers physical properties (which are observable and measurable), dates, history, theories, opinions, etc. It does not have time, space, mass or energy. Knowledge is a dimension of phenomena, and it may be possible to measure it in bits of information. Information can lead to increasing knowledge if proper science (the science of nature) is used.
Knowledge can be obtained through the physical and/or mathematical simulation of a phenomenon. Physical simulation is carried out by geometrical, kinematical and dynamical scaling of a problem, up or down. In many cases, a complete physical simulation is not possible, and the experimental results therefore rest on several assumptions. Mathematical simulation is achieved by finding the governing equation and the internal relationships between the parameters involved. Because any phenomenon is affected by a number of factors, any attempt to find the truth relies greatly on how closely these factors are addressed. The description of any physical phenomenon involves a number of assumptions, and as we reduce the number of assumptions, we come closer to the truth.
Multi-dimensionality is another aspect of the knowledge dimension. As time passes, knowledge increases if the pathway of nature is followed. This process is one-dimensional, as knowledge cannot be retracted and cannot regress. The next step is consciousness, which is knowledge of the knowledge; it may be considered the second dimension of the knowledge dimension. Each dimension is naturally independent and may therefore admit independent knowledge. This indicates that the knowledge dimension has no limit, beginning or end. The multi-dimensionality of the knowledge dimension also indicates that there may be a range of solutions for a specific phenomenon, because different factors contribute to that phenomenon.
If we consider a pure material and plot the curve for its melting or freezing at a given pressure, the temperature remains constant during the freezing or melting process. However, there is no pure substance; in fact, when we refer to 100% purity, we refer to the detection limit of the analytical method used. If an isomorphous alloy consisting of an arbitrary composition of components A and B is considered, freezing or melting takes place over a range of temperatures that depends on the composition of the alloy and the pressure, as shown in Figure 1. Therefore, we are dealing with a range of temperatures instead of a constant temperature. Another interesting point is that during freezing and melting the concentrations of the equilibrium liquid and solid phases change, varying over a certain range that depends on the final concentration of the liquid or solid state. This is more pronounced for alloys of more components.
[Phase diagram: temperature versus weight percent of component B]
Figure 1. The phase diagram for an isomorphous alloy.

There should be a population of solutions for a problem related to a natural phenomenon, depending on the number and behavior of the parameters involved. In many situations, the variation of the parameters involved in a natural phenomenon is approximated with a polynomial function (Bjorndalen, 2002; Bjorndalen et al., 2005; Mustafiz et al., 2006), because we do not know the governing equation for that phenomenon. The number of solutions of such a function depends on its nonlinearity, and some of the roots are not real. All of the roots of such a polynomial indicate that we should expect different solutions, regardless of their physical significance. This also indicates that if we can represent the natural data as a polynomial function at any given time, we can determine the roots, all of which should be considered when reporting some natural phenomena. Thus the roots, including the imaginary ones, are considered as the multiple solutions.
We take into account several bivariate polynomials and solve them for all possible roots: a third-degree, two fourth-degree and a fifth-degree polynomial in the two variables x and y.

Example 1

The first polynomial is a third degree bivariate polynomial.

4y^3 − 3y^2 − 2y + 2 = −5x^3 + 4x^2 + 3x + 1      (1)

The solutions of this third degree polynomial are shown in Figures 2 to 4. The real roots of the bivariate polynomial are depicted in Figure 2; the polynomial gives three real roots in a limited range of x and y. In general, the polynomial has three complex roots at each constant real value of x or y. These roots cannot be shown in a single graph: they are depicted in Figure 3 for fixed real values of y, and in Figure 4 for fixed real values of x.

Figure 2. The graph of a third degree polynomial.

This figure indicates that even with such a simple nonlinear problem we are dealing with a population of solutions, some of which may not be tangible. That does not mean, however, that the intangible solutions are not natural. A complex number consists of a real part and an imaginary part, and for a long time it was assumed that only the real part describes the real world. The later applications of complex numbers in different branches of science, such as quantum mechanics, control theory, fluid dynamics and signal analysis, reveal that nature has no preference for real numbers: the imaginary part is just as physical as the real part.
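The claim that a fixed x leaves a population of roots in y can be checked directly: for any fixed real x, Eq. (1) is a cubic in y with three complex roots. The sketch below finds all of them with the Durand-Kerner iteration (a root finder chosen here for illustration, not a method used in the chapter), at x = 1, where the right-hand side of Eq. (1) evaluates to −5 + 4 + 3 + 1 = 3:

```python
# All roots (real and complex) of Eq. (1) in y at the fixed value x = 1,
# i.e. of p(y) = 4y^3 - 3y^2 - 2y + 2 - 3 = 4y^3 - 3y^2 - 2y - 1 = 0.
# Durand-Kerner is our choice of method, not the chapter's.

def durand_kerner(coeffs, iters=100):
    """All n roots of a_n*y^n + ... + a_0 = 0; coeffs = [a_n, ..., a_0]."""
    a = [c / coeffs[0] for c in coeffs]            # make the polynomial monic
    n = len(a) - 1

    def p(z):                                      # Horner evaluation
        v = 0j
        for c in a:
            v = v * z + c
        return v

    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # distinct complex seeds
    for _ in range(iters):
        for i in range(n):
            denom = 1 + 0j
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] = roots[i] - p(roots[i]) / denom
    return roots

for r in durand_kerner([4.0, -3.0, -2.0, -1.0]):
    print(r)   # one real root near y = 1.289 plus a complex-conjugate pair
```

The same routine handles the quartic of Eq. (2) and the quintic of Eq. (3) at a fixed x or y by passing the corresponding coefficient list, which is exactly the "population of roots" the figures display panel by panel.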

Example 2

The second example is a fourth degree polynomial.

5y^4 + 4y^3 − 3y^2 − 2y + 2 = 6x^4 − 5x^3 − 4x^2 + 3x − 2      (2)

The roots of the fourth degree polynomial are given in Figures 5 to 7. In general, this polynomial should have four roots for each constant value of the variables. Four real roots for x can be found in a limited range of the variable y. The polynomial does not, however, have four real roots for y at a fixed real value of x: at most two real roots for y can be obtained when x is real.
[Three complex-plane panels (a), (b), (c): imaginary part versus real part of the roots]

Figure 3. The roots of the third degree polynomial for a fixed real value of y.

[Complex-plane panels: imaginary part versus real part of the roots]

Figure 4. The roots of the third degree polynomial for a fixed real value of x.

Figure 5. The graph of the real roots of the first fourth degree polynomial.

All of the complex roots for y = 0, ±1, ±2, ±3, ±4, ±5 are given in Figure 6. The same
graphs are shown for the roots of the fourth degree polynomial in Figure 7, for which
x = 0, ±1, ±2, ±3, ±4, ±5.

Example 3

The next example is a fifth degree polynomial.

6y^5 + 5y^4 + 4y^3 + 3y^2 + 2y + 1 = 7x^5 + 6x^4 + 5x^3 + 4x^2 + 3x + 2    (3)

The real roots of this polynomial are depicted in Figures 8 to 10. This fifth degree
polynomial has a real root for every real value of x and y. All roots of the polynomial are
shown in the complex planes in Figures 9-a and 10-a for y, x = 0, ±1, ±2, ±3, ±4, ±5,
respectively. The figures show that the four complex roots form two complex-conjugate
pairs.
The variation of one parameter causes other parameters to be affected and prescribes changes
during the process. This indicates that all of the contributing parameters in a natural
phenomenon are dependent on each other. They may be described by a system of nonlinear
simultaneous equations, whose solution is obtained mostly by numerical methods. However, in
many cases, the restrictions of the applied method may prevent obtaining all of the possible
solutions.
The best-known method for solving nonlinear algebraic and simultaneous equations is
Newton's method. Its main restriction is that the initial value for starting the iteration
must be near the exact solution; an initial guess far from the exact solution may result in
divergent iterations.

Figure 6. The roots of the first fourth degree polynomial for a fixed real value of y.


Figure 7. The roots of the first fourth degree polynomial for a fixed real value of x.

Figure 8. The graph of the real roots of the fifth degree polynomial.

THE ADOMIAN DECOMPOSITION METHOD IN SOLUTION OF NONLINEAR SIMULTANEOUS EQUATIONS

The Adomian decomposition method (ADM) is a powerful method that can be used to obtain the
solutions of systems of nonlinear simultaneous equations. ADM was first proposed by a North
American physicist, G. Adomian (1923-1996). The method is addressed in Mousavizadegan et
al. [2006], which discusses the limitations of ADM for partial differential equations, and
in Mustafiz et al. [2006], which solves the Buckley-Leverett equation with the effect of
capillary pressure.
The ADM solution is obtained in a series form, while the nonlinear term is decomposed into
a series whose terms are calculated recursively using Adomian polynomials. Consider a system
of simultaneous algebraic equations with n independent variables, which may be written as

f_i(x_1, x_2, ..., x_n) = 0,   i = 1, 2, ..., n    (4)

Each equation can be solved for an independent variable as

x_i = a_i + g_i(x_1, x_2, ..., x_n),   i = 1, 2, ..., n    (5)

The solution may be expressed as a series:

x_i = \sum_{j=0}^{\infty} x_{i,j},   i = 1, 2, ..., n    (6)

Figure 9. The roots of the fifth degree polynomial for a fixed real value of y.


Figure 10. The roots of the fifth degree polynomial for a fixed real value of x.

The components of the series solution are obtained using the Adomian decomposition
method in the form

\sum_{j=0}^{\infty} x_{i,j} = a_i + \sum_{k=0}^{\infty} A_{i,k}    (7)

where

x_{i,0} = a_i,  x_{i,1} = A_{i,0},  x_{i,2} = A_{i,1},  ...,  x_{i,m} = A_{i,m-1},  x_{i,m+1} = A_{i,m},  ...    (8)

The term A_{i,m} is obtained by using the Adomian polynomial, which is given in the
following form:

A_{i,m} = \frac{1}{m!} \left[ \frac{d^m}{d\lambda^m} g_i\left( \sum_{k=0}^{\infty} \lambda^k x_{1,k}, \sum_{k=0}^{\infty} \lambda^k x_{2,k}, ..., \sum_{k=0}^{\infty} \lambda^k x_{n,k} \right) \right]_{\lambda=0}    (9)

for m = 0, 1, 2, ... and i = 1, 2, ..., n.

These are the first three elements of the Adomian polynomials, with all derivatives of g_i
evaluated at (x_{1,0}, x_{2,0}, ..., x_{n,0}):

A_{i,0} = g_i(x_{1,0}, x_{2,0}, ..., x_{n,0})

A_{i,1} = x_{1,1} \frac{\partial g_i}{\partial x_1} + ... + x_{n,1} \frac{\partial g_i}{\partial x_n}

A_{i,2} = x_{1,2} \frac{\partial g_i}{\partial x_1} + ... + x_{n,2} \frac{\partial g_i}{\partial x_n}
  + \frac{1}{2} x_{1,1}^2 \frac{\partial^2 g_i}{\partial x_1^2} + ... + \frac{1}{2} x_{n,1}^2 \frac{\partial^2 g_i}{\partial x_n^2}
  + x_{1,1} x_{2,1} \frac{\partial^2 g_i}{\partial x_1 \partial x_2} + ... + x_{1,1} x_{n,1} \frac{\partial^2 g_i}{\partial x_1 \partial x_n}
  + x_{2,1} x_{3,1} \frac{\partial^2 g_i}{\partial x_2 \partial x_3} + ... + x_{n-1,1} x_{n,1} \frac{\partial^2 g_i}{\partial x_{n-1} \partial x_n}    (10)

The higher-order polynomials are lengthy and more complicated. MATLAB is used to compute
the elements of A_{i,j}. The elements of the series solution x_i are obtained according to (6).
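Because each g_i in the examples below is a polynomial, A_{i,m} in (9), which is simply the coefficient of λ^m in g_i evaluated on the series Σ_k λ^k x_k, can be generated with truncated power-series arithmetic instead of symbolic differentiation. The minimal Python sketch below is a stand-in for the MATLAB computation; the sample function g = x²y and the numerical components are this sketch's assumptions, chosen only to check the first three polynomials of (10).

```python
def series_mul(a, b, n):
    """Product of two truncated power series in lambda (coefficients 0..n-1)."""
    out = [0.0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            out[i + j] += ai * bj
    return out

def adomian_polys(xs, ys, n):
    """A_m for g(x, y) = x^2 y: the coefficient of lambda^m in
    g(sum_k x_k lambda^k, sum_k y_k lambda^k), per equation (9)."""
    x2 = series_mul(xs, xs, n)
    return series_mul(x2, ys, n)

# components x_0..x_2 and y_0..y_2 chosen arbitrarily for the check
xs, ys = [2.0, 0.5, -0.3], [1.0, -1.0, 0.25]
A = adomian_polys(xs, ys, 3)

# the explicit formulas from equation (10), specialized to g = x^2 y
x0, x1, x2 = xs
y0, y1, y2 = ys
A0 = x0**2 * y0
A1 = x1 * 2 * x0 * y0 + y1 * x0**2
A2 = x2 * 2 * x0 * y0 + y2 * x0**2 + 0.5 * x1**2 * 2 * y0 + x1 * y1 * 2 * x0
```

Both routes give the same A_0, A_1, A_2, which is the point: for polynomial g the λ-coefficient view makes the recursion purely mechanical.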
Multiple Solutions in Natural Phenomena 169

Example 4

The first nonlinear system of equations is

x^2 - 10x + 4y^2 + 9 = 0
xy^2 + x - 10y + 5 = 0    (11)

The real solutions can be obtained by plotting the equations as given in Figure 11. The
plot indicates that there are two common real roots for this nonlinear SAE.

Figure 11. The graphs of the simultaneous equations (11).

We use the ADM to find the solution of (11). The series solutions are obtained using
equations (4) to (10) with different numbers of elements. The computations are carried out
using MATLAB. The solutions and the errors with different numbers of elements are

x = 1.21244,  y = 0.669277  for i = 4:   E1 = 0.1373,  E2 = 0.0627
x = 1.23491,  y = 0.679418  for i = 8:   E1 = 0.0223,  E2 = 0.0108
x = 1.23802,  y = 0.680857  for i = 12:  E1 = 0.0046,  E2 = 0.0023
x = 1.23916,  y = 0.681396  for i = 16:  E1 = 0.0011,  E2 = 0.0006
x = 1.23933,  y = 0.681474  for i = 20:  E1 = 0.0003,  E2 = 0.0001

where E1 and E2 are the deviations of the first and second equations from zero. The method
gives a good approximation of one solution of this system of simultaneous equations. A
deficiency of the ADM is that it is not able to give the second solution seen in Figure 11.
The other restriction is that the Adomian polynomial does not always give a convergent
series solution; convergence depends on the type of the equations and on the first term of
the series solution. It is sometimes necessary to change the form of the equations to get a
convergent series solution for the problem.
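The table above can be reproduced in a few lines. The rearrangement x = 0.9 + (x² + 4y²)/10, y = 0.5 + (xy² + x)/10 follows from (11) in the form (5), although the chapter does not print it; and since A_{i,m} in (9) is the coefficient of λ^m in g_i evaluated on the series Σ_k λ^k x_k, truncated series arithmetic replaces the symbolic differentiation. The Python below is an illustrative stand-in for the chapter's MATLAB code.

```python
def series_mul(a, b, n):
    """Product of two truncated power series in lambda (coefficients 0..n-1)."""
    out = [0.0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            out[i + j] += ai * bj
    return out

def adm_system_11(terms=20):
    """ADM for system (11) rearranged in the form (5):
         x = 0.9 + (x^2 + 4 y^2)/10,   y = 0.5 + (x y^2 + x)/10.
    Component m+1 is the coefficient of lambda^m of the right-hand side
    evaluated on the series, i.e. the Adomian polynomial A_m of (9)."""
    xs, ys = [0.9], [0.5]                 # x_0 = a_1, y_0 = a_2
    for m in range(terms):
        n = m + 1                         # coefficients 0..m are needed
        y2 = series_mul(ys, ys, n)
        g1 = [a + 4 * b for a, b in zip(series_mul(xs, xs, n), y2)]
        g2 = [a + b for a, b in zip(series_mul(xs, y2, n), xs)]
        xs.append(0.1 * g1[m])
        ys.append(0.1 * g2[m])
    return sum(xs), sum(ys)

x, y = adm_system_11(20)
```

With components 0 through 20 (i = 20) this matches the tabulated x ≈ 1.23933, y ≈ 0.681474.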

Figure 12. The graphs of the simultaneous equations (12).

Figure 13. The graphs of the simultaneous equations (14).



Example 5

The second nonlinear SAE is

x^2 + 4y^2 - 16 = 0
-2x^2 + xy - 3y + 10 = 0    (12)

This system of simultaneous equations has four real common roots, as shown in Figure 12.

Figure 14. The graphs of the simultaneous equations (16).

These roots are computed with ADM. The system of equations is written in the form

x = ±\sqrt{0.5(xy - 3y + 10)}
y = ±0.5\sqrt{16 - x^2}    (13)

for each of the variables. It is assumed that x = \sum_{i=0}^{\infty} x_i and
y = \sum_{i=0}^{\infty} y_i. Using ADM, it is set that x_0 = 0 and y_0 = 0. The elements of
the series solutions are obtained using the Adomian polynomial as given in (9).

x_m = \frac{1}{(m-1)!} \left\{ \frac{\partial^{m-1}}{\partial \lambda^{m-1}} \left[ \pm\sqrt{0.5\left( \sum_{k=0}^{\infty} \lambda^k x_k \sum_{k=0}^{\infty} \lambda^k y_k - 3 \sum_{k=0}^{\infty} \lambda^k y_k + 10 \right)} \right] \right\}_{\lambda=0}   for m = 1, 2, ...

y_m = \frac{1}{(m-1)!} \left\{ \frac{\partial^{m-1}}{\partial \lambda^{m-1}} \left[ \pm 0.5 \sqrt{16 - \left( \sum_{k=0}^{\infty} \lambda^k x_k \right)^2} \right] \right\}_{\lambda=0}   for m = 1, 2, ...

The computations are carried out, and the solutions are obtained when the series solutions
are truncated at i = 10. The solutions are

x = 2.0426, y = 1.7144;
x = -1.0435, y = 1.9316;
x = -2.9794, y = -1.3193; and
x = 2.3554, y = -1.6235,

which are very accurate. More accurate results can be found by increasing the number of
elements in the series solution. The multiple solutions can be found only if the solution is
arranged in a proper form, which can be considered one of the most challenging tasks in the
application of ADM: a proper arrangement must be selected to obtain a convergent result as
well as the multiple solutions, if there are any.
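The ± arrangements in (13) can be implemented with the same λ-series view once a truncated-series square root is available; each of the four sign combinations then converges to one of the four listed roots. A hedged Python sketch (a stand-in for the MATLAB computation; the series length and sign ordering are this sketch's choices):

```python
import math

def series_mul(a, b, n):
    # product of two truncated power series in lambda (coefficients 0..n-1)
    out = [0.0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            out[i + j] += ai * bj
    return out

def series_sqrt(c, n):
    # principal square root of a truncated series with c[0] > 0,
    # obtained by matching coefficients in s * s = c
    s = [math.sqrt(c[0])]
    for m in range(1, n):
        acc = (c[m] if m < len(c) else 0.0) - sum(s[k] * s[m - k] for k in range(1, m))
        s.append(acc / (2.0 * s[0]))
    return s

def adm_system_12(sign_x, sign_y, terms=20):
    """ADM for arrangement (13): x = +/- sqrt(0.5(xy - 3y + 10)),
    y = +/- 0.5 sqrt(16 - x^2), with x_0 = y_0 = 0.  Component m+1 is
    the lambda^m coefficient of the right-hand side on the series."""
    xs, ys = [0.0], [0.0]
    for m in range(terms):
        n = m + 1
        xy = series_mul(xs, ys, n)
        # C1 = 0.5*(X*Y - 3*Y + 10) and C2 = 16 - X^2 as series in lambda
        c1 = [0.5 * xy[k] - 1.5 * ys[k] + (5.0 if k == 0 else 0.0) for k in range(n)]
        xx = series_mul(xs, xs, n)
        c2 = [(16.0 if k == 0 else 0.0) - xx[k] for k in range(n)]
        xs.append(sign_x * series_sqrt(c1, n)[m])
        ys.append(sign_y * 0.5 * series_sqrt(c2, n)[m])
    return sum(xs), sum(ys)

# the four sign choices of (13) pick out the four common real roots
roots = [adm_system_12(sx, sy) for sx in (1, -1) for sy in (1, -1)]
```

The sign pattern of each returned pair matches the corresponding root listed above, which is exactly the "proper arrangement" point: the branch structure of the rearrangement decides which root the series can reach.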

Example 6

The third example is a third degree system of two simultaneous equations in the variables
x and y.

x^3 + y^3 - 10x - 5 = 0
x^3 - y^3 - 15y^2 + 2 = 0    (14)

The equations are plotted in Figure 13. It can be seen that there are four real solutions
for this system of nonlinear equations. The solution for the variables may be arranged in
the form

x = -0.5 + 0.1(x^3 + y^3)
y = ±\sqrt{(2 + x^3 - y^3)/15}    (15)

Using ADM, the solutions are expressed in the series form (6). The elements of the series
solution for x and y are computed by taking x_0 = -0.5 and y_0 = 0; the rest are computed
using the Adomian polynomial (9). The arrangement of the SAE in the form of (15) gives two
of the common real roots, (x = -0.5089, y = 0.3489) and (x = -0.5185, y = 0.3565). This
arrangement does not give all the common real roots of the SAE (14). The other solutions
may be obtained with other arrangements for x and y; we tried different arrangements for
the variables, but almost all of them resulted in a divergent series.

Example 7

The next example is a fourth degree system of simultaneous equations.

x^4 + x^3 y + (1/5)y^4 - 15x - 3 = 0
2x^4 - y^4 - 10y + 3 = 0    (16)

There are four common real roots for this system of equations, as shown in Figure 14. The
system of equations is rearranged in the form (17) for the variables x and y. The only
solution that can be obtained with the ADM and the arrangement (17) is
x = -0.19995, y = 0.29952, where the series solution is truncated at i = 5.

x = -1/5 + (1/15)(x^4 + x^3 y + (1/5)y^4)
y = 3/10 + (1/10)(2x^4 - y^4)    (17)

CONCLUSIONS
Our objective in this paper is to discuss the possibility of multiple solutions in natural
phenomena. In many cases, it is possible to express or approximate a natural phenomenon by
a nonlinear polynomial. Such a polynomial has multiple solutions, the number of which
depends on the nonlinearity and the number of variables. We examined several bivariate
polynomials of different degrees and showed a population of solutions in the complex plane.
The other objective is to show the limitations of the methods used to find the complete
set of solutions of a given nonlinear problem. The mathematical models of many real
problems are given in the form of partial differential equations. In most cases, the
solutions of these PDEs are obtained by numerical methods. The normal procedure is to
recast the PDE in the form of nonlinear simultaneous algebraic equations (SAEs). The
solutions of these nonlinear SAEs are obtained with some linearization during the
computations, which may affect the quality as well as the quantity of the solutions.

We introduced several bivariate nonlinear SAEs with different degrees of nonlinearity.
The real roots of these bivariate SAEs can be obtained graphically, as shown in the
previous sections. All of the SAEs were solved with the Adomian decomposition method (ADM),
used here as a sample method, to show its limitations. Other methods, such as Newton's
method, may give all probable solutions, but it is necessary to know the region of each
solution in order to start a convergent iteration and find the roots. If the number of
variables increases, finding the regions of all the roots becomes a challenging task. Such
an investigation will enable us to get the complete picture of the knowledge dimension,
which may be useful in decision making.
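The point about Newton's method, that it finds whichever root its starting guess is near, can be illustrated on system (11), where ADM recovered only one of the two real roots. A Python sketch of plain Newton iteration for the 2x2 system with its analytic Jacobian; the two starting guesses are this sketch's assumptions, chosen by inspecting the curves described for Figure 11.

```python
def newton_system_11(x, y, iterations=50):
    """Plain Newton iteration for system (11):
         f1 = x^2 - 10x + 4y^2 + 9,   f2 = x y^2 + x - 10y + 5,
    with the 2x2 Jacobian linear solve done by Cramer's rule."""
    for _ in range(iterations):
        f1 = x * x - 10 * x + 4 * y * y + 9
        f2 = x * y * y + x - 10 * y + 5
        # Jacobian entries: [[df1/dx, df1/dy], [df2/dx, df2/dy]]
        a, b = 2 * x - 10, 8 * y
        c, d = y * y + 1, 2 * x * y - 10
        det = a * d - b * c
        dx = (f1 * d - b * f2) / det
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    return x, y

# two hand-picked starting points (this sketch's assumption, not the chapter's)
root1 = newton_system_11(1.0, 1.0)   # converges to the root ADM found
root2 = newton_system_11(3.0, 2.0)   # converges to the second real root
```

Each starting point converges quadratically to its nearby root; without the graphical view there is no a priori way to know that two such regions exist, which is the limitation the conclusions describe.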

ACKNOWLEDGEMENTS
The authors gratefully acknowledge the research grant provided through the Atlantic
Innovation Fund (AIF). Mustafiz would also like to thank the Killam Foundation, Canada for
its financial support.

REFERENCES
Bjorndalen, N. 2002. Irradiation techniques for improved performance of horizontal wells,
MASC Thesis, Dalhousie University, Halifax, Canada.
Bjorndalen, N., Mustafiz, S. and Islam, M.R. 2005. The effect of irradiation on immiscible
fluids for increased oil production with horizontal wells, ASME International Mechanical
Engineering Congress and Exposition (IMECE), Orlando, Florida, USA, November.
Coates, D. E. and Kirkaldy, J. S. 1971. Morphological stability of α-β phase interfaces in
the Cu-Zn-Ni system at 775°C, Met. Trans., vol. 2, no. 12, 3467-77, December.
Coriell, S.R., McFadden, G.B., Sekerka, R.F.and Boettinger W.J. 1998. Multiple similarity
solutions for solidification and melting, Journal of Crystal Growth, vol. 191, 573-585.
Islam, M.R., Chakma, A. and Nandakumar, K., 1990, "Flow Transition in Mixed Convection
in a Porous Medium Saturated With Water Near 4°C", Can. J. Chem. Eng., vol. 68, 777-
785.
Islam, M.R. and Nandakumar, K., 1986, "Multiple Solution for Buoyancy-Induced Flow in
Saturated Porous Media for Large Peclet Numbers", Trans. ASME Journal of Heat
Transfer, vol.108 (4), 866-871.
Islam, M.R. and Nandakumar, K., 1990, "Transient Convection in Saturated Porous Layers
With Internal Heat Sources", Int. J. Heat and Mass Transfer, vol. 33 (1), 151-161.
Khan, M.I. and Islam, M.R. 2006. True sustainability in technological development and
natural resources management, Nova Science Publishers, NY, USA.
Maugis, P.; Hopfe, W.D.; Morral, J.E.; Kirkaldy, J.S. 1996. Degeneracy of diffusion paths in
ternary, two-phase diffusion couples, Journal of Applied Physics, vol. 79, no. 10, 7592-
7596.
Mishra, S., and DebRoy, T. 2005. A computational procedure for finding multiple solutions
of convective heat transfer equations, J. Phys. D: Appl. Phys., vol. 38, 2977–2985.

Mustafiz, S., Mousavizadegan, S.H. and Islam, M.R. 2006. The effects of linearization on
solutions of reservoir engineering problems, Petroleum Science and Technology,
accepted for publication, 17 pg.
Nixon, D. 1989. Occurrence of multiple solutions for the TSD-Euler equation, Acta
Mechanica, vol. 80, no. 3-4, 191-199.
Tiscareno-Lechuga, F. 1999. A sufficient condition for multiple solutions in gas membrane
separators with perfect mixing, Computers and Chemical Engineering, vol. 23, no. 3:
391-394.
Zatzman, G.M. and Islam, M.R. 2006. Economics of intangibles, Nova Science Publishers,
NY, USA, in press.
INDEX

Albert Einstein, 5
alcohol(s), 75, 118, 119
A algae, 116, 117
algorithm, 104, 159
abatement, 155
alkaline, 78, 94
Abdullah, 124, 127
Allah, 1
accelerator, 48
alloys, 159
access, 12, 34, 35
alternative(s), 26, 31, 46, 93, 111, 118, 122, 132, 133
accounting, 26, 28, 34
aluminum, 48, 83
accuracy, 100, 103
amines, 76
acetylene, 135
amino, 71, 72, 77, 134
acid, 71, 73, 77, 78, 86, 87, 113, 151
amino acid(s), 71, 72, 77, 134
acidic, 151
ammonia, 30, 80, 105, 108, 110, 112, 114, 116, 117,
acidity, 151
122, 123, 126, 127
acrylonitrile, 88
ammonium, 112, 113
activated carbon, 141
amortization, 44
adaptability, 24, 90, 115, 125
Amsterdam, 54
adaptation, 46
anaerobic sludge, 130
additives, 25, 81, 82
anatomy, 32
adenosine, 77
animal waste, 108
adhesives, 89
animals, 64, 76, 115
adjustment, 34
antibiotics, 40
administration, 47, 49
antioxidants, 56
Adomian Decomposition method, 104, 157
aquatic habitats, 115, 132
adsorption, 131, 132, 138, 139, 140, 141, 143, 144,
aqueous solutions, 131, 132
146, 147, 149, 151, 154
Aquinas, Thomas, 1, 2, 34
adulthood, 75
Arab world, 31
advertising, 12
Arabs, 30
aerobic bacteria, 116
Argentina, 49
age, 1, 2, 3, 4, 34, 51, 84, 92, 93, 105
arginine, 71
aggression, 49
argument, 4, 15, 19, 33, 41, 51
aging, 30
Aristotle, 1, 2, 4, 33
agriculture, 49
arithmetic, 34, 45
AIDS, 84
armed forces, 47
air emissions, 86
aromatic hydrocarbons, 86
air pollutants, 87
arrest, 11
air pollution, 117, 127
arsenic, 138
Al Qaeda, 38
arteries, 58
alanine, 71
178 Index

arthritis, 30 biomass, 105, 106, 119, 120, 124


articulation, 38 biopolymers, 78
asbestos, 9, 10 bioremediation, 94
ash, 138 biosorption, 132
Asia, 41 biosphere, 94
Aspartame, 29, 66 biota, 131, 132
aspirin, 58, 59 birth, 64, 88, 89
assessment, 67, 92, 94, 124, 127 black tea, 5, 131, 133, 134, 135, 154
assets, 47 blame, 42
assumptions, 2, 6, 14, 18, 20, 22, 29, 34, 35, 109, blasphemy, 4
158, 160 blocks, 3, 31, 71
asthma, 30, 78, 86, 88 blood, 88
Athens, 42 body fat, 88
atoms, 17, 54, 77 bonding, 72
ATP, 77 bonds, 71
attacks, 39, 58, 59 borrowers, 44
attention, 9, 10, 12, 28, 112, 124 boundary value problem, 97
Australia, 55 boys, 63
authority, 11, 12, 13, 32, 47, 56 brain, 16, 20, 21, 26, 27, 30, 34, 40, 59, 76
automata, 28 brainwashing, 40
automobiles, 80 branching, 153
availability, 35 breakdown, 31, 86
Averröes, 1, 2, 3, 33, 34 breast milk, 75, 76, 93, 94, 95
awareness, 6 breathing, 88, 89
brominated flame retardants, 91
bronchitis, 88
B Brusselator, 97, 103, 104
budget deficit, 46
B vitamins, 56
buffer, 135
Bacillus, 78, 117
building blocks, 71
Bacillus subtilis, 117
burning, 79, 81, 85, 106, 125
backwardness, 31
Bush administration, 49
bacteria, 4, 5, 78, 83, 115, 116, 117, 132, 136, 137,
business model, 56
149, 152, 153, 154
buttons, 82
bacterial cells, 132
Baghdad, 38
bankruptcy, 47 C
banks, 12, 43
batteries, 27 cables, 76
behavior, 4, 22, 25, 30, 67, 132, 155, 161 cadmium, 81
behavioral change, 75 calcium, 80, 133
beliefs, 42, 50, 53, 60 calcium carbonate, 80
beneficial effect, 21 California, 44, 94
benefits, 26, 56, 82, 124 campaigns, 10
benign, 121, 123 Canada, 1, 4, 9, 40, 61, 65, 67, 80, 92, 94, 95, 105,
beverages, 4 115, 129, 131, 155, 157, 174
bias, 15, 33 cancer, 3, 30, 35, 56, 75, 84, 88, 94
bible, 1, 2, 34, 51, 52 capillary, 166
biochemistry, 52 capsule, 84
biodegradability, 77 carbohydrates, 84, 86
biodegradable, 70, 77, 83, 89 carbon, 70, 76, 77, 78, 80, 87, 110, 113, 115, 117,
biodegradation, 77, 78 125, 126, 129, 141
biodiversity, 95 carbon atoms, 77
bioethanol, 127 carbon dioxide, 87, 110, 125, 126

carbon monoxide, 76 coatings, 76, 84


carcinogenicity, 86 coffee, 76, 88
carcinogen(s), 76, 81, 85, 88, 89 coherence, 44
cardiovascular disease, 57 coke, 80
Carnot, 124 Cold War, 38, 40
carotene, 56 colon, 57
carrier, 32, 123 colonization, 60
case study, 82, 95 combustion, 20, 21, 78, 87, 92, 93, 106
cast, 158 commerce, 60
catalysis, 20 commodity, 89
catalyst(s), 20, 73, 80 communication, 1, 31, 91, 94
Catholic(s), 1, 11, 32 community, 30, 60, 63, 93, 95
Catholic Church, 11, 32 compensation, 9, 10
cation, 141 competition, 12, 33, 44
C-C, 135 competitiveness, 46
CDC, 89, 91 compiler, 39
cell, 15, 29, 32, 105, 108, 110, 118, 119, 127 complexity, 13, 23, 28, 50
cell phones, 32 components, 2, 21, 22, 77, 80, 81, 84, 86, 88, 100,
cellulose, 120 132, 159, 161, 168
cement, 48 composites, 68
central bank, 47 composition, 71, 77, 78, 79, 159, 161
certainty, 36 composting, 78, 108
chain molecules, 84 compounds, 16, 20, 73, 81, 83, 84, 86, 87, 111, 133
chaos, 92 compressibility, 22
charities, 39 computation, 26, 27, 172
chemical approach’, 67, 68 computers, 21, 32
chemical bonds, 71 computing, 35, 104
chemical composition, 78, 79 concentration, 12, 20, 25, 29, 113, 117, 131, 132,
chemical energy, 105, 118 135, 138, 140, 141, 143, 144, 146, 147, 149, 151,
chemical engineering, 29, 53 153, 154, 159, 161
chemical industry, 20 concrete, 6
chemical kinetics, 97 condensation, 135
chemical reactions, 20, 119 conditioning, 122
children, 36, 61, 63, 88 conductivity, 118
Chile, 54 confidence, 60
chimpanzee, 50 configuration, 22
China, 49, 156 confinement, 16
Chinese, 82, 156 conflict, 35
chitin, 118 confusion, 2, 26, 42, 60
chloride, 88, 113, 115 congestive heart failure, 56
chlorine, 20, 111 Congress, 65, 93, 95, 174
cholesterol, 59 conscious knowledge, 6
Christianity, 35 consciousness, 5, 11, 16, 49, 160
chromium, 3 consensus, 56
CIA, 40, 47 conservation, 158
citizenship, 15 conspiracy, 37, 47
cleaning, 21, 113 construction, 43, 45, 46, 48
clinical trials, 12, 56 consulting, 58
clusters, 152 consumer price index, 46
CO2, 26, 76, 87, 106, 110, 112, 113, 114, 115, 119, consumer protection, 43, 44
130 consumers, 5
coagulation, 132 consumption, 43, 46, 48, 84, 86, 87, 106, 120
coal, 106 contaminant(s), 94, 131, 138, 141, 143, 144

contamination, 75, 89, 124, 155 degradation process, 78, 108


continuity, 158 degradation rate, 77, 84
control, 37, 49, 59, 61, 65, 131, 133, 162 demand, 51, 52, 111, 115
convergence, 103 democracy, 41, 42, 49
conversion, 30, 106, 110, 117, 118, 121 democrats, 45
cooking, 21, 30, 39, 108, 110 denaturation, 78
cooling, 44, 105, 106, 110, 120, 122, 123 density, 68, 119, 158
Copenhagen, 91 dentures, 89
copper, 155 Department of Energy, 87
corporations, 89 depolymerization, 80, 81
correlation, 27, 42 depression, 30
corrosion, 127 desalination, 105, 106, 108, 110, 111, 112, 113, 114,
corruption, 45 115, 116, 117, 126
cortex, 32 desire(s), 45, 61
cosmetics, 88 destruction, 12, 13, 45, 108
costs, 26, 89, 120 detection, 54, 86, 94, 136
cotton, 82 determinism, 42
cough, 89 developmental disorder, 84
coughing, 89 deviation, 169
couples, 84, 174 diarrhea, 89
coupling, 108 differential equations, 97, 158, 166, 173
covering, 62 differentiation, 50
creative abilities, 50 diffusion, 97, 103, 104, 174
credibility, 24, 29 diffusion process, 97
credit, 3, 36, 46, 48 diffusivity, 159
crime, 11, 41 digestion, 108, 109, 116, 125, 130
critical period, 84 dimensionality, 160
critical thinking, 33 dimer, 73, 75
criticism, 92 dioxin, 19, 30, 90
crude oil, 25, 81, 83, 84 diploid, 152
culture, 2, 31, 35, 41, 42, 62, 82, 93, 117 disaster, 9, 10, 21, 75
currency, 42 discourse, 1, 10, 17, 37
cycles, 21, 39, 122 discretization, 159
cystine, 71 distribution, 11, 32, 39, 92, 95, 111
cytoskeleton, 72 divergence, 2
diversity, vii, 21, 68, 132, 133
division, 27, 34
D dizziness, 88
DNA, 28, 29, 50
danger, 70
doctors, 56, 58, 59
Darwin, Charles, 11, 50
doors, 85
database, 15, 87
drinking water, 111
dating, 82
drugs, 40, 58, 59
death(s), 9, 10, 38, 56, 58
dumping, 111
debt, 43, 45, 48
DuPont, 19, 80, 118
decision making, 174
durability, 30, 67, 69, 83
decisions, 13, 36, 60, 61
duration, 17, 28, 106
decomposition, 67, 81, 83, 91, 97, 103, 160, 168, 174
duties, 60, 61, 63
defects, 88, 89
deficiency, 28, 57, 169
deficit(s), 46, 75 E
definition, 5, 17, 18, 20, 29, 39, 99, 125
degenerate, 40 earth, 12, 15, 50, 51, 52, 60, 111, 121
degradation, 67, 69, 77, 78, 79, 83, 91, 108, 132 eating, 122, 128

ecology, 92, 94, 130 estates, 48


economic cycle, 39 ethanol, 118, 119, 120, 128, 129, 130
economic development model, 26 ethers, 96
economic growth, 49 ethnicity, 2
economic theory, 26 ethylene, 76, 89, 94
economics, 22, 92 ethylene glycol, 76, 94
ecosystem, 30, 68, 78, 88, 94, 115, 116, 131 eucalyptus, 117
Ecuador, 25 Euler, 159, 175
education, 31, 34, 65, 92, 94, 96 Europe, 1, 2, 30, 33, 34, 35, 41
effluent(s), 105, 108, 109, 110, 115, 116, 123 evidence, 2, 3, 6, 10, 14, 50, 54, 56, 82
EIA, 87 evolution, 29, 50, 51, 52, 53
Einstein, 2, 5, 17, 34, 122, 128 exchange rate, 47
elaboration, 37, 51, 89 excuse, 35, 41, 89
elasticity, 70, 71 expertise, 36
electric charge, 118 exploitation, 13
electric circuit, 15 exports, 48
electric energy, 5 exposure, 40, 57, 75, 80, 84, 86, 89, 91
electricity, 5, 21, 48, 105, 106, 111, 118, 120, 121, extraction, 31, 48
122, 127 extremism, 15
electrodes, 40, 119 eyes, 41, 88, 89
electrolyte, 118, 127
electromagnetic, 54
electron(s), 17, 54, 55, 70, 76 F
email, 131
fabric, 53, 89
emission, 76, 87
failure, 4, 9, 10, 11, 13, 46, 58, 88
employment, 12
faith, 2, 12, 15
encoding, 32
family, 36, 63, 72
endometriosis, 88
farmers, 48
energy, 4, 18, 20, 21, 22, 26, 27, 29, 31, 52, 78, 80,
fat, 88
81, 86, 87, 89, 92, 93, 95, 105, 106, 108, 111,
fatigue, 89
112, 118, 119, 120, 121, 122, 123, 124, 127, 130,
fear, 60, 61, 89
132, 156, 158, 160
feces, 109
energy consumption, 86
feedback, 28
energy density, 119
fermentation, 120
England, 42
fertilizers, 3, 5, 70
entrepreneurs, 49
fever, 30
environment, 4, 5, 12, 16, 18, 19, 21, 25, 51, 67, 70,
fibers, 40, 68, 69, 70, 71, 72, 88, 89
75, 76, 77, 79, 81, 84, 86, 89, 119, 123, 131, 132,
fidelity, 51
133, 151, 155
filtration, 132
environmental chemicals, 91
finance, 43, 45, 46, 48
environmental context, 132
financial institutions, 12, 46
environmental degradation, 132
financial support, 174
environmental impact, 4, 26, 77, 78, 86
financial system, 45, 46
environmental regulations, 25
financing, 43
environmental sustainability, 92, 93, 96
fine wool, 73
enzyme, 20
fire retardants, 76, 94
EPA, 87, 111, 128
First World, 12
epidemic, 30
fish, 40, 57, 88, 116, 138, 141, 156
equality, 62
fission, 20
equilibrium, 16, 135, 138, 140, 146, 161
flame, 79, 91, 135
equipment, 40, 41, 48, 89, 123, 127
flame retardants, 91
equity, 44
flexibility, vii, 21, 24, 71, 89
erosion, 136
flooring, 88

flotation, 80, 88, 132 glycol, 76, 94


fluid, 27, 122, 123, 124, 158, 159, 162 glycolysis, 76, 94
fluid intelligence, 27 God, 1, 2, 3, 6, 12, 51, 52, 53
fluorescence, 80 gold, 58
fluoride, 128 government, 26, 37, 39, 40, 45, 46, 47, 49, 62, 89
foams, 76 GPS, 35
focusing, 21, 39, 40 grants, 53
food, 3, 48, 77, 84, 86, 88, 89, 90, 119, 125 granules, 80
Food and Drug Administration (FDA), 56, 57, 58, 88 graph, 34, 125, 138, 141, 143, 144, 146, 148, 149,
food products, 84 162, 164, 166
football, 55 grass, 77
footwear, 88 gravity, 45, 54
foreign exchange, 48 grazing, 117
formaldehyde, 20, 30, 89 Greece, 42
fossil, 20, 50, 81, 86, 89, 106, 108, 123, 127 Greeks, 42
fossil fuels, 81, 86, 106 greenhouse, 87, 88, 122
fraud, 10 greenhouse gas(es), 87, 88
freedom, 2, 23, 42 gross domestic product, 44
freezing, 160 ground water, 111
freshwater, 111, 117 grouping, 28
Freundlich isotherm, 140, 142, 144, 146, 147, 148, groups, 2, 12, 28, 31, 37, 51, 109
149, 151 growth, 45, 48, 49, 53, 117, 131, 133, 154
fruits, 64 Guerilla, 62
fuel, 20, 87, 89, 105, 106, 108, 110, 117, 118, 119, guidance, 62
120, 123, 125, 127 guidelines, 68
fuel cell, 105, 108, 110, 118, 119, 127
funding, 47, 53, 58, 59, 155
fundraising, 39 H
funds, 39
habitat, 16, 117
fungi(us), 78, 131
hands, 10, 12, 13, 41
furniture, 76, 80
harm, 20, 56, 62, 75, 125
futures, 13
Harvard, 58, 59
fuzzy logic, 27
hazards, 15, 89
headache, 89
G healing, 23
health, 20, 56, 57, 75, 76, 78, 86, 87, 88, 89, 95, 132
gambling, 43, 44 health effects, 75, 88, 95
gases, 77, 80, 83, 88, 110, 114, 117 heart attack, 58, 59
GDP, 44, 45, 46, 47, 48 heart disease, 58
GDP per capita, 46 heart failure, 56
general knowledge, 37 heat, 21, 88, 122, 123, 124, 159, 174
generation, 50, 77, 108, 109, 118, 124, 127 heat transfer, 159, 174
genocide, 60 heating, 44, 105, 106, 108, 110, 120, 121, 122, 123,
genome, 29, 50, 64 127
Georgia, 52, 91, 128 heavy metals, 27, 73, 86, 117, 131, 132, 141, 144,
Germany, 25, 128 154, 155, 156
gifts, 63 height, 6
girls, 63 helix, 71, 93
glaciers, 111 heterogeneity, vii, 21
glass(es), 80, 81, 88, 115, 122, 136 hexachlorobenzene, 95
glucose, 27 high school, 51, 53
glutamic acid, 71 higher quality, 116
glycine, 71 hip, 56

histidine, 71 independence, 5, 47
homogeneity, vii, 21, 83 independent variable, 6, 17, 18, 166
Honda, 130 India, 17, 30, 49, 65, 93, 155, 156
honesty, 9, 25 Indians, 42
Hong Kong, 130 indicators, 21, 49, 92, 93, 95
hormone, 75, 81 indigenous, 37, 60, 61
host, 59 indirect solar energy, 121
hotels, 111 Indonesia, 95
households, 56 industrial production, 86
housing, 43, 48 industrial revolution, 106
human brain, 16, 20, 21, 26, 27 industrial wastes, 119, 132, 155
human condition, 10, 26 industry, 9, 10, 11, 12, 20, 25, 26, 46, 48, 49, 56, 57,
human development, 11 58, 84, 87, 89
human exposure, 91 infancy, 75
human genome, 50 infertility, 88
human milk, 95 infinite, 3, 15, 28, 77, 90, 125
human subjects, 40 inflation, 45, 46, 47, 48
humanity, 11, 34 Information Age, 9, 13, 15, 26, 37, 65, 128
humidity, 68 infrared spectroscopy, 80
humus, 108 infrastructure, 48
husband, 63 inhibitor, 58
hydrocarbon(s), 46, 47, 67, 73, 84, 86 injury, 84
hydrochloric acid, 151 inner ear, 40
hydroelectric power, 48 innovation, 92, 127, 155, 174
hydrogen, 54, 55, 70, 71, 72, 110, 118 insane, 27
hydrogen bonds, 71 insects, 116
hydrogen gas, 118 insight, 39
hydrophobic, 73 inspiration, 33
hydroxide, 113, 151 instability, 14, 25
hypothesis, 1 instinct, 12, 60
institutions, 10, 12, 46, 52, 60
instruction, 52
I instruments, 48
insulation, 71, 76, 88, 89, 95
ice caps, 111
integration, 26, 117
identity, 19
integrity, 131
ideology, 40
intelligence, 27, 37, 39
illusion, 32, 45
intensity, 5, 21, 68, 125, 132, 140, 141, 143, 144,
image analysis, 136
146, 149
images, 40, 136
intentions, 11, 12, 14, 36, 38
imagination, 160
interaction, 73, 158
imaging, 136
Inter-American Development Bank, 48
imitation, 76, 105
interest groups, 2
immune system, 56, 88
interest rates, 43, 44
immunological, 84
interference, 39
implementation, 46, 47
International Monetary Fund (IMF), 45, 46
imports, 48
international relations, 39, 49
impregnation, 76
international terrorism, 37
impurities, 78
internet, 32, 38, 65, 89
inauguration, 46
interpretation, 27, 148
incidence, 121
intervention, 12, 15, 16, 34, 125
inclusion, 49, 158
inventions, 35
income, 42, 46, 47, 49
investment, 12, 43, 48, 58
income tax, 46

investment capital, 12 leachate, 114, 117, 126


investors, 49 leaching, 86, 88
ions, 132, 154, 156 lead, 1, 24, 40, 81, 86, 105, 131, 132, 135, 138, 141,
Iran, 15, 49, 97 143, 144, 146, 147, 150, 151, 154, 160
Iran-Contra, 15 lead cations, 131
Iraq, 36, 38 learning, 31, 35, 75
Ireland, 129 lens, 136
iris, 116, 117 leucine, 71
iron, 48, 133 life cycle, 77, 78, 90, 94
irradiation, 86, 174 Life Cycle Assessment (LCA), 78, 94
Islam, 1, 2, 3, 5, 6, 7, 12, 15, 17, 18, 21, 23, 24, 25, lifecycle, 77, 79, 83
26, 29, 33, 34, 42, 65, 66, 70, 73, 78, 79, 81, 83, lifestyle, 83, 84, 105, 106
84, 85, 86, 87, 91, 92, 93, 94, 95, 99, 104, 106, lifetime, 43, 54, 62
124, 125, 128, 129, 132, 136, 155, 156, 157, 158, limitation, 160, 173, 174
159, 174, 175 linear dependence, 5
Islamic, 15, 34, 35, 39, 42, 64 linear model, 22
isolation, 39, 49 linkage, 69, 70
isoleucine, 71 links, 37, 42
isothermal, 159 liquid fuels, 118
isotope, 88, 125 listening, 64
Italy, 10, 32, 49, 65 literature, 11, 124
iteration, 164, 174 liver, 57, 76, 88
loans, 43, 44
London, 6, 15, 64, 94, 96, 155
J Los Angeles, 58
LSD, 40
Japan, 91, 93
LTD, 124
Jerusalem, 1
lung, 30, 56, 78, 86, 89
jihad, 37, 38
lung cancer, 30, 56
jobs, 45, 49
lying, 160
Jordan, 92
justification, 39, 40
M
K machinery, 11, 48
males, 63
keratin, 71, 72, 78
management, 12, 25, 69, 78, 91, 92, 93, 131, 133,
Keynes, 36
155, 174
kidney, 76
mandates, 12
kinetics, 93, 95, 97
manipulation, 6, 11, 24
King, 25
manners, 61
Korea, 42
manufacturing, 45, 48, 69, 76, 77, 80, 89, 93, 134
Kuwait, 129
manure, 108
mapping, 29
L market(s), 11, 12, 13, 32, 43, 44, 47
marsh, 117, 125
labor, 45 Marx, 9, 34, 94
lakes, 111 mass communication, 32
land, 46, 60, 61, 62, 63, 89, 111, 120, 154 mass media, 11, 46
landscapes, 32 mathematical logic, 29
language, 31, 42, 62 mathematics, 21, 25, 28, 34, 35
laser, 54, 97 matrix, 159
Latin America, 46, 49 meanings, 18
laws, 2, 4, 18, 22, 25, 46, 158, 159 measurement, 1, 91, 132

measures, 21, 47, 54, 132
meat, 88
media, 11, 29, 32, 37, 39, 46, 61
median, 44
medicine, 18, 34
melt, 80
melting, 80, 159, 160, 174
membership, 14, 49
memory, 18, 63, 75
men, 30, 60, 61, 62, 63
mentor, 33, 62
MERCOSUR, 49
mercury, 88
messages, 12
metal content, 131
metal ions, 132, 156
metals, 27, 73, 84, 86, 111, 117, 131, 132, 135, 141, 144, 154, 155, 156
metaphor, 22
methane, 105, 108, 110, 114, 118, 124
methanol, 76, 94, 118, 119
methionine, 71
methyl methacrylate, 88
Mexico, 52, 53
Miami, 65
microbes, 117
microbial, 152, 154, 155, 156
microorganisms, 78, 117, 132, 153, 154, 155
microscope, 136
microscopy, 79
microwave, 21, 67, 69, 79, 80, 86
Middle Ages, 2, 32
militant, 20
military, 36, 39, 40, 41, 46, 60
milk, 57, 75, 76, 93, 94, 95
millennium, 94
minerals, 20, 30, 133
mining, 49, 111
Minnesota, 52, 53
minority, 42
misconceptions, 17, 22, 25
missions, 86, 87, 88
MIT, 3, 65
mixing, 175
modeling, 13, 28, 29, 94, 97
models, 16, 22, 24, 25, 26, 28, 125, 158
modernization, 111
modus operandi, 2, 4
moisture, 79
mold, 153
mole, 159
molecular biology, 51
molecular structure, 71, 78
molecular weight, 75, 112, 135
molecules, 22, 54, 69, 84, 86, 115, 118
momentum, 48, 158
money, 39
monomer(s), 68, 73, 74, 75, 80, 83, 86, 88, 90
monopoly, 12, 32
morality, 32
morning, 38
mortality, 57
motion, 16, 17, 32, 105, 158, 159
motives, 39
movement, 5, 32, 53
MTBE, 30
multidimensional, 157
multiplicity, vii, 21
multiplier, 48
muscles, 40
mushrooms, 57
Muslim(s), 35, 42
mutation, 84
mycelium, 153

N

NaCl, 113, 115
naming, 18
nation, 60, 62
National Institutes of Health, 57
natural environment, 12, 16, 19, 67, 132
natural gas, 81, 113, 114, 118, 122
natural laws, 25
natural resources, 174
natural science(s), 14, 18, 29, 37
natural selection, 11, 34, 50, 52
nausea, 89
Navier-Stokes equation, 158, 159
near infrared spectroscopy, 80
Nebraska, 121, 130
Nepal, 95
nerve, 40
nerve fibers, 40
Netherlands, 54
network, 38
New Jersey, 128
New Mexico, 52, 53
New South Wales, 55
New York, 7, 37, 38, 64, 65, 66, 92, 93, 95, 129
New Zealand, 91
Newton, 17, 22, 160, 164, 174
Newtonian, 17, 158
nitrate(s), 116, 117, 131, 135, 138, 141, 143, 144, 154
nitrifying bacteria, 117
nitrogen, 70, 87, 116, 117
nitrogen gas, 116
nitrogen oxides, 87
nitrous oxide, 87
Nobel Prize, 3, 18, 20
nonlinearities, 157
North America, 33, 37, 166
Norway, 94
nuclear energy, 20, 21, 112
nutrients, 5, 30, 68, 77, 117
nutrition, 20, 30

O

obesity, 30
observations, 14, 15, 16, 21, 39, 51, 54, 55
oceans, 111
octane, 74
oil, 9, 10, 20, 25, 44, 45, 46, 47, 48, 49, 57, 68, 80, 81, 83, 84, 93, 106, 123, 127, 174
oil production, 47, 81, 174
oil refineries, 48
oil revenues, 45, 47
older people, 62
oligomers, 73, 75, 134
omission, 38
one dimension, 18
opacity, 14
operations research, 35
operator, 27, 98, 100, 101
organic compounds, 86, 87, 111
organic matter, 108, 117, 119
organism, 27, 50, 51, 68, 86, 117
Organization for Economic Cooperation and Development (OECD), 95
organizations, 125
orthogonality, 6
oscillation, 17
osmosis, 112
osmotic, 112
osmotic pressure, 112
osteoporosis, 57
overtime, 131
ownership, 12, 32, 46
oxalate, 76
oxidants, 20, 30
oxidation, 67, 76, 77, 78, 79, 81, 82, 84, 86, 89, 91, 94, 108, 119, 132, 135
oxidation products, 78, 87
oxidation rate, 81
oxides, 86, 87
oxygen, 70, 97, 115, 116, 117
ozone, 86, 87, 97

P

pacemakers, 9, 10
packaging, 76, 84, 88, 89
pain, 9, 10, 30, 58
paints, 26, 84, 88, 89
Palestine, 2
parallelism, 27
parameter, 16, 164
parasite, 61
parents, 51, 53
Paris, 91, 95
partial differential equations, 97, 158, 166, 173
particles, 54, 116, 158
particulate matter, 17
passive, 121, 122
pathogens, 115
pathways, 2, 3, 13, 35, 59, 67, 69, 73, 77
payroll, 37
PDEs, 173
peptide chain, 71
peptides, 71, 77
perception(s), 5, 6, 15, 16, 17, 18, 22, 24, 39, 90, 125, 126
performance, 30, 92, 95, 122, 131, 138, 141, 143, 144, 149, 151, 174
Periodic Table, 19
permeability, 159
permeable membrane, 112
permit, 44
perpetration, 15
personal, 38, 125
personality, 160
pessimism, 42
pesticide(s), 5, 20, 30
PET, 35
petroleum, 65, 91, 95, 155, 156, 175
petroleum products, 86
pH, 131, 135, 151, 154
pharmaceuticals, 35, 59
phase diagram, 161
phenylalanine, 71
phosphate, 77, 118
phosphorus, 111, 116, 117, 133
photosynthesis, 55, 117
photovoltaic, 106
physical environment, 86
physical properties, 160
physics, 17, 20, 54, 55, 97
physiology, 18
phytoplankton, 117
pigments, 81
plague, 94
planets, 11
planning, 26, 31, 45, 51, 53
plants, 48, 64, 83, 85, 106, 111, 115, 116, 117
plasma, 97
plastic industry, 87
plastic products, 48, 77, 78, 81, 82, 86, 89
plasticizer, 88
plastics, 21, 27, 68, 80, 81, 82, 84, 85, 86, 87, 94, 95, 106
platinum, 94, 118
Plato, 17
poison, 58
Poland, 29
police, 61
policy instruments, 48
political opposition, 45
pollutants, 87
pollution, 25, 89, 106, 117, 127, 133
polycarbonate, 80
polycyclic aromatic hydrocarbon, 86
polyethylene, 70
polymer(s), 69, 70, 73, 74, 77, 78, 80, 81, 83, 88, 90, 118
polymerization, 74, 78, 134, 135
polynomials, 99, 103, 157, 159, 161, 166, 168, 173
polypeptide, 71
polyphenols, 133, 135
polypropylene, 70
polystyrene, 88
polyurethane, 67, 69, 70, 71, 73, 74, 75, 76, 77, 78, 79, 80, 83, 88, 90, 91, 94
polyurethane foam, 76, 94
polyvinyl chloride, 88
pools, 88
poor, 34, 45
population, 84, 105, 108, 159, 161, 162, 173
population size, 108
portability, 118
potassium, 133
power, 11, 13, 26, 27, 44, 45, 48, 63, 69, 85, 158
power plants, 85
power sharing, 45
pragmatism, 4, 28
precipitation, 132
prediction, 16, 25
preference, 162
prejudice, 42
president, 46, 49
presidential elections, 45, 49
pressure, 20, 22, 80, 112, 122, 158, 159, 160, 166
prevention, 94, 127
price index, 46
prices, 43, 46, 47, 48
primacy, 12
privacy, 15
private investment, 48
private property, 11
probability, 13
producers, 5, 48
production, 11, 12, 22, 26, 31, 32, 35, 47, 48, 56, 73, 75, 81, 82, 84, 86, 87, 92, 105, 106, 108, 109, 110, 113, 116, 117, 118, 120, 123, 125, 132, 174
productivity, 29, 46, 117
profession, 12
profit, 12, 58, 59
program, 99, 103
programming, 29
promote, 33, 56
propaganda, 61, 89
proposition, 10, 49
prostate, 57, 94
prostate cancer, 94
protein(s), 29, 68, 70, 71, 72, 78, 83
protocol(s), 3, 68, 69
protons, 17, 54, 55
protozoa, 152
Prozac, 21, 30
Pseudomonas, 78, 117, 155
Pseudomonas aeruginosa, 78
Pseudomonas spp., 117
psychoactive, 40
psychoactive drug, 40
psychology, 53
puberty, 75
public health, 88
public investment, 48
public sector, 45
pulses, 5
pumps, 122
purification, 20, 105, 114, 117
PVC, 19, 20, 76, 81, 88

Q

quantum mechanics, 162
quarks, 158
quasars, 54
questioning, 3, 15, 32

R

racism, 41
radiation, 86, 121, 128
radio, 32
radius, 6
rain, 86
range, 9, 10, 16, 20, 28, 29, 106, 124, 131, 135, 138, 147, 159, 160, 161, 162
rate of return, 12
raw materials, 11, 13, 48, 80, 114
reaction rate, 4, 20
reading, 11, 39, 51, 52
real estate, 43, 44
reality, 2, 3, 6, 16, 25, 26, 32, 40, 45, 58, 125, 158
reasoning, 27
recession, 49
recognition, 34, 39
recovery, 46, 47, 49, 110, 112
recycling, 69, 80, 81, 83, 84, 85, 93, 96
redistribution, 49
reduction, 94, 108, 127, 132
refining, 20
reflection, 46, 91
refrigeration, 21, 105, 122, 123, 127
regenerate, 27, 83
regeneration, 78, 79, 83, 106
regression, 138, 146
regulations, 25
rejection, 11, 51
relationship(s), 6, 28, 36, 158, 160
relatives, 60, 63
relativity, 34, 55
relevance, 2, 42
reliability, 12, 15, 112
religion, 2, 17
remediation, 132, 133
renewable energy, 106
Renewable Energy Technology, 95
repair, 21
reprocessing, 80
research funding, 59
reserves, 47, 106
residues, 84, 86, 119
resilience, 79
resins, 82, 85, 88
resistance, 36, 42, 49, 58, 59, 61, 78
resolution, 136
resources, 31, 93, 96, 106, 174
respiration, 78, 87, 119
respiratory, 85, 89
rice, 10, 124
rice husk, 124
rickets, 57
rings, 68, 134
risk, 12, 37, 56, 57, 59, 88, 95
rods, 152
Rome, 42
room temperature, 118, 135
routing, 40
rubber, 82
rubbers, 48, 82

S

SA, 128
sabotage, 47, 48, 49
sacrifice, 33
safety, 12, 22, 89
sales, 43, 48, 59
salt, 111, 112
sample, 132, 135, 136, 137, 138, 139, 143, 144, 174
sampling, 12
Samsung, 69
satellite, 35
saturated fat, 30
Saudi Arabia, 111, 128
savings, 106
sawdust, 155
scaling, 160
scarcity, 111
school, 44, 51, 53
science, 1, 2, 4, 9, 10, 11, 12, 14, 17, 18, 20, 21, 22, 23, 29, 31, 33, 34, 36, 39, 52, 53, 59, 64, 89, 92, 97, 129, 155, 160
scientific knowledge, 33
scientific method, 33, 51
scientific understanding, 56
search, 41, 159
seawater, 111
security, 10
sediment, 116
sedimentation, 132
seed, 117
selecting, 27, 90, 93
semi-permeable membrane, 112
separation, 86, 159
sequencing, 15, 32, 59
series, 6, 22, 27, 38, 48, 87, 94, 99, 116, 159, 166, 168, 169, 170, 171, 172, 173
serine, 71
sewage, 107, 108, 110, 114, 116, 117, 127
shape, 55, 71, 80, 153
sharing, 45
sheep, 73, 77
shock, 51
shrimp, 116, 117
signals, 32, 40, 44
sign(s), 11, 15, 45, 49
similarity, 69, 90, 159, 174
simulation, 27, 28, 29, 35, 36, 94, 160
sites, 61, 89
skills, 64
skin, 16, 30, 57, 88, 89
skin cancer, 30
skin diseases, 88
sludge, 111, 116, 130
smog, 86
smoke, 76
smokers, 56
snakes, 40
social indicator, 49
socialization, 31
society, 31, 32, 42, 43, 60, 61, 62, 92
sodium, 113, 133, 151
sodium hydroxide, 151
software, 28, 136
soil, 30, 61, 63, 78, 86
solar energy, 5, 105, 106, 108, 120, 121, 123, 124, 127
solar system, 123
solid phase, 161
solid state, 161
solid waste, 81, 108, 119, 130
solidification, 159, 174
solubility, 112, 132
solvent, 86
sorting, 9, 16
sounds, 41
South America, 118
South Korea, 42
Soviet Union, 41
Spain, 49, 93
speciation, 11
species, 12, 29, 50, 51, 60, 68, 78, 116, 117, 154
spectrophotometer, 135
spectroscopy, 80
spectrum, 57, 91
speech, 2, 65, 92, 128
speed, 5, 27, 30, 45, 55
speed of light, 55
sperm, 30, 75, 88
stability, 174
stabilizers, 81, 86
stages, 46, 78
standards, 22, 25
statistics, 49
steel, 48
stereotyping, 42
sterile, 136
stock, 43, 135, 137
stoichiometry, 106
stomach, 34
storage, 121, 122
strain, 158
strategies, 60
strength, vii, 21, 54, 71
stress, 111, 158
strontium, 155
structural changes, 46, 49
students, 50, 51, 52, 53, 128
substitutes, 57, 58, 59
substitution, 22, 134
sugar, 29, 46
sugar mills, 46
suicidal behavior, 30
suicide, 33
sulfur, 86
sulfur oxides, 86
sulphur, 70
summer, 56
Sun, 11, 129, 130
supercritical, 80
supply, 26, 27, 86, 93, 114, 117
supply chain, 93
surplus, 43
surveillance, 61
survival, 60
sustainability, vii, 6, 25, 35, 67, 68, 69, 78, 90, 91, 92, 93, 94, 95, 96, 106, 112, 124, 125, 126, 127, 174
sustainable development, 68, 92, 95, 96, 125
sweets, 30
swelling, 89
switching, 57
symmetry, 35
symptom(s), 30, 34, 35
syndrome, 29, 34
synthesis, 20, 57, 68, 95
synthetic polymers, 77
systems, 11, 12, 13, 32, 35, 37, 40, 56, 90, 95, 106, 115, 116, 117, 121, 122, 134, 155, 164, 166

T

tactics, 24, 62
Taiwan, 65
tanks, 115, 116, 117, 122
tar, 84
targets, 39
tax collection, 47
taxation, 46
taxis, 1
tea, 131, 132, 133, 134, 135, 136, 138, 141, 143, 147, 149, 154, 155, 156
teachers, 51, 53
teaching, 51, 53
technological advancement, 68
technological developments, 41, 67, 93
technology, vii, 3, 4, 9, 10, 17, 21, 25, 28, 30, 31, 33, 34, 36, 41, 69, 82, 90, 91, 92, 106, 110, 119, 120, 124, 125, 126, 127, 129, 131, 159
telephone, 32
television, 32, 38
temperament, 63
temperature, 4, 20, 22, 68, 78, 80, 84, 85, 87, 89, 118, 122, 124, 135, 158, 159, 160
territory, 16, 50
terrorism, 37, 38, 39
Texas, 95
textiles, 70, 95, 115
theory, 11, 12, 13, 18, 26, 28, 34, 42, 50, 51, 86, 162
theory of simulation, 28
therapy, 65
thermal energy, 122
thermodynamic, 22
thinking, 17, 27, 33, 35, 42, 50, 59
threat, 19, 26, 76, 82, 132
three-dimensional space, 6, 32, 33
threonine, 71
threshold(s), 20, 28, 31, 136
thyroid, 75
TIA, 15
time, 2, 3, 5, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 27, 29, 31, 32, 33, 34, 37, 45, 46, 48, 51, 53, 54, 55, 57, 60, 61, 63, 68, 77, 79, 81, 82, 90, 109, 121, 124, 125, 131, 133, 136, 137, 153, 154, 157, 158, 159, 160, 161
time frame, 5
time use, 90
tobacco, 48
toluene, 89
tongue, 40, 59, 65
torture, 12, 40
total utility, 120
toxic gases, 80, 83
toxic metals, 132
toxic products, 78, 87
toxic substances, 117
toxicity, 20, 84, 88, 95, 123, 131, 132, 133
toxin, 88
toys, 81, 82, 88
trace elements, 83
tracking, 18
trade, 45, 59
tradition, 42
training, 36, 58
traits, 3, 4, 33, 34
transcendence, 160
transformation(s), 6, 32, 49, 53, 84, 93, 105
transition(s), 32, 34, 136
translation, 25, 50
translocation, 117
transmission, 30
transport, 31, 112
transportation, 45, 48
trees, 116, 119
trend, 34, 93, 138, 144
tribes, 2, 37
triggers, 28, 55
trimer, 75
turbulence, 159
turbulent, 159
Turkey, 5, 93
turnover, 43
tyrosine, 71

U

U.A.E., 92
UK, 6, 29
ultraviolet light, 56, 57
UN, 46, 68
uncertainty, 36
unemployment, 45, 46, 47, 48
unemployment rate, 45, 47
UNESCO, 91
uniform, 23, 56, 90
United Arab Emirates, 4
United Kingdom, 84
United Nations, 94, 95
United States, 9, 10, 37, 41, 43, 52, 62, 128
universe, 2, 6, 17, 160
universities, 52, 93
upholstery, 88, 89
uranium, 20
urethane, 68, 69, 70, 74, 75, 83
users, 41, 56, 85
UV, 56, 57, 115, 116, 128
UV light, 57

V

validity, 23
valine, 71
values, 13, 101, 143, 148, 151
vapor, 122
variable(s), 6, 13, 17, 18, 35, 157, 158, 159, 161, 162, 166, 171, 172, 173, 174
variance, 54
variation, 54, 157, 159, 161, 164
vegetable oil, 123, 127
vehicles, 38, 48
velocity, 158
Venezuela, 44, 45, 46, 47, 48, 49, 65
ventilation, 122
versatility, 119
vertebrates, 39
vestibular system, 40
victims, 40
violence, 39, 49
visas, 15
vision, 4, 17, 40, 41, 88, 136
visualization, 31
vitamin A, 56
vitamin C, 3, 56
vitamin D, 55, 57, 64
vitamin D deficiency, 57
vitamin E, 56
vitamins, 56, 133
vomiting, 89
voting, 42

W

Wales, 55
walking, 40, 50
Wall Street Journal, 39, 41, 56, 58, 64, 65
war, 36, 47, 62
Washington, 41, 46, 47, 65, 94
Washington Consensus, 46
waste disposal, 85
waste management, 69, 78
wastewater, 78, 108, 111, 115, 116, 117, 119, 130, 131, 132, 133, 154, 155
wastewater treatment, 78, 130
water heater, 122
weakness, 89
wealth, 39, 49
well-being, 124
wells, 174
Western countries, 42
Western Hemisphere, 60
wetlands, 115
White House, 49
wholesale, 39
wind, 73, 106
winning, 55
winter, 57, 117
wires, 61
Wisconsin, 40
women, 60, 61, 62, 63
wood, 30, 48, 76
wool, 67, 68, 69, 70, 71, 72, 73, 74, 77, 78, 79, 83, 84, 91
workers, 48, 86, 88
World Bank, 46
World Health Organization (WHO), 76, 96
World War, 12
World Wide Web, 10
worldview, 17
worry, 2
writing, 3

Y

yield, 20, 30, 109, 110
young men, 63

Z

zinc, 86
